Planet Igalia

April 24, 2025

Jesse Alama

Notes on AI for Mathematics and Theoretical Computer Science


In April 2025 I had the pleasure of attending an intense week-long workshop at the Simons Institute for the Theory of Computing entitled AI for Mathematics and Theoretical Computer Science. The event was organized jointly with the Simons Laufer Mathematical Sciences Institute (SLMath, for short). Those five fully packed days were a chance to learn a lot about cutting-edge ideas at the intersection of formal mathematics (primarily in Lean), AI, and powerful techniques for solving mathematical problems, such as SAT solvers and decision procedures (e.g., the Walnut system). Videos of the talks (but not of the training sessions) have been made available.

Every day, several dozen people were in attendance. Judging from the array of unclaimed badges (easily another several dozen), quite a lot more had signed up for the event but didn't come for one reason or another. It was inspiring to be in the room with so many people involved in these ideas. The training sessions in the afternoon had a great vibe, since so many people were learning and working together simultaneously.

It was great to connect with a number of people, of all stripes. Most of the presenters and attendees were coming from academia, with a minority, such as myself, coming from industry.

The organization was fantastic. We had talks in the morning and training in the afternoon. The final talk in the morning, before lunch, was an introduction to the afternoon training. The training topics were:

Day 1
Lean with María Inés de Frutos-Fernández
Day 2
Machine learning with Sean Welleck
Day 3
SAT solving (more examples) with Marijn Heule
Day 4
Tools & demos, and/or continue earlier training
Day 5
Open discussion

The links above point to the tutorial git repos for following along at home.

In the open discussion on the final afternoon, I raised my hand and outed myself as someone coming to the workshop from an industry perspective. Although I had already met a few people in industry prior to Friday, raising my hand and inviting fellow practitioners to discuss things led to meeting a few more.

The talks were fascinating; the selection of speakers and topics was excellent. Go ahead and take a look at the list of videos, pick out one or two of interest, grab a beverage of your choice, and enjoy.

April 24, 2025 08:25 AM

April 23, 2025

Carlos Bentzen

VVC/H.266 in GStreamer 1.26

With the release of GStreamer 1.26, we now have playback support for Versatile Video Coding (VVC/H.266). In this post, I’ll describe the pieces of the puzzle that enable this, the contributions that led to it, and hopefully provide a useful guideline to adding a new video codec in GStreamer.

Playing back VVC with GStreamer #

With GStreamer 1.26 and the relevant plugins enabled, one can play multimedia files containing VVC content, for example, by using gst-play-1.0:

gst-play-1.0 vvc.mp4

By using gst-play-1.0, a pipeline using playbin3 will be created and the appropriate elements will be auto-plugged to decode and present the VVC content. Here’s what such a pipeline looks like:

GStreamer pipeline for VVC playback

Although the pipeline is quite large, the specific bits we’ll focus on in this blog are inside parsebin and decodebin3:

qtdemux → ... → h266parse → ... → avdec_h266

I’ll explain what each of those elements is doing in the next sections.

(De)muxing files with VVC video #

To store multiple kinds of media (e.g. video, audio and captions) in a way that keeps them synchronized, we typically make use of container formats. This process is usually called muxing, and in order to play back the file we perform de-muxing, which separates the streams again. That is what the qtdemux element is doing in the pipeline above, by extracting the audio and video streams from the input MP4 file and exposing them as the audio_0 and video_0 pads.
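
If you want to quickly check what streams a given container holds before building a pipeline, gst-discoverer-1.0 (shipped with gst-plugins-base) prints what the demuxer exposes; the file name here is just a placeholder:

# list the container's streams, codecs and duration
gst-discoverer-1.0 vvc.mp4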

Support for muxing and demuxing VVC streams was added to the elements handling the MP4, Matroska and MPEG-TS container formats.

While playback relies on the demuxers, adding VVC support to the muxer elements also enables remuxing: changing the container format without transcoding the underlying streams.

Some examples of simplified re-muxing pipelines (only taking into account the VVC video stream):

MP4 to MKV:

gst-launch-1.0 filesrc location=vvc.mp4 ! qtdemux ! matroskamux ! filesink location=vvc.mkv

MKV to MPEG-TS:

gst-launch-1.0 filesrc location=vvc.mkv ! matroskademux ! h266parse ! mpegtsmux ! filesink location=vvc.ts

But why do we need h266parse when re-muxing from Matroska to MPEG-TS? That’s what I’ll explain in the next section.

Parsing and converting between VVC bitstream formats #

Video codecs like H.264, H.265, H.266 and AV1 may have different stream formats, depending on which container format is used to transport them. For VVC specifically, there are two main variants, as shown in the caps for h266parse:

Pad Templates:
SINK template: 'sink'
Availability: Always
Capabilities:
video/x-h266

SRC template: 'src'
Availability: Always
Capabilities:
video/x-h266
parsed: true
stream-format: { (string)vvc1, (string)vvi1, (string)byte-stream }
alignment: { (string)au, (string)nal }
  1. byte-stream, or so-called Annex B format (as in Annex B of the VVC specification): it separates the NAL units by start code prefixes (0x000001 or 0x00000001), and is the format used in MPEG-TS, and also when storing VVC bitstreams in files without containers (so-called “raw bitstream files”).

ℹ️ Note: It’s also possible to play raw VVC bitstream files with gst-play-1.0. That is achieved by the typefind element detecting the input file as VVC and playbin taking care of auto-plugging the elements (an explicit pipeline for such files is sketched below).

  2. vvc1 and vvi1: those formats use length field prefixes before each NAL unit. The difference between the two is the way that parameter sets (e.g. SPS, PPS, VPS NALs) are stored, which is reflected in the codec_data field in GStreamer caps. For vvc1, the parameter sets are stored as container-level metadata, while vvi1 allows the parameter sets to also be stored in the video bitstream.

The alignment field in the caps signals whether h266parse will collect multiple NALs into an Access Unit (AU) for a single GstBuffer, where an AU is the smallest unit for a decodable video frame, or whether each buffer will carry only one NAL.
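
As mentioned in the note above, raw Annex-B files can also be decoded with an explicit pipeline instead of relying on playbin. A minimal sketch, assuming the raw byte-stream file is named vvc.h266 (the file name is a placeholder):

# parse the Annex-B byte stream and decode it with the FFmpeg-based decoder
gst-launch-1.0 filesrc location=vvc.h266 ! h266parse ! avdec_h266 ! videoconvert ! autovideosink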

That explains why we needed the h266parse when converting from MKV to MPEG-TS: it’s converting from vvc1/vvi1 to byte-stream! So the gst-launch-1.0 command with more explicit caps would be:

gst-launch-1.0 filesrc location=vvc.mkv \
! matroskademux \
! video/x-h266,stream-format=vvc1 ! h266parse ! video/x-h266,stream-format=byte-stream \
! mpegtsmux ! filesink location=vvc.ts

The base library for parsing VVC streams and the h266parse element were contributed by Intel, with initial support only for the byte-stream format, and I added support for the vvc1 and vvi1 stream formats.

VVC decoder elements #

As of today, there are three VVC decoder implementations available in GStreamer: FFmpeg, VVdeC and VAAPI.

FFmpeg #

FFmpeg 7.1 has a native VVC decoder which is considered stable. In GStreamer 1.26, we have allowlisted that decoder in gst-libav, and it is now exposed as the avdec_h266 element.

VVdeC #

Another software decoder implementation is VVdeC. The vvdec element has been merged as part of gst-plugins-rs. To achieve that, I wrote safe Rust bindings for VVdeC, which are also published on crates.io.
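
With the plugin from gst-plugins-rs installed, the element can be dropped into the same kind of pipeline in place of avdec_h266; a minimal sketch (the file name is a placeholder, and caps negotiation is left to h266parse):

# decode with the VVdeC-based element instead of the FFmpeg one
gst-launch-1.0 filesrc location=vvc.mp4 ! qtdemux ! h266parse ! vvdec ! videoconvert ! autovideosink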

VAAPI #

Intel has added the vah266dec element in GStreamer 1.26, which enables hardware-accelerated VVC decoding on Intel Lunar Lake CPUs. However, it still has a rank of 0 in GStreamer 1.26, so in order to test it out, one would need to, for example, manually set GST_PLUGIN_FEATURE_RANK.
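
For a quick test, the rank can be raised just for a single run, so that playbin picks vah266dec over the software decoders (this mirrors what the demo below does with an exported variable):

GST_PLUGIN_FEATURE_RANK=vah266dec:max gst-play-1.0 vvc.mp4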

Similar to h266parse, initially vah266dec was added with support for only the byte-stream format. I implemented support for the vvc1 and vvi1 modes in the base h266decoder class, which fixes the support for them in vah266dec as well. However, it hasn’t yet been merged and I don’t expect it to be backported to 1.26, so likely it will only be available in GStreamer 1.28.

Here’s a quick demo of vah266dec in action on an ASUS ExpertBook P5. In this screencast, I perform the following actions:

  1. Run vainfo and display the presence of VVC decoding profile
  2. gst-inspect vah266dec
  3. export GST_PLUGIN_FEATURE_RANK='vah266dec:max'
  4. Start playback of six simultaneous 4K@60 DASH VVC streams. The stream in question is the classic Tears of Steel, sourced from the DVB VVC test streams.
  5. Run nvtop, showing GPU video decoding & CPU usage per process.
  6. Show pipeline dump via gst-dots-viewer.

https://www.youtube.com/watch?v=D-UajfdWBqg

Testing decoder conformance with Fluster #

A tool that is handy for testing the new decoder elements is Fluster. It simplifies the process of testing decoder conformance and comparing decoders by using test suites that are adopted by the industry. It’s worth checking it out, and it’s already common practice to test new decoders with this test framework. I added the GStreamer VVC decoders to it: vvdec, avdec_h266 and vah266dec.
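
A rough sketch of what a Fluster run looks like; the decoder and test suite names below are assumptions on my part, so check the output of the list command first:

# list the decoders and test suites Fluster knows about
./fluster.py list

# download a VVC test suite and run it against one of the GStreamer decoders
./fluster.py download JVET-VVC_draft6
./fluster.py run -d GStreamer-H.266-VVdeC-Gst1.0 -ts JVET-VVC_draft6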

TODO: Encoding VVC with GStreamer and VVenC #

We’re still missing the ability to encode VVC video in GStreamer. I have a work-in-progress branch that adds the vvenc element, by using VVenC and safe Rust bindings (similarly to the vvdec element), but it still needs some work. I intend to work on it during the GStreamer Spring Hackfest 2025 to make it ready to submit upstream 🤞

April 23, 2025 12:00 AM

April 22, 2025

Melissa Wen

2025 FOSDEM: Don't let your motivation go, save time with kworkflow

2025 was my first year at FOSDEM, and I can say it was an incredible experience where I met many colleagues from Igalia who live around the world, and also many friends from the Linux display stack who are part of my daily work and contributions to DRM/KMS. In addition, I met new faces and recognized others with whom I had interacted on some online forums and we had good and long conversations.

During FOSDEM 2025 I had the opportunity to present about kworkflow in the kernel devroom. Kworkflow is a set of tools that help kernel developers with their routine tasks and it is the tool I use for my development tasks. In short, every contribution I make to the Linux kernel is assisted by kworkflow.

The goal of my presentation was to spread the word about kworkflow. I aimed to show how the suite consolidates good practices and recommendations of the kernel workflow into short commands. These commands are easy to configure and memorize for your current work setup, or for your multiple setups.

For me, Kworkflow is a tool that accommodates the needs of different agents in the Linux kernel community. Active developers and maintainers are the main target audience for kworkflow, but it is also inviting for users and user-space developers who just want to report a problem and validate a solution without needing to know every detail of the kernel development workflow.

Something I didn’t emphasize during the presentation, but would like to correct here, is that the main author and developer of kworkflow is my colleague at Igalia, Rodrigo Siqueira. To be honest, my contributions are mostly requesting and validating new features, fixing bugs, and sharing scripts to increase feature coverage.

So, the video and slide deck of my FOSDEM presentation are available for download here.

And, as usual, in this blog post you will find the script of this presentation and a more detailed explanation of the demo presented there.


Kworkflow at FOSDEM 2025: Speaker Notes and Demo

Hi, I’m Melissa, a GPU kernel driver developer at Igalia and today I’ll be giving a very inclusive talk to not let your motivation go by saving time with kworkflow.

So, you’re a kernel developer, or you want to be a kernel developer, or you don’t want to be a kernel developer. But you’re all united by a single need: you need to validate a custom kernel with just one change, and you need to verify that it fixes or improves something in the kernel.

And that’s a given change for a given distribution, or for a given device, or for a given subsystem…

Look at this diagram and try to figure out the number of subsystems and related work trees you can handle in the kernel.

So, whether you are a kernel developer or not, at some point you may come across this type of situation:

There is a userspace developer who wants to report a kernel issue and says:

  • Oh, there is a problem in your driver that can only be reproduced by running this specific distribution. And the kernel developer asks:
  • Oh, have you checked if this issue is still present in the latest kernel version of this branch?

But the userspace developer has never compiled and installed a custom kernel before. So they have to read a lot of tutorials and kernel documentation to create a kernel compilation and deployment script. Finally, the reporter manages to compile and deploy a custom kernel and reports:

  • Sorry for the delay, this is the first time I have installed a custom kernel. I am not sure if I did it right, but the issue is still present in the kernel of the branch you pointed out.

And then the kernel developer needs to reproduce this issue on their side, but they have never worked with this distribution, so they create yet another new script, essentially the same one the reporter had just written.

What’s the problem with this situation? The problem is that you keep creating new scripts!

Every time you change distribution, architecture, hardware or project (even within the same company, switching to a different project may change the development setup), you create another script for your new kernel development workflow!

You know, you have a lot of babies, you have a collection of “my precious scripts”, like Sméagol (Lord of the Rings) with the precious ring.

Instead of creating and accumulating scripts, save yourself time with kworkflow. Here is a typical script that many of you may have. This is a Raspberry Pi 4 script and contains everything you need to memorize to compile and deploy a kernel on your Raspberry Pi 4.

With kworkflow, you only need to memorize two commands, and those commands are not specific to the Raspberry Pi. They are the same commands across different architectures, kernel configurations and target devices.
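
For reference, a minimal sketch of those two commands, exactly as they appear later in the demo (the --reboot flag simply reboots the target after deploying):

kw b            # build the kernel with the settings of the current setup
kw d --reboot   # deploy the image to the target machine and reboot it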

What is kworkflow?

Kworkflow is a collection of tools and software combined to:

  • Optimize Linux kernel development workflow.
  • Reduce time spent on repetitive tasks, since we are spending our lives compiling kernels.
  • Standardize best practices.
  • Ensure reliable data exchange across the kernel workflow. For example: when two people describe the same setup but are not seeing the same thing, kworkflow can ensure both actually have the same kernel, modules and options enabled.

I don’t know if you will get this analogy, but kworkflow is for me a megazord of scripts. You are combining all of your scripts to create a very powerful tool.

What are the main features of kworkflow?

There are many, but these are the most important for me:

  • Build & deploy custom kernels across devices & distros.
  • Handle cross-compilation seamlessly.
  • Manage multiple architectures, settings and target devices in the same work tree.
  • Organize kernel configuration files.
  • Facilitate remote debugging & code inspection.
  • Standardize Linux kernel patch submission guidelines. You don’t need to double-check the documentation, nor does Greg need to tell you that you are not following the Linux kernel guidelines.
  • Upcoming: Interface to bookmark, apply and “reviewed-by” patches from mailing lists (lore.kernel.org).

This is the list of commands you can run with kworkflow. The first subset is to configure your tool for various situations you may face in your daily tasks.

# Manage kw and kw configurations
kw init             - Initialize kw config file
kw self-update (u)  - Update kw
kw config (g)       - Manage kw configurations

The second subset is to build and deploy custom kernels.

# Build & Deploy custom kernels
kw kernel-config-manager (k) - Manage kernel .config files
kw build (b)        - Build kernel
kw deploy (d)       - Deploy kernel image (local/remote)
kw bd               - Build and deploy kernel

We have some tools to manage and interact with target machines.

# Manage and interact with target machines
kw ssh (s)          - SSH support
kw remote (r)       - Manage machines available via ssh
kw vm               - QEMU support

To inspect and debug a kernel.

# Inspect and debug
kw device           - Show basic hardware information
kw explore (e)      - Explore string patterns in the work tree and git logs
kw debug            - Linux kernel debug utilities
kw drm              - Set of commands to work with DRM drivers

To automatize best practices for patch submission, like code style, maintainers, and the correct list of recipients and mailing lists for the change, ensuring we send the patch to whoever is interested in it.

# Automatize best practices for patch submission
kw codestyle (c)    - Check code style
kw maintainers (m)  - Get maintainers/mailing list
kw send-patch       - Send patches via email

And the last one, the upcoming patch hub.

# Upcoming
kw patch-hub        - Interact with patches (lore.kernel.org)

How can you save time with Kworkflow?

So how can you save time building and deploying a custom kernel?

First, you need a .config file.

  • Without kworkflow: You may be manually extracting and managing .config files from different targets, saving them with different suffixes to link the kernel to the target device or distribution (or any descriptive suffix to help identify which is which), or even copying and pasting them from somewhere.
  • With kworkflow: you can use the kernel-config-manager command, or simply kw k, to store, describe and retrieve a specific .config file very easily, according to your current needs.

Then you want to build the kernel:

  • Without kworkflow: You are probably now memorizing a combination of commands and options.
  • With kworkflow: you just need kw b (kw build) to build the kernel with the correct settings for cross-compilation, compilation warnings, cflags, etc. It also shows some information about the kernel, like the number of modules.

Finally, to deploy the kernel in a target machine.

  • Without kworkflow: You might be doing things like: SSH connecting to the remote machine, copying and removing files according to distributions and architecture, and manually updating the bootloader for the target distribution.
  • With kworkflow: you just need kw d, which does a lot of things for you, like: deploying the kernel, preparing the target machine for the new installation, listing available kernels and uninstalling them, creating a tarball, rebooting the machine after deploying the kernel, etc.

You can also save time on debugging kernels locally or remotely.

  • Without kworkflow: you SSH in, manually set up and enable traces, and copy & paste logs.
  • With kworkflow: more straightforward access to debug utilities: events, trace, dmesg (see the example just below).
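
For instance, this invocation from the demo later in this post runs a command on the target and follows the dmesg output it produces:

kw debug --dmesg --follow --cmd="modprobe vkms"   # load vkms on the target and stream the related dmesg lines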

You can save time on managing multiple kernel images in the same work tree.

  • Without kworkflow: you may be cloning the same repository multiple times so you don’t lose compiled files when changing kernel configuration or compilation options, and manually managing build and deployment scripts.
  • With kworkflow: you can use kw env to isolate multiple contexts in the same worktree as environments, so you can keep different configurations in the same worktree and switch between them easily without losing anything from the last time you worked in a specific context.
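
A minimal sketch of that kw env flow, using the same environment names as the demo below; --list and --use appear in the demo script, while --create is an assumption on my part about how a new environment gets registered, so double-check kw's documentation:

kw env --list            # list the environments available in this work tree
kw env --create RPI_64   # assumed flag: create a new environment for a target
kw env --use RPI_64      # switch to it, bringing its .config and build settings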

Finally, you can save time when submitting kernel patches. In kworkflow, you can find everything you need to wrap your changes in patch format and submit them to the right list of recipients, those who can review, comment on, and accept your changes.

This is a demo that the lead developer of the kw patch-hub feature sent me. With this feature, you will be able to check out a series from a specific mailing list, bookmark those patches for validation, and, when you are satisfied with the proposed changes, automatically submit a Reviewed-by for the whole series to the mailing list.


Demo

Now a demo of how to use kw environment to deal with different devices, architectures and distributions in the same work tree without losing compiled files, build and deploy settings, .config file, remote access configuration and other settings specific for those three devices that I have.

Setup

  • Three devices:
    • laptop (debian x86 intel local)
    • SteamDeck (steamos x86 amd remote)
    • RaspberryPi 4 (raspbian arm64 broadcom remote)
  • Goal: To validate a change on DRM/VKMS using a single kernel tree.
  • Kworkflow commands:
    • kw env
    • kw d
    • kw bd
    • kw device
    • kw debug
    • kw drm

Demo script

In the same terminal and worktree.

First target device: Laptop (debian|x86|intel|local)
$ kw env --list # list environments available in this work tree
$ kw env --use LOCAL # select the environment of local machine (laptop) to use: loading pre-compiled files, kernel and kworkflow settings.
$ kw device # show device information
$ sudo modinfo vkms # show VKMS module information before applying kernel changes.
$ <open VKMS file and change module info>
$ kw bd # compile and install kernel with the given change
$ sudo modinfo vkms # show VKMS module information after kernel changes.
$ git checkout -- drivers
Second target device: RaspberryPi 4 (raspbian|arm64|broadcom|remote)
$ kw env --use RPI_64 # move to the environment for a different target device.
$ kw device # show device information and kernel image name
$ kw drm --gui-off-after-reboot # set the system to not load graphical layer after reboot
$ kw b # build the kernel with the VKMS change
$ kw d --reboot # deploy the custom kernel in a Raspberry Pi 4 with Raspbian 64, and reboot
$ kw s # connect with the target machine via ssh and check the kernel image name
$ exit
Third target device: SteamDeck (steamos|x86|amd|remote)
$ kw env --use STEAMDECK # move to the environment for a different target device
$ kw device # show device information
$ kw debug --dmesg --follow --history --cmd="modprobe vkms" # run a command and show the related dmesg output
$ kw debug --dmesg --follow --history --cmd="modprobe -r vkms" # run a command and show the related dmesg output
$ <add a printk with a random msg to appear on dmesg log>
$ kw bd # deploy and install custom kernel to the target device
$ kw debug --dmesg --follow --history --cmd="modprobe vkms" # run a command and show the related dmesg output after build and deploy the kernel change

Q&A

Most of the questions raised at the end of the presentation were actually suggestions and additions of new features to kworkflow.

The first participant, who is also a kernel maintainer, asked about two features: (1) automating the retrieval of patches from patchwork (or lore) and triggering the existing build, deploy and validation workflow on them, and (2) bisecting support. They are both very interesting features. The first one fits well into the patch-hub subproject, which is under development, and I had actually made a similar request a couple of weeks before the talk. The second is an already existing request in the kworkflow GitHub project.

Another request was to use kexec and avoid rebooting the kernel for testing. Reviewing my presentation, I realized I wasn’t very clear that kworkflow doesn’t support kexec. As I replied, what it does is install the modules, and you can load/unload them for validations; but for built-in parts, you need to reboot the kernel.

Another two questions: one about Android Debug Bridge (ADB) support instead of SSH, and another about supporting alternative ways of booting when the custom kernel ends up broken but you only have one kernel image on the device. Kworkflow doesn’t manage this yet, but I agree it is a very useful feature for embedded devices. On the Raspberry Pi 4, kworkflow mitigates the issue by preserving the distro kernel image and using the config.txt file to set a custom kernel for booting. There is no ADB support either, and as I don’t currently see kw users working with Android, I don’t think we will have this support any time soon, unless we find new volunteers and increase the pool of contributors.

The last two questions were regarding the status of b4 integration, which is under development, and other debugging features that the tool doesn’t support yet.

Finally, when Andrea and I were swapping turns on the stage, he suggested adding support for virtme-ng to kworkflow. So I opened an issue to track this feature request on the project’s GitHub.

With all these questions and requests, I could see the general need for a tool that integrates the variety of kernel developer workflows, as proposed by kworkflow. Also, there are still many cases to be covered by kworkflow.

Despite the high demand, this is a completely voluntary project and it is unlikely that we will be able to meet these needs given the limited resources. We will keep trying our best in the hope we can increase the pool of users and contributors too.

April 22, 2025 07:30 PM

April 21, 2025

Carlos García Campos

Graphics improvements in WebKitGTK and WPEWebKit after the switch to Skia

In my previous post, when I introduced the switch to Skia for 2D rendering, I explained that we replaced Cairo with Skia while keeping mostly the same architecture. This alone was an important improvement in performance, but the graphics implementation was still designed for Cairo and CPU rendering. Once we considered the switch to Skia stable, we started to work on changes to take more advantage of Skia and GPU rendering to improve the performance even more. In this post I’m going to present some of those improvements, and others not directly related to Skia and GPU rendering.

Explicit fence support

This is related to the DMA-BUF renderer used by the GTK port and WPE when using the new API. The composited buffer is shared as a DMA-BUF between the web and UI processes. Once the web process finished the composition we created a fence and waited for it, to make sure that when the UI process was notified that the composition was done the buffer was actually ready. This approach was safe, but slow. In 281640@main we introduced support for explicit fencing to the WPE port. When possible, an exportable fence is created, so that instead of waiting for it immediately, we export it as a file descriptor that is sent to the UI process as part of the message that notifies that a new frame has been composited. This unblocks the web process as soon as composition is done. When supported by the platform, for example in WPE under Wayland when the zwp_linux_explicit_synchronization_v1 protocol is available, the fence file descriptor is passed to the platform implementation. Otherwise, the UI process asynchronously waits for the fence by polling the file descriptor before passing the buffer to the platform. This is what we always do in the GTK port since 281744@main. This change improved the score of all MotionMark tests, see for example multiply.

Enable MSAA when available

In 282223@main we enabled support for MSAA when possible, in the WPE port only, because this matters most for embedded devices, where we use 4 samples, providing good enough quality with better performance. This change improved the MotionMark tests that use 2D canvas, like canvas arcs, paths and canvas lines. You can see here the change in paths when run on a Raspberry Pi 4 with 64-bit WPE.

Avoid textures copies in accelerated 2D canvas

As I also explained in the previous post, when 2D canvas is accelerated we now use a dedicated layer that renders into a texture that is copied to be passed to the compositor. In 283460@main we changed the implementation to use a CoordinatedPlatformLayerBufferNativeImage to handle the canvas texture and avoid the copy, directly passing the texture to the compositor. This improved the MotionMark tests that use 2D canvas. See canvas arcs, for example.

Introduce threaded GPU painting mode

In the initial implementation of the GPU rendering mode, layers were painted in the main thread. In 287060@main we moved the rendering task to a dedicated thread when using the GPU, with the same threaded rendering architecture we have always used for CPU rendering, but limited to 1 worker thread. This improved the performance of several MotionMark tests like images, suits and multiply. See images.

Update default GPU thread settings

Parallelization is not as important for GPU rendering as it is for CPU rendering, but we still realized that we got better results by slightly increasing the number of worker threads when doing GPU rendering. In 290781@main we increased the limit of GPU worker threads to 2 for systems with at least 4 CPU cores. This mainly improved images and suits in MotionMark. See suits.

Hybrid threaded CPU+GPU rendering mode

We had either GPU or CPU worker threads for layer rendering. On systems with 4 CPU cores or more we now have 2 GPU worker threads. When those 2 threads are busy rendering, why not use the CPU to render other pending tiles? The same applies when doing CPU rendering: when all workers are busy, could we use the GPU to render other pending tasks? We tried, and it turned out to be a good idea, especially on embedded devices. In 291106@main we introduced the hybrid mode, giving priority to GPU or CPU workers depending on the default rendering mode, and also taking into account special cases like HiDPI, where we are always scaling and therefore always prefer the GPU. This improved multiply, images and suits. See images.

Use Skia API for display list implementation

When rendering with Cairo and threaded rendering enabled, we use our own implementation of display lists specific to Cairo. When switching to Skia we thought it was a good idea to use the WebCore display list implementation instead, since it’s a cross-platform implementation shared with other ports. But we realized this implementation is not yet ready to support multiple threads, because it holds references to WebCore objects that are not thread safe: the main thread might change those objects before they have been processed by painting threads. So, we decided to try the Skia API (SkPicture), which supports recording in the main thread and replaying from worker threads. In 292639@main we replaced the WebCore display list usage with SkPicture. This was expected to be a neutral change in terms of performance, but it surprisingly improved several MotionMark tests like leaves, multiply and suits. See leaves.

Use Damage to track the dirty region of GraphicsLayer

Every time there’s a change in a GraphicsLayer and it needs to be repainted, it’s notified and the area that changed is included, so that we only render the parts of the layer that changed. That’s what we call the layer dirty region. When there are many small updates in a layer we can end up with lots of dirty regions on every layer flush. We used to have a limit of 32 dirty regions per layer, so that when more than 32 were added we just united them into the first dirty area. This limit was removed because we always unite the dirty areas for the same tiles when processing the updates to prepare the rendering tasks. However, we also tried to avoid handling the same dirty region twice, so every time a new dirty region was added we iterated the existing regions to check if it was already present. Without the 32 regions limit, that means we ended up iterating a potentially very long list on every dirty region addition. The damage propagation feature uses a Damage class to efficiently handle dirty regions, so we thought we could reuse it to track the layer dirty region, bringing back the limit but uniting regions in a more efficient way than always using the first dirty area of the list. It also allowed us to remove the check for duplicated areas in the list. This change was added in 292747@main and improved the performance of the MotionMark leaves and multiply tests. See leaves.

Record all dirty tiles of a layer once

After the switch to SkPicture for the display list implementation, we realized that this API would also allow us to record a graphics layer once, using the bounding box of the dirty region, and then replay it multiple times on worker threads for every dirty tile. Recording can be a very heavy operation, especially when there are shadows or filters, and it was always done for every tile due to the limitations of the previous display list implementation. In 292929@main we introduced the change, with improvements in the MotionMark leaves and multiply tests. See multiply.

MotionMark results

I’ve shown here the improvements of these changes in some of the MotionMark tests. I have to say that some of those changes also introduced small regressions in other tests, but the global improvement is still noticeable. Here is a table with the scores of all tests before these improvements and on the current main branch, run by WPE MiniBrowser on a RaspberryPi 4 (64-bit).

Test          Score July 2024   Score April 2025
Multiply      501.17            684.23
Canvas arcs   140.24            828.05
Canvas lines  1613.93           3086.60
Paths         375.52            4255.65
Leaves        319.31            470.78
Images        162.69            267.78
Suits         232.91            445.80
Design        33.79             64.06

What’s next?

There’s still quite a lot of room for improvement, so we are already working on other features and exploring ideas to continue improving the performance. Some of those are:

  • Damage tracking: this feature is already present, but disabled by default because it’s still work in progress. We currently use the damage information to only paint the areas of every layer that changed. But then we always compose a whole frame inside WebKit that is passed to the UI process to be presented on screen. It’s possible to use the damage information to improve both the composition inside WebKit and the presentation of the composited frame on the screen. For more details about this feature, read Pawel’s awesome blog post about it.
  • Use DMA-BUF for tile textures to improve pixel transfer operations: We currently use DMA-BUF buffers to share the composited frame between the web and UI processes. We are now exploring the idea of also using DMA-BUF for the textures used by the WebKit compositor to generate the frame. This would improve the performance of pixel transfer operations; for example, when doing CPU rendering we need to upload the dirty regions from main memory to a compositor texture on every composition. With DMA-BUF backed textures we can map the buffer into main memory and paint with the CPU directly into the mapped buffer.
  • Compositor synchronization: We plan to try to improve the synchronization of the WebKit compositor with the system vblank and the different sources of composition (painted layers, video layers, CSS animations, WebGL, etc.)

 

by carlos garcia campos at April 21, 2025 08:27 AM

April 16, 2025

Pawel Lampe

Introduction to damage propagation in WPE and GTK WebKit ports

Damage propagation is an optional WPE/GTK WebKit feature that — when enabled — reduces browser’s GPU utilization at the expense of increased CPU and memory utilization. It’s very useful especially in the context of low- and mid-end embedded devices, where GPUs are most often not too powerful and thus become a performance bottleneck in many applications.

Basic definitions #

The only two terms that require explanation to understand the feature on a surface level are the damage and its propagation.

The damage #

In computer graphics, the damage term is usually used in the context of repeatable rendering and means essentially “the region of a rendered scene that changed and requires repainting”.

In the context of WebKit, the above definition may be specialized a bit, as WebKit’s rendering engine is about rendering web content to frames (passed further to the platform) in response to changes within a web page. Thus the definition of WebKit’s damage refers, more specifically, to “the region of the web page view that changed since the previous frame and requires repainting”.

On the implementation level, the damage is almost always a collection of rectangles that cover the changed region. This is exactly the case for WPE and GTK WebKit ports.

To better understand what the above means, it’s recommended to carefully examine the screenshot of GTK MiniBrowser below, which depicts the rendering of the poster circle demo with the damage visualizer activated:

GTK MiniBrowser screenshot showing the damage visualization.

In the image above, one can see the following elements:

  • the web page view — marked with a rectangle stroked to magenta color,
  • the damage — marked with red rectangles,
  • the browser elements — everything that lies above the rectangle stroked to a magenta color.

What the above image depicts in practice is that, during that particular frame rendering, the area highlighted in red (the damage) has changed and needs to be repainted. Thus — as expected — only the moving parts of the demo require repainting. It’s also worth emphasizing how small a fraction of the web page view requires repainting in this case. Hence one can imagine the gains from the reduced amount of painting.

The propagation #

Normally, the job of the rendering engine is to paint the contents of a web page view to a frame (or buffer, in more general terms) and provide such a rendering result to the platform on every scene rendering iteration — which usually happens 60 times per second. Without the damage propagation feature, the whole frame (the whole web page view) is always marked as changed. Therefore, the platform has to perform a full update of the pixels it has 60 times per second.

While in most of the use cases, the above approach is good enough, in the case of embedded devices with less powerful GPUs, this can be optimized. The basic idea is to produce the frame along with the damage information i.e. a hint for the platform on what changed within the produced frame. With the damage provided (usually as an array of rectangles), the platform can optimize a lot of its operations as — effectively — it can perform just a partial update of its internal memory. In practice, this usually means that fewer pixels require updating on the screen.

For the above optimization to work, the damage has to be calculated by the rendering engine for each frame and then propagated along with the produced frame up to its final destination. Thus the damage propagation can be summarized as continuous damage calculation and propagation throughout the web engine.

Damage propagation pipeline #

Once the general idea has been highlighted, it’s possible to examine the damage propagation in more detail. Before reading further, however, it’s highly recommended for the reader to go carefully through the famous “WPE Graphics architecture” article that gives a good overview of the WebKit graphics pipeline in general and which introduces the basic terminology used in that context.

Pipeline overview #

The information on the visual changes within the web page view has to travel a very long way before it reaches the final destination. As it traverses the thread and process boundaries in an orderly manner, it can be summarized as forming a pipeline within the broader graphics pipeline. The image below presents an overview of such damage propagation pipeline:

Damage propagation pipeline overview.

Pipeline details #

This pipeline starts with the changes to the web page view visual state (RenderTree) being triggered by one of many possible sources. Such sources may include:

  • User interactions — e.g. moving mouse cursor around (and hence hovering elements etc.), typing text using keyboard etc.
  • Web API usage — e.g. the web page changing DOM, CSS etc.
  • multimedia — e.g. the media player in a playing state,
  • and many others.

Once the changes are induced for certain RenderObjects, their visual impact is calculated and encoded as rectangles called dirty rectangles, as they require re-painting within the GraphicsLayer that the particular RenderObject maps to. At this point, the visual changes may simply be called layer damage, as the dirty rectangles are stored in the layer coordinate space and describe what changed within that certain layer since the last frame was rendered.

The next step in the pipeline is passing the layer damage of each GraphicsLayer (GraphicsLayerCoordinated) to the WebKit’s compositor. This is done along with any other layer updates and is mostly covered by the CoordinatedPlatformLayer. The “coordinated” prefix of that name is not without meaning. As threaded accelerated compositing is usually used nowadays, passing the layer damage to the WebKit’s compositor must be coordinated between the main thread and the compositor thread.

When the layer damage of each layer is passed to the WebKit’s compositor, it’s stored in the TextureMapperLayer that corresponds to the given layer’s CoordinatedPlatformLayer. With that — and with all other layer-level updates — the WebKit’s compositor can start computing the frame damage i.e. damage that is the final damage to be passed to the very end of the pipeline.

The first step to building frame damage is to process the layer updates. Layer updates describe changes of various layer properties such as size, position, transform, opacity, background color, etc. Many of those updates have a visual impact on the final frame, therefore a portion of frame damage must be inferred from those changes. For example, a layer’s transform change that effectively changes the layer position means that the layer visually disappears from one place and appears in the other. Thus the frame damage has to account for both the layer’s old and new position.

Once the layer updates are processed, WebKit’s compositor has a full set of information to take the layer damage of each layer into account. Thus, in the second step, WebKit’s compositor traverses the tree formed out of TextureMapperLayer objects and collects their layer damage. Once the layer damage of a certain layer is collected, it’s transformed from the layer coordinate space into a global coordinate space so that it can be added to the frame damage directly.

After those two steps, the frame damage is ready. At this point, it can also serve a couple of extra use cases, such as feeding the damage visualizer shown in the screenshot earlier.

Eventually — regardless of extra uses — the WebKit’s compositor composes the frame and sends it (a handle to it) to the UI Process along with frame damage using the IPC mechanism.

In the UI process, there are basically two options determining the frame damage’s destiny — it can be either consumed or ignored — depending on the platform-facing implementation at the moment of writing.

Once the frame damage is consumed, it means that it reached the platform and thus the pipeline ends for that frame.

Damage propagation pipeline details.

Current status of the implementation #

At the moment of writing, the damage propagation feature is run-time-disabled by default (the PropagateDamagingInformation feature flag) and compile-time-enabled by default for the GTK and WPE (with the new platform API) ports. Overall, the feature works pretty well in the majority of real-world scenarios. However, there are still some uncovered code paths that lead to visual glitches, so it’s fair to say the feature is still a work in progress. The work, however, is pretty advanced. Moreover, the feature is set to a testable state and is thus active throughout all the layout test runs on CI. Not only is the feature exercised by every layout test that tests any kind of rendering, it also has quite a lot of dedicated layout tests, not to mention the unit tests covering the Damage class.

In terms of functionalities, when the feature is enabled it:

  • activates the damage propagation pipeline and hence propagates the damage up to the platform,
  • activates additional WebKit-compositor-level optimizations.

Damage propagation #

When the feature is enabled, the main goal is to activate the damage propagation pipeline so that the damage can eventually be provided to the platform. However, in reality, a substantial part of the pipeline is always active, regardless of the feature being enabled or compiled in. This part of the pipeline ends before the damage reaches CoordinatedPlatformLayer and is always active because it has been used for layer-level optimizations for a long time. More specifically — this part of the pipeline existed long before the damage propagation feature and was using layer damage to optimize the painting of layers into intermediate surfaces.

Because of the above, when the feature is enabled, only the part of the pipeline that starts with CoordinatedPlatformLayer is activated. It is, however, still a significant portion of the pipeline and therefore it implies additional CPU/memory costs.

WebKit’s compositor optimizations #

When the feature is activated and the damage flows through the WebKit’s compositor, it creates a unique opportunity for the compositor to utilize that information and reduce the amount of painting/compositing it has to perform. At the moment of writing, the GTK/WPE WebKit’s compositor is using the damage to optimize the following:

  • to apply a global glScissor that defines the smallest possible clipping rect for all the painting it does — thus reducing the amount of painting,
  • to reduce the amount of painting when compositing the tiles of the layers using tiled backing stores.

Detailed descriptions of the above optimizations are well beyond the scope of this article and thus will be provided in one of the next articles on the subject of damage propagation.

Trying it out #

As mentioned in the above sections, the feature only works in the GTK and the new-platform-API-powered WPE ports. This means that:

  • In the case of GTK, one can use MiniBrowser or any up-to-date GTK-WebKit-derived browser to test the feature.
  • In the case of WPE with the new WPE platform API the cog browser cannot be used as it uses the old API. Therefore, one has to use MiniBrowser with the --use-wpe-platform-api argument to activate the new WPE platform API.

Moreover, as the feature is run-time-disabled by default, it’s necessary to activate it. In the case of MiniBrowser, the switch is --features=+PropagateDamagingInformation.

Building & running the GTK MiniBrowser #

For quick testing, it’s highly recommended to use the latest revision of WebKit@main with wkdev SDK container and with GTK port. Assuming one has set up the container, the commands to build and run GTK’s MiniBrowser are as follows:

# building:
./Tools/Scripts/build-webkit --gtk --release

# running with visualizer
WEBKIT_SHOW_DAMAGE=1 \
Tools/Scripts/run-minibrowser \
--gtk --release --features=+PropagateDamagingInformation \
'https://webkit.org/blog-files/3d-transforms/poster-circle.html'

# running without visualizer
Tools/Scripts/run-minibrowser \
--gtk --release --features=+PropagateDamagingInformation \
'https://webkit.org/blog-files/3d-transforms/poster-circle.html'

Building & running the WPE MiniBrowser #

Alternatively, a WPE port can be used. Assuming some Wayland display is available, the commands to build and run the MiniBrowser are the following:

# building:
./Tools/Scripts/build-webkit --wpe --release

# running with visualizer
WEBKIT_SHOW_DAMAGE=1 \
Tools/Scripts/run-minibrowser \
--wpe --release --use-wpe-platform-api --features=+PropagateDamagingInformation \
'https://webkit.org/blog-files/3d-transforms/poster-circle.html'

# running without visualizer
Tools/Scripts/run-minibrowser \
--wpe --release --use-wpe-platform-api --features=+PropagateDamagingInformation \
'https://webkit.org/blog-files/3d-transforms/poster-circle.html'

Trying various URLs #

While any URL can be used to test the feature, pages with plenty of animation or localized updates, such as the poster circle demo used in the commands above, make particularly good test cases.

It’s also worth mentioning that the WEBKIT_SHOW_DAMAGE=1 environment variable disables the damage-driven GTK/WPE WebKit compositor optimizations, and therefore some glitches that are seen without the envvar may not be seen when it is set. The URL to this presentation is a great example to explore various glitches that are yet to be fixed. To trigger them, it’s enough to navigate around the presentation using the up/right/down/left arrows.

Coming up next #

This article was meant to scratch the surface of the broad, damage propagation topic. While it focused mostly on introducing basic terminology and describing the damage propagation pipeline in more detail, it briefly mentioned or skipped completely the following aspects of the feature:

  • the problem of storing the damage information efficiently,
  • the damage-driven optimizations of the GTK/WPE WebKit’s compositor,
  • the most common use cases for the feature,
  • the benchmark results on desktop-class and embedded devices.

Therefore, in the next articles, the above topics will be examined to a larger extent.


References #

  1. The new WPE platform API is still not released and thus it’s not yet officially announced. Some information on it, however, is provided by this presentation prepared for a WebKit contributors meeting.
  2. The platform that the WebKit renders to depends on the WebKit port:
    • in case of GTK port, the platform is GTK so the rendering is done to GtkWidget,
    • in case of WPE port with new WPE platform API, the platform is one of the following:
      • wayland — in that case rendering is done to the system’s compositor,
      • DRM — in that case rendering is done directly to the screen,
      • headless — in that case rendering is usually done into memory buffer.

April 16, 2025 12:00 AM

April 14, 2025

Igalia WebKit Team

WebKit Igalia Periodical #21

Update on what happened in WebKit in the week from April 7 to April 14.

Cross-Port 🐱

It is now possible to customize the limit of the content resources kept for the network page of WebInspector.

Reduced the amount of data kept for the network page of WebInspector after eviction in case the memory limit is exceeded.

Multimedia 🎥

GStreamer-based multimedia support for WebKit, including (but not limited to) playback, capture, WebAudio, WebCodecs, and WebRTC.

On the WebRTC front, basic support for Rapid Synchronization was added, along with a couple of spec coverage improvements (https://commits.webkit.org/293567@main, https://commits.webkit.org/293569@main).

Fixed some upstream regressions in OMG.

Dispatch a "canceled" error event for all queued utterances in case of SpeechSynthesis.

Support for the Camera desktop portal was added recently; it will mostly benefit Flatpak apps using WebKitGTK, such as GNOME Web, for access to capture devices, which is a requirement for WebRTC support.

JavaScriptCore 🐟

The built-in JavaScript/ECMAScript engine for WebKit, also known as JSC or SquirrelFish.

Fixed a corner case in BBQJIT.

Work continued on porting the in-place wasm interpreter (IPInt) to 32-bits.

We have been working on bringing the Temporal implementation in JSC up to the current spec, and a step towards that goal was implemented in WebKit PR #43849. This PR changes how calendar annotation key parsing works; it doesn't change anything observable, but sets the groundwork for parsing calendar critical flags and unknown annotations.

Releases 📦️

The recent releases of WebKitGTK and WPE WebKit 2.48 introduced a number of improvements to performance, reduced resource usage, better support for web platform features and standards, multimedia, and more!

Read more about these updates in the freshly published articles for WebKitGTK, and WPE WebKit.

Community & Events 🤝

Support for Display device permission requests was recently added in GNOME Web; this is a requirement for Screen Capture handling on the User-Agent side.

Pawel Lampe published a blog post on the damage propagation feature. This feature reduces browser's GPU utilization at the expense of increased CPU and memory utilization in the WPE and GTK WebKit ports.

Our efforts to bring GstWebRTC support to WebKitGTK and WPEWebKit also include direct contributions to GStreamer. We recently improved WebRTC spec compliance in webrtcbin, by making the SDP mid attribute optional in offers and answers.

That’s all for this week!

by Igalia WebKit Team at April 14, 2025 07:41 PM

April 11, 2025

Delan Azabani

Generative AI in Servo

Servo has shown that we can build a browser with a modern, parallel layout engine in a fraction of the cost of the big incumbents, thanks to our powerful tooling, our strong community, and our thorough documentation. But we can, and should, build Servo without generative AI tools like GitHub Copilot.

This post is my personal opinion, not necessarily representative of Servo or my colleagues at Igalia. I hope it makes a difference.

I’m the lead author of our monthly updates and the Servo book, a member of the Technical Steering Committee, and a coauthor of our current AI policy (permalink). That policy was inspired by Gentoo’s AI policy, and has in turn inspired the AI policies of Loupe and Amaranth.

Recently the TSC voted in favour of two proposals that relax our ban on AI contributions. This was a mistake, and it was also a mistake to wait until after we had made our decision to seek community feedback (see § On governance). § Your feedback made it clear that those proposals are the wrong way forward for Servo.

Correction (2025-04-12)

A previous version of this post highlighted a logic error in the AI-assisted patch we used as a basis for those two proposals. This error was made in a non-AI-assisted part of the patch.

I call on the TSC to explicitly reaffirm that generative AI tools like Copilot are not welcome in Servo, and make it clear that we intend to keep it that way indefinitely, in both our policy and the community, so we can start rebuilding trust. It’s not enough to say oops, sorry, we will not be moving forward with these proposals.

Like any logic written by humans, this policy does have some unintended consequences. Our intent was to ban AI tools that generate bullshit [a] in inscrutable ways, including GitHub Copilot and ChatGPT. But there are other tools that use similar underlying technology in more useful and less problematic ways (see § Potential exceptions). Reviewing these tools for use in Servo should be a community-driven process.

We should not punish contributors for honest mistakes, but we should make our policy easier to follow. Some ways to do this include documenting the tools that are known to be allowed and not allowed, documenting how to turn off features that are not allowed, and giving contributors a way to declare that they’ve read and followed the policy.

The declaration would be a good place to provide a dated link to the policy, giving contributors the best chance to understand the policy and knowingly follow it (or violate it). This is not perfect, and it won’t always be easy to enforce, but it should give contributors and maintainers a foundation of trust.


Potential exceptions

Proposals for exceptions should start in the community, and should focus on a specific tool used for a specific purpose. If the proposal is for a specific kind of tool, it must come with concrete examples of which tools are to be allowed. Much of the harm being caused by generative AI in the world around us comes from people using open-ended tools that are not fit for any purpose, or even treating them like they are AGI.

The goal of these discussions would be to understand:

  • the underlying challenges faced by contributors
  • how effective the tool is for the purpose
  • how well the tool and purpose mitigate the issues in the policy
  • whether there are any existing or alternative solutions
  • whether those solutions have problems that need to be addressed

Sometimes the purpose may need to be constrained to mitigate the issues in the policy. Let’s look at a couple of examples.

For some tasks like speech recognition [b] and machine translation [c] [d], tools with large language models and transformers are the state of the art (other than humans). This means those tools may be probabilistic tools, and strictly speaking, they may be generative AI tools, because the models they use are generative models. Generative AI does not necessarily mean “AI that generates bullshit in inscrutable ways”.

Speech recognition can be used in a variety of ways. If plumbed into ChatGPT, it will have all of the same problems as ChatGPT. If used for automatic captions, it can make videos and calls accessible to people that can’t hear well (myself included), but it can also infantilise us by censoring profanities and make serious errors that cause real harm. If deployed for that purpose by an online video platform, it can undermine the labour of human transcribers and lower the overall quality of captions.

If used as an input method, it would be a clear win for accessibility. My understanding of speech input tools is that they have a clear (if configurable) mapping from the things you say to the text they generate or the edits they make, so they may be a good fit.

In that case, maintainer burden and correctness and security would not be an issue, because the author is in complete control of what they write. Copyright issues seem less of a concern to me, since these tools operate on such a small scale (words and symbols) that they are unlikely to reproduce a copyrightable amount of text verbatim, but I am not a lawyer. As for ethical issues, these tools are generally trained once then run on the author’s device. When used as an input method, they are not being used to undermine labour or justify layoffs. I’m not sure about the process of training their models.

Machine translation can also be used in a variety of ways. If deployed by a language learning app, it can ruin the quality of your core product, but hey, then you can lay off those pesky human translators. If used to localise your product, your users will finally be able to compress to postcode file. If used to localise your docs, it can make your docs worse than useless unless you take very careful precautions. What if we allowed contributors to use machine translation to communicate with each other, but not in code commits, documentation, or any other work products?

Deployed carelessly, they will waste the reader’s time, and undermine the labour of actual human translators who would otherwise be happy to contribute to Servo. If constrained to collaboration, it would still be far from perfect, but it may be worthwhile.

Maintainer burden should be mitigated, because this won’t change the amount or kind of text that needs to be reviewed. Correctness and security too, because this won’t change the text that can be committed to Servo. I can’t comment on the copyright issues, because I am not a lawyer. The ethical issues may be significantly reduced, because this use case wasn’t a market for human translators in the first place.


Your feedback

I appreciate the feedback you gave on the Fediverse, on Bluesky, and on Reddit. I also appreciate the comments on GitHub from several people who were more on the favouring side of the proposal, even though we reached different conclusions in most cases. One comment argued that it’s possible to use AI autocomplete safely by accepting the completions one word at a time.

That said, the overall consensus in our community was overwhelmingly clear, including among many of those who were in favour of the proposals. None of the benefits of generative AI tools are worth the cost in community goodwill [e].

Much of the dissent on GitHub was already covered by our existing policy, but there were quite a few arguments worth highlighting.

Speech-to-text input is ok [f] [g].

Machine translation is generally not useful or effective for technical writing [h] [i] [j]. It can be, if some precautions are taken [k]. It may be less ethically encumbered than generative AI tools [l]. Client-side machine translation is ok [m]. Machine translation for collaboration is ok [n] [o].

The proposals. Proposal 1 is ill-defined [p]. Proposal 2 has an ill-defined distinction between autocompletes and “full” code generation [q] [r] [s].

Documentation is just as technical as code [u]. Wrong documentation is worse than no documentation [v] [w] [x]. Good documentation requires human context [y] [z].

GitHub Copilot is not a good tool for answering questions [ab]. It isn’t even that good of a programming tool [ac]. Using it may be incompatible with the DCO [ad]. Using it could make us depend on Microsoft to protect us against legal liability [ae].

Correctness. Generative AI code is wrong at an alarming rate [af]. Generative AI tools will lie to us with complete confidence [ag]. Generative AI tools (and users of those tools) cannot explain their reasoning [ah] [ai]. Humans as supervisors are ill-equipped to deal with the subtle errors that generative AI tools make [aj] [ak] [al] [am]. Even experts can easily be misled by these tools [an]. Typing is not the hard part of programming [ao], as even some of those in favour have said:

If I could offload that part of the work to copilot, I would be left with more energy for the challenging part.

Project health. Partially lifting the ban will create uncertainty that increases maintainer burden for all contributions [ap] [aq]. Becoming dependent on tools with non-free models is risky [ar]. Generative AI tools may not be fair use [as][at]. Outside of Servo, people have spent so much time cleaning up after LLM-generated mess [au].

Material. Servo contributor refuses to spend time cleaning up after LLM-generated mess [av]. Others will stop donating [aw] [ax] [ay] [az] [ba] [bb] [bc] [bd] [be] [bf] [bg], will stop contributing [bh], will not start donating [bi], will not start contributing [bj] [bk], or will not start promoting [bl] the project.

Broader context. Allowing AI contributions is a bad signal for the project’s relationship with the broader AI movement [bm] [bn] [bo]. The modern AI movement is backed by overwhelming capital interests, and must be opposed equally strongly [bp]. People often “need” GitHub or Firefox, but no one “needs” Servo, so we can and should be held to a higher standard [bq]. Rejection of AI is only credible if the project rejects AI contributions [br]. We can attract funding from AI-adjacent parties without getting into AI ourselves [bs], though that may be easier said than done [bt].


On governance

Several people have raised concerns about how Servo’s governance could have led to this decision, and some have even suspected foul play. But like most discussions in the TSC, most of the discussion around AI contributions happened async on Zulip, and we didn’t save anything special for the synchronous monthly public calls. As a result, whenever the discussion overflowed the sync meeting, we just continued it internally, so the public minutes were missing the vast majority of the discussion (and the decisions). These decisions should probably have happened in public.

Our decisions followed the TSC’s usual process, with a strong preference for resolving disagreements by consensus rather than by voting, but we didn’t have any consistent structure for moving from one to the other. This may have made the decision process prone to being blocked and dominated by the most persistent participants.

Contrast this with decision making within Igalia, where we also prefer consensus before voting, but the consensus process is always used to inform proposals that are drafted by more than one person and then always voted on. Most polls are “yes” or “no” by majority, and only a few polls for the most critical matters allow vetoing. This ensures that proposals have meaningful support before being considered, and if only one person is strongly against something, they are heard but they generally can’t single-handedly block the decision with debate.

The rules are actually more complex than just by majority. There’s clear advice on what “yes”, “no”, and “abstain” actually mean, they take into account abstaining and undecided voters, there are set time limits and times to contact undecided voters, and they provide for a way to abort a poll if the wording of the proposal is ill-formed.

We had twenty years to figure out all those details, and one of the improvements above only landed a couple of months ago.

We also didn’t have any consistent structure for community consultation, so it wasn’t clear how or when we should seek feedback. A public RFC process may have helped with this, and would also help us collaborate on and document other decisions.

More personally, I did not participate until fairly late in the extensive discussion in January and February that helped move consensus in the TSC towards allowing the non-code and Copilot exceptions. Some of that was because I was on leave, including for the vote on the initial Copilot “experiments”, but most of it was that I didn’t have the bandwidth. Doing politics is hard, exhausting work, and there’s only so much of it you can do, even when you’re not wearing three other hats.


a white and grey cat named Diffie, poking her head out through a sliding door

April 11, 2025 12:00 PM

April 09, 2025

Eric Meyer

CSS Naked Day 2025

I’m a little (okay, a lot) late to it, but meyerweb is now participating in CSS Naked Day  —  I’ve removed the site’s styles, except in cases where pages have embedded CSS, which I’m not going to do a find-and-replace to try to suppress.  So if I embedded a one-off CSS Grid layout, like on the Toolbox page, that will still be in force.  Also, cached files with CSS links could take a little time to clear out.  Otherwise, you should get 1990-style HTML.  Enjoy!

(The site’s design will return tomorrow, or whenever I remember [or am prodded] to restore it.)


Have something to say to all that? You can add a comment to the post, or email Eric directly.

by Eric Meyer at April 09, 2025 05:22 PM

Tiago Vignatti

Organizações Sustentáveis: Lições da Igalia

This is a Portuguese-translated version of the original text published in English.

At Igalia, we work on the development of Internet browsers, like Chrome and Safari. In fact, we work with the technologies behind these browsers that allow websites to look good and work correctly, such as HTML, CSS, and JavaScript: the building blocks of all Internet applications.

In everything we do, we try to place a strong emphasis on social responsibility. This means that our focus goes beyond profit, prioritizing actions that generate a positive impact on society. In addition, Igalia is built on values of equality and transparency, deeply rooted in our organizational structure. These commitments to values and social responsibility shape the fundamental principles that guide our work.

by Author at April 09, 2025 02:05 PM

April 07, 2025

Igalia WebKit Team

WebKit Igalia Periodical #20

Update on what happened in WebKit in the week from March 31 to April 7.

Cross-Port 🐱

Graphics 🖼️

By default we divide layers that need to be painted into 512x512 tiles, and only paint the tiles that have changed. We record each layer/tile combination into an SkPicture and replay the painting commands in worker threads, either on the CPU or the GPU. A change landed to improve the algorithm: the changed area of each layer is now recorded into a single SkPicture, and for each tile the same picture is replayed, clipped to the tile's dimensions and position.

WPE WebKit 📟

WPE Platform API 🧩

New, modern platform API that supersedes usage of libwpe and WPE backends.

A WPE Platform-based implementation of Media Queries' Interaction Media Features, supporting pointer and hover-related queries, has landed in WPE WebKit.

When using the Wayland backend, this change exposes the current state of pointing devices (mouse and touchscreen), dynamically reacting to changes such as plugging or unplugging. When the new WPEPlatform API is not used, the previous behaviour, defined at build time, is still used.

WPE Android 🤖

Adaptation of WPE WebKit targeting the Android operating system.

A number of fixes have been merged to fix and improve building WPE WebKit for Android. This is part of an ongoing effort to make it possible to build WPE-Android using upstream WebKit without needing additional patches.

The example MiniBrowser included with WPE-Android has been fixed to handle edge-to-edge layouts on Android 15.

That’s all for this week!

by Igalia WebKit Team at April 07, 2025 09:49 PM

April 02, 2025

Tiago Vignatti

Sustainable Organizations: Lessons from Igalia

At Igalia, we work on developing Internet browsers such as Chrome and Safari. In fact, we work with the technologies behind these browsers that allow websites to look good and function correctly, such as HTML, CSS, and JavaScript—the building blocks of all Internet applications.

In everything we do, we try to place a strong emphasis on social responsibility. This means that our focus goes beyond profit, prioritizing actions that generate a positive impact on society. Igalia is built on values of equality and transparency, which are deeply embedded in our organizational structure. These commitments to values and social responsibility shape the fundamental principles that guide our work.

by Author at April 02, 2025 05:52 PM

March 31, 2025

Igalia WebKit Team

WebKit Igalia Periodical #19

Update on what happened in WebKit in the week from March 24 to March 31.

Cross-Port 🐱

Multimedia 🎥

GStreamer-based multimedia support for WebKit, including (but not limited to) playback, capture, WebAudio, WebCodecs, and WebRTC.

ManagedMediaSource was enabled in WPE WebKit and WebKitGTK.

JavaScriptCore 🐟

The built-in JavaScript/ECMAScript engine for WebKit, also known as JSC or SquirrelFish.

The JSC jitless Wasm tests now work again on ARMv7.

Doing a 32-bit build on ARM64 hardware now works with GCC 11.x as well.

Graphics 🖼️

Landed a change that improves the painting of tile fragments in the compositor when damage propagation is enabled and the tile sizes are bigger than 256x256. In those cases, less GPU work is done when the damage allows it.

The GTK and WPE ports no longer use DisplayList to serialize the painting commands and replay them in worker threads, but SkPictureRecorder/SkPicture from Skia. Some parts of the WebCore painting system, especially font rendering, are not thread-safe yet, and our current cross-thread use of DisplayList makes it harder to improve the current architecture without breaking GTK and WPE ports. This motivated the search for an alternative implementation.

Community & Events 🤝

Sysprof is now able to filter samples by marks. This allows for statistically relevant data on what's running when a specific mark is ongoing, and as a consequence, allows for better data analysis. You can read more here.

That’s all for this week!

by Igalia WebKit Team at March 31, 2025 07:58 PM

Brian Kardell

Breakouts 2025: Collective

Breakouts 2025: Collective

The other day was W3C Breakouts Day: Effectively, an opportunity for people to create sessions to talk about or work on anything related to the Web. We do this twice a year. Once entirely virtually (last week) and then again during hybrid meetings in association with W3C's biggest event of the year, called "TPAC".

This year there were 4 blocks of sessions with 4-5 sessions competing concurrently for the same time slot. There were 2 in the morning that I'd chosen, but then realized that I had calendar conflicts with other meetings and wound up missing them.

In the afternoon there were 3 sessions proposed by Igalians - unfortunately, two of them ran simultaneously.

There were actually several in that timeslot that I wanted to attend but the choice was pretty straightforward: I'd attend mine and Eric's. Unfortunately I did a bad job chairing and forgot to assign a scribe or record a presentation.

Eric's session told the story of SVG and how, despite it getting a lot of love from developers and designers and other working group participants, it just didn't get a lot of love or investment from the browser vendors themselves. Amelia Bellamy-Royds (a former editor/chair) told us how things ultimately stalled out around 2020 with so much left undone, burnout, disappointments, and a W3C working group that was almost closed up with a note last year before being sort of "rescued" at the last minute with a very targeted rechartering. However, that rechartering still hasn't resulted in much - no new meetings, etc. We talked about how Igalia has been investing, along with companies like Wix, in trying to help move things forward. Then the question was: How can we move it forward, together? You can read the minutes for more, but one idea was a collective around SVG.

I mention this because, while they might seem at first like pretty different topics, Eric's session and my own are really just two things raising the same overarching question: why have we built the infrastructure of the whole web ecosystem to be fundamentally dependent on the funding and prioritization of a few private organizations? And how can we fix that?

My session presented some things we've done, or tried so far.

Things we're trying...

First, just moving from proprietary projects to open source has helped a bit. Instead of one organization being 100% responsible, the stewards themselves today contribute "only" ~80 to 95% of the commits.

My company, Igalia, has been among the top few committers to all of the web engine projects every year for at least half a decade, and we have many ways that we try to diversify work upstream. What's interesting about attempts to diversify funding is that there is a kind of a spectrum here relating to both the scope of what we're tackling and how well understood the approach is...

one sponsor / single issue  ←→  collective / whole project

On the left is a single sponsor with very specific/measurable tasks. This could be a list of bugs, or implementation of a pretty stable feature in a single engine. You can contract Igalia for stuff like this - The :has() implementation and proof-of-concept that moved the needle in Chromium, for example, was sponsored by EyeO. CSS Grid in Blink and WebKit was sponsored by Bloomberg Tech. We know how to do this. We've got lots of practice.

Moving slightly more 'right', we have things like our Open Prioritization experiment, which tried to figure out how to prioritize and share a funding burden. We know less about how to do this; we learned things from the experiment, but we've not repeated it because it involved a lot of "extra" work.

Then there's our efforts with MathML, funded by grants, then partnerships, then a collective aimed at generally advancing and maintaining MathML support in browsers. That's now involving more decision making - there's not a tight list of tasks or even a specific browser. You need to take in money to find out how much budget you have, and then try to figure out how to use it. In this case, there's a steering committee made up of the biggest funders. Every 6 months or so there is a report of the work that was done submitted for payment. Is that a great model?

We also have Wolvic, which attempted to have partnerships and a collective, with a model where if you put in a kind of minimum amount every 6 months then you would get invited to a town hall to discuss how we'd prioritize the spending of available funds. Is that a great model? The thing I like about it is that it binds things to some kind of reality. It's great if someone gave $500, for example, but if 10 other orgs also did so and they all want different things, none of which we can do with the available funds... At least they can all help decide what to do. In that model, the input is still only informative - it's ultimately Igalia who decides.

Or, you might have seen a new thing hosted at the Linux Foundation for funding work in Chromium. That's more formal, but kind of similar in challenges. Unfortunately there was no one attending our sessions who could talk about it - I know a bit more, but I'm not going to break news on my blog :). We'll have a podcast about it when we can.

Or, a little further right there's Open Web Docs (OWD), which I'm pleased does seem to be succeeding at sponsoring some real work and doing real prioritization. Florian from OWD was there and was able to give us some real thoughts. They too are now drawing from grants like the Sovereign Tech Fund. They produce as part of this an Impact Report.

Or, maybe all the way to the right you have a general browser fund like Servo. There it is the Technical Steering Committee that decides what to do with the funds.

A lot of this thinking is currently still pretty esoteric because, for example, through the collective, Servo isn't taking in enough money for even a single full-time employee. I also mentioned our work with Interop, and the fact that even with that level of effort to prioritize, it is very difficult to not wind up taking resources away from actual work in order to figure out which work to potentially do! It's hard to not add admin costs that eat away at the overall progress. But that's exactly why we should be talking about these things now. How can we make this work?

The Actual Discussion

As I said, we didn't have a scribe, but there was some discussion I can attempt to summarize from the queue I can still see in IRC.

Q. Do you think the W3C has a role to play?

That's why I brought it here. My own desire would be yes. There are some venues where working groups and interest groups are encouraged to pool funds and so on - but the W3C isn't currently one of them and I'm unsure how others feel. It's a thing I have been trying to pitch for a few years now that we could figure out.

The Wolvic model that I described above was born from thoughts around this. What if W3C groups had the ability to put money into a common fund and then, at the end of the year, decide how to deploy it? Maybe sometimes that would be for tools. Maybe sometimes that would be for fixing bugs. Maybe sometimes that would be for new implementation, or last implementation, or supporting spec authorship. But it would always be grounded on something real about collaboration that puts some degree of power beyond the implementers. Maybe that would mean that print CSS would inch along at a snail's pace - but at least it's better than not at all, and it shows a way. At least it doesn't seem like the group isn't even picking up the phone.

Q. Are engines like Servo and Ladybird filing implementation reports? WPT?

Basically... Not exactly. The main thrust is that both engines do make themselves available on Web Platform Tests and closely track their scores. That said, the models differ quite a bit in their approach and Ladybird, for example, generally is more focused from the outside in. It's possible they only start with 25% of the standard required to load a certain page. So, at what point would they submit an implementation report? Both projects do currently open issues when they find implementation-related questions or problems, and both have the ability to expand the test suites in response to those issues. So, effectively: nobody has time for that level of formality, but they're following the spirit of it.

Q. Doesn't it help to have a thing you can measure if you approach someone to ask for money?

Yes, that is the whole point of my spectrum above. The further right you move, the more of a question this is. On our podcast, for example, we've asked people "Why don't you donate to Mozilla?" and the main answer given is always "I don't know what they'll spend it on, but probably not things I would agree with". The more people putting in, the more there is to balance or require trust. Currently a lot of these work for grants or more specific/finite problems which do reports of success - like the one I provided above for MathML or the implementation report in OWD.

But does it mean there shouldn't be more general funds? Currently the general funds come from default search and wind up with a single org to decide - is it what we want?

So, that's all that was discussed. All in all, it was pretty good - but I'm slightly disappointed there wasn't a little more discussion of how we could collectively govern or prioritize, or even if we all agreed that that's something that we want. I'd love to hear any of your thoughts!

March 31, 2025 04:00 AM

March 28, 2025

André Almeida

Linux 6.14, an almost forgotten release

Linux 6.14 is the second release of 2025, and as usual Igalia took part in it. It’s a very normal release, except that it was released on a Monday, instead of the usual Sunday release that has been the norm for years now. The reason behind this? Well, quoting Linus himself:

I’d like to say that some important last-minute thing came up and delayed things.

But no. It’s just pure incompetence.

But we did not forget about it, so here’s our Linux 6.14 blog post!


A part of the development cycle for this release happened during late December, when a lot of maintainers and developers were taking their deserved breaks. As a result, this release contains fewer changes than usual, described by LWN as the “lowest level of merge-window activity seen in years”. Nevertheless, some cool features made it into this release:

  • NT synchronization primitives: Elizabeth Figura, from CodeWeavers, is known for her work on improving Wine sync functions, like mutexes and semaphores. She was one of the main collaborators behind the futex_waitv() work and has now developed a virtual driver that is more compliant with the precise semantics that the NT kernel exposes. This allows Wine to behave closer to Windows without the need to create new syscalls, since this driver uses ioctl() as the front-end uAPI.
  • RWF_UNCACHED: Linux has two ways of dealing with storage I/O: buffered I/O (usually the preferred one), which stores data in a temporary buffer and regularly syncs the cached data with the device; and direct I/O, which doesn’t use a cache and always writes/reads synchronously with the storage device. Now a new mixed approach is available: uncached buffered I/O. This method aims to provide a fast way to write or read data that will not be needed again in the short term. For reading, the device writes data into the buffer and, as soon as the user has finished reading it, the buffer is cleared from the cache. For writing, as soon as userspace fills the cache, the device reads it and removes it from the cache. In this way we still have the advantage of using a fast cache while reducing cache pressure.
  • amdgpu panic support: AMD developers added kernel panic support for amdgpu driver, “which displays a pretty user friendly message on the screen when a Linux kernel panic occurs” instead of just a black screen or a partial dmesg log.

As usual Kernel Newbies provides a very good summary, you should check it for more details: Linux 6.14 changelog. Now let’s jump to see what were the merged contributions by Igalia for this release!

DRM

For the DRM common infrastructure, we helped to land a standardization for DRM client memory usage reporting. Additionally, we contributed to improve and fix bugs found in drivers of AMD, Intel, Broadcom, and Vivante.

AMDGPU

For the AMD driver, we fixed bugs experienced by users of the Cosmic Desktop Environment on several AMD hardware versions. One was uncovered with the introduction of overlay cursor mode, where a definition mismatch across the display driver caused a page fault when using multiple overlay planes. Another bug was related to division by zero on plane scaling. Also, we fixed regressions on VRR and MST caused by the series of changes that migrate the AMD display driver from open-coded EDID handling to the drm_edid struct.

Intel

For the Intel drivers, we fixed a bug in the xe GPU driver which prevented certain type of workarounds from being applied, helped with the maintainership of the i915 driver, handled external code contributions, maintained the development branch and sent several pull requests.

Raspberry Pi (V3D)

We fixed GPU resets on the Raspberry Pi 4, which we found to be broken thanks to a user bug report.

Also in the V3D driver, the active performance monitor is now properly stopped before being destroyed, addressing a potential use-after-free issue. Additionally, support for a global performance monitor has been added via a new DRM_IOCTL_V3D_PERFMON_SET_GLOBAL ioctl. This allows all jobs to share a single, globally configured perfmon, enabling more consistent performance tracking and paving the way for integration with user-space tools such as perfetto.

A small video demo of perfetto integration with V3D

etnaviv

On the etnaviv side, fdinfo support has been implemented to expose memory usage statistics per file descriptor, enhancing observability and debugging capabilities for memory-related behavior.

sched_ext

Many BPF schedulers (e.g., scx_lavd) frequently call bpf_ktime_get_ns() for tracking tasks’ runtime properties. bpf_ktime_get_ns() eventually reads a hardware timestamp counter (TSC). However, reading a hardware TSC is not performant on some hardware platforms, degrading instructions per cycle (IPC).

We addressed the performance problem of reading the hardware TSC by leveraging the rq clock in the scheduler core, introducing a scx_bpf_now() function for BPF schedulers. Whenever the rq clock is fresh and valid, scx_bpf_now() provides the rq clock, which is already updated by the scheduler core, so it avoids reading the hardware TSC. Using scx_bpf_now() reduces the number of hardware TSC reads by 50-80% (e.g., 76% for scx_lavd).

Assorted kernel fixes

Continuing our efforts on cleaning up kernel bugs, we provided a few fixes that address issues reported by syzbot with the goal of increasing stability and security, leveraging the fuzzing capabilities of syzkaller to bring to the surface certain bugs that are hard to notice otherwise. We’re addressing bug reports from different kernel areas, including drivers and core subsystems such as the memory manager. As part of this effort, several fixes were done for the probe path of the rtlwifi driver.

Check the complete list of Igalia’s contributions for the 6.14 release

Authored (38)

Changwoo Min

Christian Gmeiner

Luis Henriques

Maíra Canal

Melissa Wen

Ricardo Cañuelo Navarro

Rodrigo Siqueira

Thadeu Lima de Souza Cascardo

Tvrtko Ursulin

Reviewed (36)

André Almeida

Christian Gmeiner

Iago Toral Quiroga

Jose Maria Casanova Crespo

Luis Henriques

Maíra Canal

Tvrtko Ursulin

Tested (2)

André Almeida

Christian Gmeiner

Acked (1)

Iago Toral Quiroga

Maintainer SoB (6)

Maíra Canal

Tvrtko Ursulin

March 28, 2025 12:00 AM

March 27, 2025

Igalia Compilers Team

Summary of the February 2025 TC39 plenary


The February TC39 meeting in Seattle wrapped up with significant updates and advancements in ECMAScript, setting an exciting trajectory for the language's evolution. Here are the key highlights, proposal advancements, and lively discussions from the latest plenary.

Proposals advancing to Stage 4 🎉 #

The February TC39 meeting in Seattle wrapped up with significant updates and advancements in ECMAScript, setting an exciting trajectory for the language's evolution. Here are the key highlights, proposal advancements, and lively discussions from the latest plenary.

  • Float16Array: a typed array that uses 16-bit floating point values, mostly for interfacing with other systems that need 16-bit float arrays.

    • Champions: Leo Balter, Kevin Gibbons
  • RegExp.escape(): Sanitizes a string so that it can be used as a string literal pattern for the RegExp constructor.

    • Champions: Kevin Gibbons, Jordan Harband
  • Redeclarable global eval vars simplifies the mental model of global properties. It's no longer an error to redeclare a var or function global property with a let or const of the same name.

    • Champion: Shu-yu Guo
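As a quick, hedged illustration of the first two of these (the float16 value shown is the nearest representable one; exact console output may vary slightly by engine):

// Float16Array: 16-bit floating point values, handy for data that will be
// handed to GPUs or other systems expecting half-precision floats.
const halfs = new Float16Array([0.1, 1.5, 2.5]);
console.log(halfs[0]); // 0.0999755859375, the nearest float16 to 0.1

// RegExp.escape(): make an arbitrary string safe to embed in a pattern.
const userInput = "price (USD)";
const re = new RegExp(RegExp.escape(userInput));
console.log(re.test("total price (USD): 10")); // true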

Stage changes #

import defer reaches Stage 3 igalia logo #

Now with full test262 coverage, the import defer proposal advanced to stage 3, without changes since its previous presentation. This is the signal for implementors to go ahead and implement it. This means that the proposal is likely to appear soon in browsers!
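To give a feel for the syntax, here is a small hedged sketch; the module path and doWork() are made-up names for the example:

// The module graph for ./heavy-module.js is fetched and linked up front,
// but its evaluation is deferred until one of its exports is accessed.
import defer * as heavy from "./heavy-module.js";

export function rarelyCalledPath() {
  // Evaluation of the deferred module happens here, on first access.
  return heavy.doWork();
}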

  • Champion: Nicolò Ribaudo igalia logo

Math.clamp reaches Stage 1 #

To clamp a number x to an interval [a, b] means to produce a value no smaller than a and no greater than b (returning x if x is in the interval). Oliver Medhurst presented a neat little proposal to add this feature to JS's Math standard library object. And Oliver was able to convince the committee to advance the discussion to stage 1.
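Since the proposal is only at stage 1, the exact API may still change, but the behavior under discussion is roughly this userland one-liner:

// Rough userland equivalent of the proposed Math.clamp(x, a, b); argument
// order and edge cases (NaN, reversed bounds) are still being discussed.
const clamp = (x, lo, hi) => Math.min(Math.max(x, lo), hi);

console.log(clamp(5, 0, 10));  // 5  (already inside [0, 10])
console.log(clamp(-3, 0, 10)); // 0
console.log(clamp(42, 0, 10)); // 10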

  • Champion: Oliver Medhurst

Error stack accessor reaches stage 2 #

Instances of Error and its subclasses have a stack property that returns a string representing the stack trace. However, this property is not specified, and previous attempts to define it in the spec did not get far because different JS engines have different string representations for the stack trace, and implementations can't converge on one behavior because there's code in the wild that does browser detection to know how to parse the format.

In December it was decided that specifying the presence of a stack property should be split off of the error stack proposal. This new error stack accessor proposal was first presented in this plenary, where it reached stage 2. The proposal achieves some amount of browser alignment on some details (e.g. is stack an own property? is it a getter/setter pair?), while also providing a specified base on which other proposals and web specs can build, but it leaves the stack trace string implementation-defined.
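The status quo being standardized is easy to see with a snippet like this; the exact trace format is the part that stays implementation-defined:

try {
  JSON.parse("{"); // force a SyntaxError
} catch (err) {
  // All major engines expose a string here today, but its format (frame
  // layout, whether the message is included) differs between them.
  console.log(typeof err.stack); // "string"
}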

  • Champions: Jordan Harband, Mark S. Miller

Proposal Number.isSafeNumeric #

New TC39 contributor ZiJian Liu offered a suggestion for tackling a problem routinely faced by JS programmers who work closely with numbers: "Am I sure that this numeric string S, if interpreted as a JS number, is going to be exactly preserved?"

The proposal is a new method on Numbers, isSafeNumeric, that would allow us to check this in advance. Essentially, ZiJian is trying to delimit a safe space for JS numbers. The discussion was heated, with many in the committee not sure what it actually means for a numeric string to be "preserved", and whether it can even be solved at all. Others thought that, although there may be no solution, it's worth advancing the proposal to stage 1 to begin to explore the space. Ultimately, the proposal did not advance to stage 1, but that doesn't mean it's the end—this topic may well come back in a sharper, more clearly defined form later on.

Temporal igalia logo #

Temporal, the upcoming proposal for better date and time support in JS, has been seeing a surge of interest in the last few weeks because of a complete implementation being available in Firefox Nightly. Folks seem to be looking forward to using it in their codebases!

Our colleague Philip Chimento presented a status update. Firefox is at ~100% conformance with just a handful of open questions, and the Ladybird browser is the next closest to shipping a full implementation, at 97% conformance with the test suite.

The committee also reached consensus on making a minor change to the proposal which relaxed the requirements on JS engines when calculating lunar years far in the future or past.
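For anyone who hasn't tried it yet, here is a small taste of the API, runnable today in Firefox Nightly or with a Temporal polyfill (the dates and time zone are arbitrary examples):

// Plain (time zone-less) dates with explicit, immutable arithmetic.
const release = Temporal.PlainDate.from("2025-03-27");
const review = release.add({ weeks: 2 });
console.log(review.toString()); // "2025-04-10"

// An exact, time zone-aware "now".
console.log(Temporal.Now.zonedDateTimeISO("Europe/Madrid").toString());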

ShadowRealm igalia logo #

Philip also presented a status update on the ShadowRealm proposal. ShadowRealm is a mechanism that lets you execute JavaScript code synchronously, in a fresh, isolated environment. This has a bunch of useful applications such as running user-supplied plugins without letting them destabilize your app, or prevention of supply chain attacks in your dependencies.

We think we have resolved all of the open questions on the TC39 side, but what remains is to gauge the interest in implementing the web integration parts. We had a lively discussion on what kinds of use cases we'd like to see in TC39 versus what kinds of use cases the web platform world would like to see.
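A minimal sketch of the API shape from the proposal (not yet enabled by default in browsers):

// Code evaluated inside the realm has its own set of globals; only
// primitives and wrapped functions can cross the boundary.
const realm = new ShadowRealm();

const add = realm.evaluate("(a, b) => a + b");
console.log(add(2, 3)); // 5, computed inside the isolated realm

// Globals set inside the realm do not leak into the host realm.
realm.evaluate("globalThis.secret = 42;");
console.log(globalThis.secret); // undefined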

  • Champions: Dave Herman, Caridy Patiño, Mark S. Miller, Leo Balter, Rick Waldron, Chengzhong Wu

Decorators update #

Implementations of the decorators proposal are in progress: the Microsoft Edge team has an almost complete implementation on top of Chromium's V8 engine, and Firefox's implementation is in progress.

Although there are two implementations in progress, it has been stated that none of the three major browsers want to be the first one to ship among them, leaving the future of the proposal uncertain.

  • Champions: Kristen Hewell Garrett

ArrayBuffers #

There was some progress on proposals related to ArrayBuffers. One topic was about the Immutable ArrayBuffer proposal, which allows creating ArrayBuffers in JS from read-only data, and in some cases allows zero-copy optimizations. The proposal advanced to stage 2.7.

  • Champions: Mark S. Miller, Peter Hoddie, Richard Gibson, Jack Works

In light of that, the committee considered whether or not it made sense to withdraw the Limited ArrayBuffer proposal (read-only views of mutable buffers). It was not withdrawn and remains at stage 1.

  • Champion: Jack Works

Don’t call well-known Symbol methods for RegExp on primitive values #

The well-known symbols Symbol.match, Symbol.matchAll, Symbol.replace, Symbol.search and Symbol.split allow an arbitrary object to be passed as the argument to the corresponding string methods and behave like a custom RegExp. However, these methods don't check that the argument is an object, so you could make "foo bar".split(" ") have arbitrary behavior by setting String.prototype[Symbol.split].
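For example, under the current semantics this works, and it is exactly what the change removes for primitive arguments:

// Monkeypatching String.prototype makes the *separator* string " " look
// like a custom RegExp-like object to String.prototype.split.
String.prototype[Symbol.split] = function () {
  return ["hijacked"];
};

console.log("foo bar".split(" ")); // ["hijacked"] instead of ["foo", "bar"]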

This is an issue especially for Node.js and Deno, since a lot of their internal code is written in JavaScript. They use primordials to guard their internal code from userland monkeypatching, but guarding against overriding the matching behavior of strings could lead to performance issues.

The proposed solution was to have the relevant string methods only look for these well-known symbols if the argument is an object, rather than doing so for all primitives other than null or undefined. This is technically a breaking change, but it's not expected to lead to web compatibility issues in the wild because of how niche these symbols are. Given that, the committee reached consensus on making this change.

Future direction for Records and Tuples #

The Records and Tuples proposal has been stuck at stage 2 for a long time, due to significant concerns around unrealistic performance expectations. The committee again discussed the proposal, and how to rewrite it to introduce some of its capabilities to the language without falling into the same performance risks.

You can read more details on GitHub, but the summary is that Records and Tuples might become:

  • objects, rather than primitives
  • shallowly immutable, rather than enforcing deep immutability
  • using an equals() method rather than relying on === for recursive comparison
  • have special handling in Map/Set, easily allowing multi-value keys.
  • Champions: Ashley Claymore

A unified vision for measure and decimal igalia logo #

In discussions surrounding the topic of decimal, it has become increasingly clear that it overlaps with the measure proposal (possibly to be rechristened as amount -- watch this space) to such an extent that it might make sense to consider both proposals in a unified way. That may or may not mean that the proposals get literally merged together (although that could be a path forward). Ultimately, the committee wasn't in favor of merging the proposals, though there were concerns that, if they were kept separate, one proposal might advance without the other advancing. As usual with all discussions of decimal, the discussion overflowed into the next day, with Shane Carr of Google presenting his own sketch of how the unity of decimal and measure might happen.

  • Champions (Decimal): Jesse Alama igalia logo , Jirka Maršík, Andrew Paprocki
  • Champion (Measure): Ben Allen igalia logo

Error.captureStackTrace #

Since at least 2015, V8 has exposed a non-standard API called Error.captureStackTrace() to expose an Error-like stack property on any arbitrary object. It also allows passing a function or constructor as its second argument, to skip any stack frames after the last call to that function, which can be used to hide implementation details that won't be useful to the user.

Although this API was V8-internal for so long, in 2023 JSC shipped an implementation of this API, and now SpiderMonkey is working on one. And since the V8 and JSC implementations have some differences, this is now being brought up as a TC39 proposal to settle on some exact behavior, which is now stage 1.
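In engines that ship it today (V8 and JSC), it can be used roughly like this; the exact property descriptor of the resulting stack is one of the details the proposal needs to settle:

function makeValidationError(message) {
  const err = { name: "ValidationError", message };
  if (Error.captureStackTrace) {
    // Passing makeValidationError hides it (and deeper frames) from the
    // captured trace, so the trace starts at the caller.
    Error.captureStackTrace(err, makeValidationError);
  }
  return err;
}

console.log(makeValidationError("bad input").stack);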

This proposal is only about Error.captureStackTrace(), and it does not attempt to specify V8's Error.prepareStackTrace() API, which allows customizing the stack trace string representation. This API is still V8-only, and there don't seem to be any plans to implement it elsewhere.

  • Champion: Matthew Gaudet

The "fixed" and "stable" object integrity traits #

The Stabilize proposal is exploring adding a new integrity trait for objects, similar to Object.preventExtensions, Object.seal and Object.freeze.

A fixed object is an object whose properties can be safely introspected without triggering side effects, except when a getter is triggered through property access. In addition to that, it's also free from what we call the "override mistake": on a fixed object, a non-writable property does not prevent the same property from being set on objects that inherit from it.

const plainObject = Object.defineProperty({}, "x", {
  value: 1,
  writable: false,
});

const fixedObject = Object.defineProperty({}, "x", {
  value: 1,
  writable: false,
});
Object.fix(fixedObject);

const inheritPlain = { __proto__: plainObject };
inheritPlain.x = 3; // doesn't work!

const inheritFixed = { __proto__: fixedObject };
inheritFixed.x = 3; // sets `inheritFixed.x` to `3`, while leaving `fixedObject.x` as `1`

An object that is both fixed and frozen is called stable.

The proposal was originally also exploring one more integrity trait, to prevent code from defining new private fields on an existing object through the return override trick. This has been removed from this proposal, and instead we are exploring changing the behavior of Object.preventExtensions to also cover this case.

  • Champions: Mark S. Miller, Chip Morningstar, Richard Gibson, Mathieu Hofman

Curtailing the power of "thenables" #

Once upon a time, async functions did not exist, and neither did promises. For a long time, the only way to do asynchronous work in JS was with callbacks, resulting in "callback hell". Promises first appeared in userland libraries, which eventually converged into one single interoperable API shape called Promises/A+.

When JS added the Promise built-in in ES6, it followed Promises/A+, which included a way to interoperate with other promise libraries. You might know that resolving a promise p1 with a value which is a different promise p2 will not immediately fulfill p1, but it will wait until p2 is resolved and have the same fulfilled value or rejection reason as p2. For compatibility with promise libraries, this doesn't only work for built-in promises, but for any object that has a .then method (called "thenables").

In the time since Promise was added as a built-in, however, it has become clear that thenables are a problem, because it's easy for folks working on the JS engines to forget they exist, resulting in JS code execution happening in unexpected places inside the engine. In fact, even objects fully created within the engine can end up being thenables, since you can set Object.prototype.then to a function. This has led to a number of security vulnerabilities in JS engines, including one last year involving async generators that needed fixes in the JS specification.
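A tiny example of why this is so easy to trip over (do not actually do this, it pollutes Object.prototype for the whole realm):

// Every plain object is now a "thenable", so resolving a promise with one
// runs this function instead of fulfilling with the object directly.
Object.prototype.then = function (resolve) {
  console.log("unexpected user code runs here");
  resolve("hijacked");
};

Promise.resolve({ data: 1 }).then((value) => console.log(value));
// Logs "unexpected user code runs here", then "hijacked" (not { data: 1 }).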

It is not feasible to completely get rid of thenables because pre-ES6 promise libraries are still being used to some extent. However, this proposal is about looking for ways to change their behavior so that these bugs can be avoided. It just became stage 1, meaning the possible solutions are still in the process of being explored, but some proposed ideas were:

  • Make it impossible to set Object.prototype.then.
  • Ignore thenables for some internal operations.
  • Change the definition of thenable so that having a then method on Object.prototype and other fundamental built-in objects and prototypes doesn't count, while it would count for regular user-created objects.
  • During the discussion in plenary, it was mentioned that userland JS also runs into issues with thenables when the call to .then leads to reentrancy (that is, if it calls back into the code that called it). If all engine vulnerabilities caused by thenables are related to reentrancy, then both issues could be solved at once. But it does not seem like that is the case, and solving the reentrancy issue might be harder.
  • Champion: Matthew Gaudet

Stable Formatting update #

Eemeli Aro from Mozilla presented an update on stable formatting. This proposal aims to add a "stable" locale to address some of the weaknesses of our current model of localization. Most importantly, people currently need an entirely separate code path for unlocalized or machine-readable use cases, which adds complexity to the interface and distracts users from the patterns they ought to be using for their interfaces. This is relevant for use cases such as testing (especially snapshot testing).

Temporal already made some strides in this direction by keeping the API surface consistent while allowing users to specify the ISO8601 calendar or the UTC timezone instead of relying on localizable alternatives. This proposal would add a "null" locale either in the form of the literal null value in JavaScript or using the "zxx" pattern commonly used in the Internationalization world, in order to provide a stable formatting output so users could write their interface once and just use this specific locale to achieve their desired result.

In the meeting, Eemeli presented their proposal for various formats that should be a part of this stable locale and the committee expressed a preference for the "zxx" locale instead of null with some concerns regarding null being too similar to undefined.

  • Champion: Eemeli Aro

Intl Locale Info API #

The Intl Locale Info API proposal, which is very close to done, was brought back to the committee perhaps for the last time before Stage 4. The notable change was to remove minimal days from the API due to the lack of strong use cases for it. Finally, there were discussions about the final remaining open questions, especially those that would block implementations. These are planned to be fixed shortly before the proposal goes to Stage 4.

  • Champion: Frank Tang

Community event igalia logo #

On Thursday evening after the meeting adjourned, the committee members traveled 2 blocks down the road to a SeattleJS meetup kindly hosted by DocuSign at their HQ. A number of committee members gave presentations on TC39-related topics. Two of these were by our colleagues Nicolò Ribaudo, who gave an introduction to the deferred imports proposal, and Philip Chimento, who gave a tour of the Temporal API.

TG5 Workshop igalia logo #

After the plenary ended on Thursday, the discussion continued on Friday with a session of TG5, the part of TC39 dedicated to research aspects of JavaScript. Our colleague Jesse Alama presented on Formalizing JS decimal numbers with the Lean proof assistant. There were a number of other presentations: a report on user studies of MessageFormat 2.0, studies on TC39 proposals, a parser generator template literal tag generating template literal tags, and "uncanny valleys" in language design.

March 27, 2025 12:00 AM

March 26, 2025

Georges Stavracas

A Sysprof enhancement

I’ve blogged in the past about how WebKit on Linux integrates with Sysprof, and provides a number of marks on various metrics. At the time that was a pretty big leap in WebKit development since it gave us a number of new insights, and enabled various performance optimizations to land.

But over time we started to notice some limitations in Sysprof. We now have tons of data being collected (yay!) but some types of data analysis were still pretty difficult. In particular, it was hard to answer questions like “why did render times increase after 3 seconds?” or “what is the CPU doing during layout?”

In order to answer these questions, I’ve introduced a new feature in Sysprof: filtering by marks.

  • Screenshot of Sysprof's Marks view with the cursor hovering a mark and its context menu open
  • Screenshot of Sysprof's Time Profiler view showing filtered CPU samples

Hopefully people can use this new feature to provide developers with more insightful profiling data! For example if you spot a slowdown in GNOME Shell, you open Sysprof, profile your whole system, and filter by the relevant Mutter marks to demonstrate what’s happening there.

Here’s a fancier video (with music) demonstrating the new feature:

Enjoy!

by Georges Stavracas at March 26, 2025 07:46 PM

March 25, 2025

Stephen Chenney

Find-in-Page Highlight Styling

The browser find-in-page feature is intended to allow users to identify where a search term appears in the page. Browsers mark the locations of the string using highlights, typically one color for the active match and another for the other matches. Both Chrome and Firefox behave this way, and problems arise when the default browser color offers poor contrast against the page, or can easily be confused with other highlighted content.

Safari works around this problem by applying an overlay and painting search highlights on top. But what can be done in Chrome and Firefox?

The newly available ::search-text highlight pseudo-element provides styling for the find-in-page highlight using CSS. Within a ::search-text rule you can use properties for colors (text and background), text decorations such as underlines, and shadows. A companion ::search-text:current selector matches the active find-in-page result and can be styled separately.

As an example, let’s color find-in-page results green and add a red underline for the active match:

<style>
  :root::search-text {
    color: green;
  }
  :root::search-text:current {
    color: green;
    text-decoration: red solid underline;
  }
</style>
<p>Find find in this example of find-in-page result styling.</p>

In general, the find-in-page markers should be consistent across the entire page, so we recommend that you define the ::search-text properties on the root. All other elements will inherit the pseudo-element styles through the highlight inheritance mechanism.

Note that if you do not specify ::search-text:current, the active match will use the same styling as inactive matches. In practice, it is best to always provide styles for ::search-text:current when defining ::search-text, and vice versa, with sufficient difference in visual appearance to make it clear to users which is the active match.

This feature is available in Chrome 136.0.7090.0 and later, when “Experimental Web Platform features” is enabled with chrome://flags. It will likely be available to all users in Chrome 138.

Accessibility Concerns #

Modifying the appearance of user agent features, such as find-in-page highlighting, has significant potential to hurt users through poor contrast, small details, or other accessibility problems. Always maintain good contrast with all the backgrounds present on the page. Ensure that ::search-text styling is unambiguous. Include find-in-page actions in your accessibility testing.

Find-in-page highlight styling is beneficial when the native markers pose accessibility problems, such as poor contrast. It’s one of the motivations for providing this feature.

Privacy Concerns #

User find-in-page actions express personal information, so steps have been taken to ensure that sites cannot observe the presence or absence of find-in-page results. Computed style queries in JavaScript will report the presence of ::search-text styles regardless of whether or not they are currently rendered for an element.

Thanks #

The implementation of CSS Search Text Features was undertaken by Igalia S.L. funded by Bloomberg L.P.

March 25, 2025 12:00 AM

José Dapena

trace-chrome: easy remote tracing of Chromium

As part of my performance analysis work for LGE webOS, I often had to capture Chrome Traces from an embedded device. So, to make it convenient, I wrote a simple command line helper to obtain the traces remotely, named trace-chrome.

In this blog post I will explain why it is useful, and how to use it.

TL;DR #

If you want to read directly about the tool, jump to the how to use section.

Tracing Chromium remotely #

Chromium provides an infrastructure for capturing static tracing data, based on Perfetto. In this blog post I am not going through its architecture or implementation, but focus on how we instruct a trace capture to start and stop, and how to then fetch the results.

Chrome/Chromium provides user interfaces for capturing and analyzing traces locally. This can be done by opening a tab and pointing it to the chrome://tracing URL.

The tracing capture UI is quite powerful, and implemented completely with web technologies. This has a downside, though: running the capture UI introduces significant overhead on several resources (CPU, memory, GPU, …).

This overhead may be even more significant when tracing Chromium or any other Chromium based web runtime in an embedded device, where we have CPU, storage and memory constraints.

Chromium does a great job of minimizing the overhead, by postponing the trace processing as much as possible and providing a minimal UI while the capture is ongoing. But it may still be too much.

How to avoid this problem?

  • The capturing UI should not run on the system we are tracing. We can run the UI on a different computer to capture the trace.
  • The same goes for storage: we want it to happen on a different computer.

The solution for both is tracing remotely: both the user interface for controlling the recording and the recording storage live on a different computer.

Setting up remote tracing support #

First, some nomenclature I will use:

  • Target device: it is the one that runs the Chromium web runtime instance we are going to trace.
  • Host device: the device that will run the tracing UI, to configure, start and stop the recording, and to explore the tracing results.

OK, now we know we want to remotely trace the Chromium instance on the target device. How can we do that? First, we need to connect our tracing tools running on the host to the Chromium instance on the target device.

This is done using the remote debugging port: a multi-purpose HTTP port provided by Chromium. This port is not only used for tracing; it also offers access to the Chrome Developer Tools.

The Chromium remote debugging port is disabled by default, but it can be enabled using the command line switch --remote-debugging-port=PORT in the target Chromium instance. This will open an HTTP port on the localhost interface that can be used to connect.

Why localhost? Because this interface does not provide any authentication or encryption, so it is unsafe. It is the user's responsibility to provide some security (e.g., by setting up an SSH tunnel between the host and the target device to connect to the remote debugging port).

Capturing traces with chrome://inspect #

The Chromium browser provides a solution for tracing remotely: just open the URL chrome://inspect on the host device. It provides this user interface:

First, the checkbox for Discover network targets needs to be set.

Then press the Configure… button to set the list of IP addresses and ports where we expect target remote debugging ports to be.

Do not forget to add to the list the endpoint that is accessible from the host device; i.e., in the case of an SSH tunnel from the host device to the target device port, it needs to be the host side of the tunnel.

In the case where we set up the host-side tunnel at port 10101, we will see this:

Then, just pressing the trace link will show the Chromium tracing UI, but connected to the target device Chromium instance.

Capturing traces with trace-chrome #

Over the last 8 years, I have been involved quite often in exploring the performance of Chromium on embedded devices, specifically for the LGE webOS web stack. In this problem space, Chromium's tracing capabilities are handy, providing a developer-oriented view of different metrics, including the time spent running known operations on specific threads.

At that time I did not know about chrome://inspect so I really did not have an easy way to collect Chromium traces from a different machine. This is important as one performance analysis principle is that collecting the information should be as lightweight as possible. Running the tracing UI in the same Chromium instance that is analyzed is against that principle.

The solution? I wrote a very simple NodeJS script that allows capturing a Chromium trace from the command line.

This is convenient for several reasons:

  • No need to launch the full tracing UI.
  • As the UI is completely detached from the capturing step, the trace is written straight to a file, and we are not affected by the instability of the tracing UI when handling the captured trace (not usually a problem, but it happens).
  • Easier to repeat tests for specific tracing categories, instead of manually enabling them in the tracing UI.

The script just provides an easy-to-use command-line interface on top of the already existing chrome-remote-interface NodeJS module.
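
In essence, a capture with chrome-remote-interface boils down to something like the following sketch (simplified; the real script supports more options and handles errors):

// sketch.js: minimal trace capture on top of chrome-remote-interface
const CDP = require('chrome-remote-interface');
const fs = require('fs');

async function capture({ host, port, categories, output }) {
  const client = await CDP({ host, port });
  const events = [];

  // Trace events are streamed in chunks while the recording is active
  client.Tracing.dataCollected(({ value }) => events.push(...value));
  client.Tracing.tracingComplete(() => {
    fs.writeFileSync(output, JSON.stringify({ traceEvents: events }));
    client.close();
  });

  await client.Tracing.start({ categories });

  // Stop on Ctrl+C; tracingComplete fires once all pending data has arrived
  process.on('SIGINT', () => client.Tracing.end());
}

capture({ host: 'localhost', port: 9999,
          categories: 'blink,cc,v8,toplevel',
          output: 'trace.json' });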

The project is open source, and available at github.com/jdapena/trace-chrome.

How to use trace-chrome #

Now, the instructions for using trace-chrome. The tool depends on having a working NodeJS environment on the host.

Installation #

First, clone the Github repository in the host:

git clone https://github.com/jdapena/trace-chrome.git

Then, install the dependencies:

cd trace-chrome
npm install

Running #

Now it is possible to try the tool. To get the command line help just run:

$ bin/trace-chrome --help
Usage: trace-chrome [options]

Options:
-H, --host <host> Remote debugging protocol host (default: "localhost")
-p, --port <port> Remote debugging protocool port (default: "9876")
-s, --showcategories Show categories
-O, --output <path> Output file (default: "")
-c, --categories <categories> Set categories (default: "")
-e, --excludecategories <categories> Exclude categories (default: "")
--systrace Enable systrace
--memory_dump_mode <mode> Memory dump mode (default: "")
--memory_dump_interval <interval_in_ms> Memory dump interval in ms (default: 2000)
--dump_memory_at_stop
-h, --help display help for command

To connect to a running Chromium instance's remote debugging port, the --host and --port parameters need to be used. In the examples I am going to use the port 9999 and the host localhost.

Warning

Note that, in this case, the parameter --host refers to the network address of the remote debugging port access point. It is not referring to the host machine where we run the script.

Getting the tracing categories #

First, to check which tracing categories are available, we can use the option --showcategories:

bin/trace-chrome --host localhost --port 9999 --showcategories

We will obtain a list like this:

AccountFetcherService
Blob
CacheStorage
Calculators
CameraStream
...

Recording a session #

Now, the most important step: recording a Chromium trace. To do this, we will provide a list of categories (parameter --categories), and a file path to record the trace (parameter --output):

bin/trace-chrome --host localhost --port 9999 \
--categories "blink,cc,gpu,renderer.scheduler,sequence_manager,v8,toplevel,viz" \
--output js_and_rendering.json

This will start recording. To stop recording, just press <Ctrl>+C, and the trace will be transferred and stored to the provided file path.

Tip

Which categories to use? Good presets for certain problem scopes can be obtained from Chrome. Just open chrome://tracing, press the Record button, and play with the predefined settings. At the bottom you will see the list of categories to pass for each of them.

Opening the trace file #

Now that the trace file has been obtained, it can be opened from Chrome or Chromium running on the host: load the URL chrome://tracing in a tab and press the Load button.

Tip

The traces are completely standalone, so they can be loaded on any other computer without any additional artifacts. This is useful, as those traces can be shared among developers or uploaded to a ticket tracker.

But, if you want to do that, do not forget to compress the trace first with gzip to make it smaller. chrome://tracing can open the compressed traces directly.
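
For example, for the trace recorded above:

gzip js_and_rendering.json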

Capturing memory infra dumps #

The script also supports periodic recording with the memory-infra system. This captures periodic dumps of the state of memory, with specific instrumentation in several categories.

To use it, add the category disabled-by-default-memory-infra, and pass the following parameters to configure the capture:

  • --memory_dump_mode <background|detailed|simple>: level of detail. background is designed to have almost no impact on execution, running very fast. light mode shows a few entries, while detailed is unlimited, and provides the most complete information.
  • --memory_dump_interval: the interval in milliseconds between snapshots.

Using npx #

For convenience, it is also possible to use trace-chrome with npx. It will install the script and the dependencies in the NPM cache, and run from them:

npx jdapena/trace-chrome --help

Examples #

  1. Record a trace of the categories for the Web Developer mode in Chrome Tracing UI:
bin/trace-chrome --host HOST --port PORT \ 
--categories "blink,cc,netlog,renderer.scheduler,sequence_manager,toplevel,v8" \
--output web_developer.json
  2. Record memory infrastructure snapshots every 10 seconds:
bin/trace-chrome --host HOST --port PORT \
--categories "disabled-by-default-memory-infra" --memory_dump_mode detailed \
--memory_dump_interval 10000 --output memory_infra.json

Wrapping up #

trace-chrome is a very simple tool, just providing a convenient command-line interface for interacting with remote Chromium instances. It is especially useful for tracing embedded devices.

It has been useful for me for years, on a number of platforms, from Windows to Linux, from desktop to low-end devices.

Try it!

March 25, 2025 12:00 AM

March 24, 2025

Igalia WebKit Team

WebKit Igalia Periodical #18

Update on what happened in WebKit in the week from March 17 to March 24.

Cross-Port 🐱

Limited the amount of data stored for certain elements of WebKitWebViewSessionState. This results in memory savings, and avoids oddly large objects which resulted in web view state being restored slowly.

Multimedia 🎥

GStreamer-based multimedia support for WebKit, including (but not limited to) playback, capture, WebAudio, WebCodecs, and WebRTC.

Reduced parsing overhead in incoming WebRTC video streams by reducing excessive tag events at startup and by avoiding the plugging of parser elements for already-parsed streams.

JavaScriptCore 🐟

The built-in JavaScript/ECMAScript engine for WebKit, also known as JSC or SquirrelFish.

Fixed an integer overflow when using wasm/gc on 32-bits.

Graphics 🖼️

Landed a change that fixes a few scenarios where the damage was not generated on layer property changes.

Releases 📦️

WebKitGTK 2.48.0 and WPE WebKit 2.48.0 have been released. While they may not look as exciting as the 2.46 series, which introduced the use of Skia for painting, they nevertheless include half a year of improvements. This development cycle focused on reworking internals, which brings modest performance improvements for all kinds of devices, but most importantly cleanups which will enable further improvements going forward.

For those who need more time to integrate newer releases, which we know can be a longer process for embedded device distributors, we have also published WPE WebKit 2.46.7 with a few stability and security fixes.

Accompanying these releases there is security advisory WSA-2025-0002 (GTK, WPE), which covers the solved security issues. Crucially, all three contain the fix for an issue known to be exploited in the wild, and therefore we strongly encourage updating.

As usual, bug reports are always welcome at the WebKit Bugzilla.

libsoup 3.6.5 has been released with bug fixes and security improvements.

That’s all for this week!

by Igalia WebKit Team at March 24, 2025 07:50 PM

March 20, 2025

Jesse Alama

Leaning In! 2025 from the organizer’s point of view

Lean­ing In! 2025 from the or­ga­niz­er’s point of view

Lean­ing In! 2025 has come and gone. How did it go?

The inspiration for doing Leaning In! came from the tutorial at BOBKonf 2024 by Joachim Breitner and David Christiansen. The tutorial room was full; in fact, it was overfull and not everyone who wanted to attend could attend. I'd kept my eye on Lean from its earliest days but lost the thread for a long time. The image I had of Lean came from its version 1 and 2 days, when the project was still closely aligned with the aims of homotopy type theory. I didn't know about Lean version 3. So when I opened my eyes and woke up, I was in the current era of Lean (version 4), with a great language, humongous standard library, and pretty sensible tooling. I was on board right away. As an organizer of Racketfest, I had some experience putting together (small) conferences, so I thought I'd give it a go with Lean.

I an­nounced the con­fer­ence a few months ago, so there wasn’t all that much time to find speak­ers and plan. Still, we had 33 peo­ple in the room. When I first start­ed plan­ning the work­shop, I thought there’d only be 10-12 peo­ple. This was my first time or­ga­niz­ing a Lean work­shop of any sort, so my ini­tial ex­pec­ta­tions were very mod­est. I booked a fair­ly small room at Spielfeld for that. Af­ter some en­cour­age­ment from Joachim, who po­lite­ly sug­gest­ed that 10-12 might be a bit too small, I re­quest­ed a some­what larg­er room, for up to 20 peo­ple. But as reg­is­tra­tions kept com­ing in, I need­ed to rene­go­ti­ate with Spielfeld. Ul­ti­mate­ly, they put us in their largest room (a more ap­pro­pri­ate­ly sized room ex­ists but had al­ready been booked). The room we were in was some­what too big, but I’m glad we had the space.

Lean is a de­light­ful mix of pro­gram ver­i­fi­ca­tion and math­e­mat­ics for­mal­iza­tion. That was re­flect­ed in the pro­gram. We had three talks,

that, I’d say, were def­i­nite­ly more in the com­put­er sci­ence camp. With Lean, it’s not so clear at times. Lukas’s talk was mo­ti­vat­ed by some ap­pli­ca­tions com­ing from com­put­er sci­ence but the top­ic makes sense on its own and could have been tak­en up by a math­e­mati­cian. The open­ing talk, Re­cur­sive de­f­i­n­i­tions, by Joachim Bre­it­ner, was about the in­ter­nals of Lean it­self, so I think it doesn’t count as a talk on for­mal­iz­ing math­e­mat­ics. But it sort of was, in the sense that it was about the log­ic in the Lean ker­nel. It was com­put­er sci­ence-y, but it wasn’t real­ly about us­ing Lean, but more about bet­ter un­der­stand­ing how Lean works un­der the hood.

It is clear that mathematics formalization in Lean is very much ready for research-level mathematics. The mathematics library is very well developed, and Lean is fast enough, with good enough tooling, to enable mathematicians to do serious stuff. We are light years past noodling about the Peano axioms or How do I formalize a group?. I have a gut feeling that we may be approaching a point in the near future where Lean might become a common way of doing mathematics.

What didn’t go so well

The part of the event that prob­a­bly didn’t go quite as I had planned was the Proof Clin­ic in the af­ter­noon. The in­ten­tion of the proof clin­ic was to take ad­van­tage of the fact that many of us had come to Berlin to meet face-to-face, and there were sev­er­al ex­perts in the room. Let’s work to­geth­er! If there’s any­thing you’re stuck on, let’s talk about it and make some progress, to­day. Think of it as a sort of mi­cro-un­con­fer­ence (just one hour long) with­in a work­shop.

That sounds good, but I didn’t pre­pare the at­ten­dees well enough. I only start­ed adding top­ics to the list of po­ten­tial dis­cus­sion items in the morn­ing, and I was the only one adding them. Pri­vate­ly, I had a few dis­cus­sion items in my back pock­et, but they were in­tend­ed just to get the con­ver­sa­tion go­ing. My idea was that once we prime the pump, we’ll have all sorts of things to talk about.

That’s not quite what hap­pened. We did, ul­ti­mate­ly, dis­cuss a few in­ter­est­ing things but it took a while for us to warm up. Also, do­ing the proof clin­ic as a sin­gle large group might not have been the best idea. Per­haps we should have split up into groups and tried to work to­geth­er that way.

I also learned that sev­er­al at­ten­dees don’t use Zulip, so my as­sump­tion that Zulip is the one and only way for peo­ple to com­mu­ni­cate about Lean wasn’t quite right. I could have done bet­ter com­mu­ni­ca­tion with at­ten­dees in ad­vance to make sure that we co­or­di­nate dis­cus­sion in Zulip, in­stead of sim­ply as­sum­ing that, of course, every­one is there.

The fu­ture

Will there be another edition of Leaning In!?

Yes, I think so. It's a lot of work to or­ga­nize a con­fer­ence (and there's al­ways more to do, even when you know that there's a lot!). But the com­mu­ni­ty ben­e­fits are clear. Stay tuned!

March 20, 2025 08:43 AM

March 19, 2025

Jesse Alama

Announcing decimal128: JavaScript implementation of Decimal128

An­nounc­ing dec­i­mal128: JavaScript im­ple­men­ta­tion of Dec­i­mal128

I’m hap­py to an­nounce dec­i­mal128.js, an NPM pack­age I made for sim­u­lat­ing IEEE 754 Dec­i­mal128 num­bers in JavaScript.

(This is my first NPM pack­age. I made it in Type­Script; it’s my first go at the lan­guage.)

What?

Decimal128 is an IEEE standard for floating-point decimal numbers. These numbers aren't the binary floating-point numbers that you know and love (?), but decimal numbers. You know, the kind we learn about before we're even ten years old. In the binary world, things like 0.1 + 0.2 aren't exactly equal to 0.3, and calculations like 0.7 * 1.05 don't work out to exactly 0.735. These kinds of numbers are what we use when doing all sorts of everyday calculations, especially those having to do with money.

Decimal128 encodes decimal numbers into 128 bits. It is a fixed-width encoding, unlike arbitrary-precision numbers, which, of course, require an arbitrary amount of space. The encoding can represent numbers with up to 34 significant digits and an exponent of –6143 to 6144. That is a truly vast amount of space if one keeps the intended use cases involving human-readable and -writable numbers (read: money) in mind.

Why?

I’m work­ing on ex­tend­ing the JavaScript lan­guage with dec­i­mal num­bers (pro­pos­al-dec­i­mal). One of the de­sign de­ci­sions that has to be made there is whether to im­ple­ment ar­bi­trary-pre­ci­sion dec­i­mal num­bers or to im­ple­ment some kind of ap­prox­i­ma­tion there­of, with Dec­i­mal128 be­ing the main con­tender. As far as I could tell, there was no im­ple­men­ta­tion of Dec­i­mal128 in JavaScript, so I made one.

The in­ten­tion isn’t to sup­port the full Dec­i­mal128 stan­dard, nor should one ex­pect to achieve the per­for­mance that, say, a C/C++ li­brary would give you in user­land JavaScript. (To say noth­ing of hav­ing ma­chine-na­tive dec­i­mal in­struc­tions, which is tru­ly ex­ot­ic.) The in­ten­tion is to give JavaScript de­vel­op­ers some­thing that gen­uine­ly strives to ap­prox­i­mate Dec­i­mal128 for JS pro­grams.

In par­tic­u­lar, the hope is that this li­brary of­fers the JS com­mu­ni­ty a chance to get a feel for what Dec­i­mal128 might be like.

How to use

Just do

$ npm in­stall dec­i­mal128

and start us­ing the pro­vid­ed Dec­i­mal128 class.
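
A minimal sketch of what that can look like (the ESM import style and the add/toString methods shown here are assumptions; check the package README for the exact API):

import { Decimal128 } from "decimal128";

const price = new Decimal128("0.10");
const tax = new Decimal128("0.20");

// Exact decimal addition: 0.3, with none of the 0.30000000000000004 noise
console.log(price.add(tax).toString());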

Is­sues?

If you find any bugs or would like to re­quest a fea­ture, just open an is­sue and I’ll get on it.

March 19, 2025 10:11 AM

The decimals around us: Cataloging support for decimal numbers

The dec­i­mals around us: Cat­a­loging sup­port for dec­i­mal num­bers

Decimal numbers are a data type that aims to exactly represent decimal numbers. Some programmers may not know, or fully realize, that, in most programming languages, the numbers that you enter look like decimal numbers but internally are represented as binary—that is, base-2—floating-point numbers. Things that are totally simple for us, such as 0.1, simply cannot be represented exactly in binary. The decimal data type—whatever its stripe or flavor—aims to remedy this by giving us a way of representing and working with decimal numbers, not binary approximations thereof. (Wikipedia has more.)

To help with my work on adding dec­i­mals to JavaScript, I've gone through a list of pop­u­lar pro­gram­ming lan­guages, tak­en from the 2022 Stack­Over­flow de­vel­op­er sur­vey. What fol­lows is a brief sum­ma­ry of where these lan­guages stand re­gard­ing dec­i­mals. The in­ten­tion is to keep things sim­ple. The pur­pose is:

  1. If a lan­guage does have dec­i­mals, say so;
  2. If a lan­guage does not have dec­i­mals, but at least one third-par­ty li­brary ex­ists, men­tion it and link to it. If a dis­cus­sion is un­der­way to add dec­i­mals to the lan­guage, link to that dis­cus­sion.

There is no intention to filter out any language in particular; I'm just working with a slice of languages found in the StackOverflow list linked to earlier. If a language does not have decimals, there may well be multiple third-party decimal libraries. I'm not aware of all libraries, so if I have linked to a minor library and neglected to link to a more high-profile one, please let me know. More importantly, if I have misrepresented the basic fact of whether decimals exist at all in a language, send mail.

C

C does not have decimals. But they're working on it! The C23 (as in, 2023) standard proposes to add new fixed bit-width data types (32, 64, and 128 bits) for these numbers.

C#

C# has dec­i­mals in its un­der­ly­ing .NET sub­sys­tem. (For the same rea­son, dec­i­mals also ex­ist in Vi­su­al Ba­sic.)

C++

C++ does not have dec­i­mals. But—like C—they're work­ing on it!

Dart

Dart does not have dec­i­mals. But a third-par­ty li­brary ex­ists.

Go

Go does not have dec­i­mals, but a third-par­ty li­brary ex­ists.

Java

Java has dec­i­mals.

JavaScript

JavaScript does not have dec­i­mals. We're work­ing on it!

Kotlin

Kotlin does not have dec­i­mals. But, in a way, it does: since Kotlin is run­ning on the JVM, one can get dec­i­mals by us­ing Java's built-in sup­port.

PHP

PHP does not have dec­i­mals. An ex­ten­sion ex­ists and at least one third-par­ty li­brary ex­ists.

Python

Python has dec­i­mals.

Ruby

Ruby has dec­i­mals. De­spite that, there is some third-par­ty work to im­prove the built-in sup­port.

Rust

Rust does not have dec­i­mals, but a crate ex­ists.

SQL

SQL has dec­i­mals (it is the DEC­I­MAL data type). (Here is the doc­u­men­ta­tion for, e.g., Post­greSQL, and here is the doc­u­men­ta­tion for MySQL.)

Swift

Swift has decimals.

Type­Script

Type­Script does not have dec­i­mals. How­ev­er, if dec­i­mals get added to JavaScript (see above), Type­Script will prob­a­bly in­her­it dec­i­mals, even­tu­al­ly.

March 19, 2025 10:10 AM

Here’s how to unbreak floating-point math in JavaScript

Here’s how to un­break float­ing-point math in JavaScript

Be­cause com­put­ers are lim­it­ed, they work in a fi­nite range of num­bers, name­ly, those that can be rep­re­sent­ed straight­for­ward­ly as fixed-length (usu­al­ly 32 or 64) se­quences of bits. If you’ve only got 32 or 64 bits, it’s clear that there are only so many num­bers you can rep­re­sent, whether we’re talk­ing about in­te­gers or dec­i­mals. For in­te­gers, it’s clear that there’s a way to ex­act­ly rep­re­sent math­e­mat­i­cal in­te­gers (with­in the fi­nite do­main per­mit­ted by 32 or 64 bits). For dec­i­mals, we have to deal with the lim­its im­posed by hav­ing only a fixed num­ber of bits: most dec­i­mal num­bers can­not be ex­act­ly rep­re­sent­ed. This leads to headaches in all sorts of con­texts where dec­i­mals arise, such as fi­nance, sci­ence, en­gi­neer­ing, and ma­chine learn­ing.

It has to do with our use of base 10 and the computer's use of base 2. Math strikes again! Exactness of decimal numbers isn't an abstruse, edge case-y problem that some mathematicians thought up to poke fun at programmers and engineers who aren't blessed to work in an infinite domain. Consider a simple example. Fire up your favorite JavaScript engine and evaluate this:

1 + 2 === 3

You should get true. Duh. But take that ex­am­ple and work it with dec­i­mals:

0.1 + 0.2 === 0.3

You’ll get false.

How can that be? Is float­ing-point math bro­ken in JavaScript? Short an­swer: yes, it is. But if it’s any con­so­la­tion, it’s not just JavaScript that’s bro­ken in this re­gard. You’ll get the same re­sult in all sorts of oth­er lan­guages. This isn’t wat. This is the un­avoid­able bur­den we pro­gram­mers bear when deal­ing with dec­i­mal num­bers on ma­chines with lim­it­ed pre­ci­sion.

Maybe you're thinking OK, but if that's right, how in the world do decimal numbers get handled at all? Think of all the financial applications out there that must be doing the wrong thing countless times a day. You're quite right! One way of getting around oddities like the one above is by always rounding. Another is to handle decimal numbers as strings (sequences of digits). You would then define operations such as addition, multiplication, and equality by doing elementary school math, digit by digit (or, rather, character by character).

So what to do?

Num­bers in JavaScript are sup­posed to be IEEE 754 float­ing-point num­bers. A con­se­quence of this is, ef­fec­tive­ly, that 0.1 + 0.2 will nev­er be 0.3 (in the sense of the === op­er­a­tor in JavaScript). So what can be done?

There’s an npm li­brary out there, dec­i­mal.js, that pro­vides sup­port for ar­bi­trary pre­ci­sion dec­i­mals. There are prob­a­bly oth­er li­braries out there that have sim­i­lar or equiv­a­lent func­tion­al­i­ty.
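
For example, with decimal.js the comparison from above behaves the way you'd expect (a tiny sketch; see the library's documentation for the full API):

import Decimal from "decimal.js";

new Decimal("0.1").plus("0.2").equals("0.3"); // true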

As you might imag­ine, the is­sue un­der dis­cus­sion is old. There are workarounds us­ing a li­brary.

But what about ex­tend­ing the lan­guage of JavaScript so that the equa­tion does get val­i­dat­ed? Can we make JavaScript work with dec­i­mals cor­rect­ly, with­out us­ing a li­brary?

Yes, we can.

Aside: Huge in­te­gers

It’s worth think­ing about a sim­i­lar is­sue that also aris­es from the finite­ness of our ma­chines: ar­bi­trar­i­ly large in­te­gers in JavaScript. Out of the box, JavaScript didn’t sup­port ex­treme­ly large in­te­gers. You’ve got 32-bit or (more like­ly) 64-bit signed in­te­gers. But even though that’s a big range, it’s still, of course, lim­it­ed. Big­Int, a pro­pos­al to ex­tend JS with pre­cise­ly this kind of thing, reached Stage 4 in 2019, so it should be avail­able in pret­ty much every JavaScript en­gine you can find. Go ahead and fire up Node or open your brows­er’s in­spec­tor and plug in the num­ber of nanosec­onds since the Big Bang:

13_787_000_000_000n // years
* 365n              // days
* 24n               // hours
* 60n               // min­utes
* 60n               // sec­onds
* 1000n             // mil­lisec­onds
* 1000n             // mi­crosec­onds
* 1000n             // nanosec­onds

(Not a sci­en­ti­cian. May not be true. Not in­tend­ed to be a fac­tu­al claim.)

Adding big dec­i­mals to the lan­guage

OK, enough about big integers. What about adding support for arbitrary precision decimals in JavaScript? Or, at least, high-precision decimals? As we see above, we don't even need to wrack our brains trying to think of complicated scenarios where a ton of digits after the decimal point are needed. Just look at 0.1 + 0.2 = 0.3. That's pretty low-precision, and it still doesn't work. Is there anything analogous to BigInt for non-integer decimal numbers? Not built into the language; as a library, yes, we already discussed that. Can we add it to the language, so that, out of the box—with no third-party library—we can work with decimals?

The answer is yes. Work is proceeding on this matter, but things remain unsettled. The relevant proposal is BigDecimal. I'll be working on this for a while. I want to get big decimals into JavaScript. There are all sorts of issues to resolve, but they're definitely resolvable. We have experience with arbitrary precision arithmetic in other languages. It can be done.

So yes, float­ing-point math is bro­ken in JavaScript, but help is on the way. You’ll see more from me here as I tack­le this in­ter­est­ing prob­lem; stay tuned!

March 19, 2025 10:10 AM

Binary floats can let us down! When close enough isn't enough

Bi­na­ry floats can let us down! When close enough isn't enough

If you've played Monopoly, you'll know about the Bank Error in Your Favor card in the Community Chest. Remember this?

Card from the game Monopoly: Bank error in your favor!

A bank er­ror in your fa­vor? Sweet! But what if the bank makes an er­ror in its fa­vor? Sure­ly that's just as pos­si­ble, right?

I'm here to tell you that if you're doing everyday financial calculations—nothing fancy, but involving money that you care about—then you might need to know that, if you're using binary floating-point numbers, something might be going wrong. Let's see how binary floating-point numbers might yield bank errors in your favor—or the bank's.

In a wonderful paper on decimal floating-point numbers, Mike Cowlishaw gives an example.

Here's how you can re­pro­duce that in JavaScript:

(1.05 * 0.7).to­Pre­ci­sion(2);
# 0.73

Some pro­gram­mers might not be aware of this, but many are. By point­ing this out I'm not try­ing to be a smar­ty­pants who knows some­thing you don't. For me, this ex­am­ple il­lus­trates just how com­mon this sort of er­ror might be.

For programmers who are aware of the issue, one typical approach to dealing with it is this: Never work with sub-units of a currency. (Some currencies don't have this issue. If that's you and your problem domain, you can kick back and be glad that you don't need to engage in the following sorts of headaches.) For instance, when working with US dollars or euros, this approach mandates that one never works with dollars and cents, but only with cents. In this setting, dollars exist only as an abstraction on top of cents. As far as possible, calculations never use floats. But if a floating-point number threatens to come up, some form of rounding is used.
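
A sketch of that idea in JavaScript, using the numbers from the example above (reading 1.05 * 0.7 as a 5% tax on a 0.70 charge):

// Work in integer cents; convert to a display string only at the end
const chargeCents = 70;                                  // 0.70
const taxedCents = Math.round(chargeCents * 105 / 100);  // 5% tax -> 74
console.log((taxedCents / 100).toFixed(2));              // "0.74"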

Another approach for a programmer is to delegate financial calculations to an external system, such as a relational database, that natively supports proper decimal calculations. One difficulty is that even if one delegates these calculations to an external system, if one lets a floating-point value flow into your program, even a value that can be trusted, it may become tainted just by being imported into a language that doesn't properly support decimals. If, for instance, the result of a calculation done in, say, Postgres, is exactly 0.1, and that flows into your JavaScript program as a number, it's possible that you'll be dealing with a contaminated value. For instance:

(0.1).to­Pre­ci­sion(25)
# 0.1000000000000000055511151

This ex­am­ple, ad­mit­ted­ly, re­quires quite a lot of dec­i­mals (19!) be­fore the ugly re­al­i­ty of the sit­u­a­tion rears its head. The re­al­i­ty is that 0.1 does not, and can­not, have an ex­act rep­re­sen­ta­tion in bi­na­ry. The ear­li­er ex­am­ple with the cost of a phone call is there to raise your aware­ness of the pos­si­bil­i­ty that one doesn't need to go 19 dec­i­mal places be­fore one starts to see some weird­ness show­ing up.

There are all sorts of ex­am­ples of this. It's ex­ceed­ing­ly rare for a dec­i­mal num­ber to have an ex­act rep­re­sen­ta­tion in bi­na­ry. Of the num­bers 0.1, 0.2, …, 0.9, only 0.5 can be ex­act­ly rep­re­sent­ed in bi­na­ry.

Next time you look at a bank state­ment, or a bill where some tax is cal­cu­lat­ed, I in­vite you to ask how that was cal­cu­lat­ed. Are they us­ing dec­i­mals, or floats? Is it cor­rect?

I'm working on the decimal proposal for TC39 to try to work out what it might be like to add proper decimal numbers to JavaScript. There are a few very interesting degrees of freedom in the design space (such as the precise datatype to be used to represent these kinds of number), but I'm optimistic that a reasonable path forward exists, that consensus between JS programmers and JS engine implementors can be found, and eventually implemented. If you're interested in these issues, check out the README in the proposal and get in touch!

March 19, 2025 10:09 AM

Getting started with Lean 4, your next programming language

Get­ting start­ed with Lean 4, your next pro­gram­ming lan­guage

I had the plea­sure of learn­ing about Lean 4 with David Chris­tiansen and Joachim Bre­it­ner at their tu­to­r­i­al at BOBKonf 2024. I‘m plan­ning on do­ing a cou­ple of for­mal­iza­tions with Lean and would love to share what I learn as a to­tal new­bie, work­ing on ma­cOS.

Need­ed tools

I'm on macOS and use Homebrew extensively. My simple go-to approach to finding new software is to do brew search lean. This revealed lean as well as surfaced elan. Running brew info lean showed me that that package (at the time I write this) installs Lean 3. But I know, out-of-band, that Lean 4 is what I want to work with. Running brew info elan looked better, but the output reminds me that (1) the information is for the elan-init package, not the elan cask, and (2) elan-init conflicts with both elan and the aforementioned lean. Yikes! This strikes me as a potential problem for the community, because I think Lean 3, though it still works, is presumably not where new Lean development should be taking place. Perhaps the Homebrew formula for Lean should be updated to be called lean3, and a new lean4 package should be made available. I'm not sure. The situation seems less than ideal, but in short, I have been successful with the elan-init package.

Af­ter in­stalling elan-init, you‘ll have the elan tool avail­able in your shell. elan is the tool used for main­tain­ing dif­fer­ent ver­sions of Lean, sim­i­lar to nvm in the Node.js world or pyenv.

Set­ting up a blank pack­age

When I did the Lean 4 tu­to­r­i­al at BOB, I worked en­tire­ly with­in VS Code and cre­at­ed a new stand­alone pack­age us­ing some in-ed­i­tor func­tion­al­i­ty. At the com­mand line, I use lake init to man­u­al­ly cre­ate a new Lean pack­age. At first, I made the mis­take of run­ning this com­mand, as­sum­ing it would cre­ate a new di­rec­to­ry for me and set up any con­fig­u­ra­tion and boil­er­plate code there. I was sur­prised to find, in­stead, that lake init sets things up in the cur­rent di­rec­to­ry, in ad­di­tion to cre­at­ing a sub­di­rec­to­ry and pop­u­lat­ing it. Us­ing lake --help, I read about the lake new com­mand, which does what I had in mind. So I might sug­gest us­ing lake new rather than lake init.
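
Concretely, that workflow looks something like this (assuming elan has installed a recent Lean 4 toolchain):

lake new foobar
cd foobar
lake build
lake exe foobar   # prints "Hello, world!"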

What‘s in the new di­rec­to­ry? Do­ing tree foo­bar re­veals

foo­bar
├── Foo­bar
│   └── Ba­sic.lean
├── Foo­bar.lean
├── Main.lean
├── lake­file.lean
└── lean-tool­chain

Tak­ing a look there, I see four .lean files. Here‘s what they con­tain:

Main.lean
im­port «Foo­bar»

def main : IO Unit :=
  IO.print­ln s!"Hel­lo, {hel­lo}!"
Foo­bar.lean
-- This mod­ule serves as the root of the `Foo­bar` li­brary.
-- Im­port mod­ules here that should be built as part of the li­brary.
im­port «Foo­bar».Ba­sic
Foo­bar/Ba­sic.lean
def hel­lo := "world"
lake­file.lean
im­port Lake
open Lake DSL

pack­age «foo­bar» where
  -- add pack­age con­fig­u­ra­tion op­tions here

lean_lib «Foo­bar» where
  -- add li­brary con­fig­u­ra­tion op­tions here

@[de­fault_tar­get]
lean_exe «foo­bar» where
  root := `Main

It looks like there‘s a lit­tle mod­ule struc­ture here, and a ref­er­ence to the iden­ti­fi­er hel­lo, de­fined in Foo­bar/Ba­sic.lean and made avail­able via Foo­bar.lean. I’m not go­ing to touch lake­file.lean for now; as a new­bie, it looks scary enough that I think I’ll just stick to things like Ba­sic.lean.

There‘s also an au­to­mat­i­cal­ly cre­at­ed .git there, not shown in the di­rec­to­ry out­put above.

Now what?

Now that you‘ve got Lean 4 in­stalled and set up a pack­age, you‘re ready to dive in to one of the of­fi­cial tu­to­ri­als. The one I‘m work­ing through is David‘s Func­tion­al Pro­gram­ming in Lean. There‘s all sorts of ad­di­tion­al things to learn, such as all the dif­fer­ent lake com­mands. En­joy!

March 19, 2025 10:09 AM

Announcing a polyfill for the TC39 decimal proposal

An­nounc­ing a poly­fill for the TC39 dec­i­mal pro­pos­al

I’m hap­py to an­nounce that the dec­i­mal pro­pos­al—a pro­posed ex­ten­sion of JavaScript to sup­port dec­i­mal num­bers—is now avail­able as an NPM pack­age called pro­pos­al-dec­i­mal!

(Ac­tu­al­ly, it has been avail­able for some time, made avail­able not long af­ter we de­cid­ed to pur­sue IEEE 754 Dec­i­mal128 as a data mod­el for the dec­i­mal pro­pos­al rather than some al­ter­na­tives. The old pack­age was—and still is—avail­able un­der a dif­fer­ent name—dec­i­mal128—but I’ll be sun­set­ting that pack­age in fa­vor of the new one an­nounced here. If you’ve been us­ing dec­i­mal128, you can con­tin­ue to use it, but you’ll prob­a­bly want to switch to pro­pos­al-dec­i­mal.)

To use pro­pos­al-dec­i­mal in your project, in­stall the NPM pack­age. If you’re look­ing to use this code in Node.js or oth­er JS en­gines that sup­port ESM, you'll want to im­port the code like this:

im­port { Dec­i­mal128 } from 'pro­pos­al-dec­i­mal';
con­st x = new Dec­i­mal128("0.1");
// etc.

For use in a brows­er, the file dist/Dec­i­mal128.mjs con­tains the Dec­i­mal128 class and all its in­ter­nal de­pen­den­cies in a sin­gle file. Use it like this:

<script type="mod­ule">
im­port { Dec­i­mal128 } from 'path/to/Dec­i­mal128.mjs';
con­st x = new Dec­i­mal128("0.1");
// keep rock­ing dec­i­mals!
</script>

The in­ten­tion of this poly­fill is to track the spec text for the dec­i­mal pro­pos­al. I can­not rec­om­mend this pack­age for pro­duc­tion use just yet, but it is us­able and I’d love to hear any ex­pe­ri­ence re­ports you may have. We’re aim­ing to be as faith­ful as pos­si­ble to the spec, so we don’t aim to be blaz­ing­ly fast. That said, please do re­port any wild de­vi­a­tions in per­for­mance com­pared to oth­er dec­i­mal li­braries for JS as an is­sue. Any crash­es or in­cor­rect re­sults should like­wise be re­port­ed as an is­sue.

En­joy!

March 19, 2025 10:09 AM

March 18, 2025

Manuel Rego

Two nice new additions by Igalia in the latest Safari Technology Preview.

March 18, 2025 12:00 AM

March 17, 2025

Igalia WebKit Team

WebKit Igalia Periodical #17

Update on what happened in WebKit in the week from March 10 to March 17.

Cross-Port 🐱

Web Platform 🌐

Updated button activation behaviour and type property reflection with command and commandfor. Also aligned popovertarget behaviour with latest specification.

Fixed reflection of command IDL property.

Implemented the trusted-types-eval keyword for the script-src CSP directive.

Implemented accessibility handling for command buttons.

Enabled command and commandfor in preview.

JavaScriptCore 🐟

The built-in JavaScript/ECMAScript engine for WebKit, also known as JSC or SquirrelFish.

Fixed an integer overflow in JSC (only happens on 32-bit systems with lots of RAM).

Graphics 🖼️

Fixed theming issues in WPE/WebKitGTK with vertical writing-modes.

That’s all for this week!

by Igalia WebKit Team at March 17, 2025 08:31 PM

March 12, 2025

Brian Kardell

11ty Math

11ty Math

Markdown, LaTex and ASCII - who writes HTML?

Very nearly none of the content that we encounter online is entirely hand authored from opening doctype to closing HTML tag - it's assembled. We have layouts, and includes and templates and so on. And, most of the actual content that we produce and consume is written in some more familiar or easier to write shorthand. For most of the people reading this, it's probably mostly markdown.

But did you know that lots of the places where you use markdown, like GitHub, GitLab, and Visual Studio, support embedding mathematical expressions written in LaTeX surrounded by $ (inline) or $$ (block)? Those are then transformed for you into rendered math with MathML.

It got me thinking that we should have a kind of similarly standard easy setup for 11ty. It would be a huge win to process it on the server: MathML will render natively, fast, without FOUC. It will be accessible, styleable, and scale appropriately with text-size and zoom and so on.

The super interesting thing to note about most of the tools where you can use markdown is that so many of them are built on common infrastructure: markdown-it. The architectural pattern of markdown-it allows people to write plugins, and if you're looking to match those above, you can do it pretty easily with the @mdit/plugin-katex:

/* eleventy.config.js */
const markdownIt = require("markdown-it");

module.exports = async function (eleventyConfig) {

  const { katex } = (await import("@mdit/plugin-katex"));
  eleventyConfig.setLibrary(
  	"md", 
  	markdownIt().use(katex, {output: "mathml"})
  );

}

And... That's it. Now you can embed LaTex math in your markdown just as you can in those other places and it will do the work of generating fast, native, accessible, and styleable MathML...

Some math  $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$ whee.

Yields...

<p>Some math <math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mfrac><msup><mi>x</mi><mn>2</mn></msup><msup><mi>a</mi><mn>2</mn></msup></mfrac><mo>+</mo><mfrac><msup><mi>y</mi><mn>2</mn></msup><msup><mi>b</mi><mn>2</mn></msup></mfrac><mo>=</mo><mn>1</mn></mrow><annotation encoding="application/x-tex">\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1</annotation></semantics></math></span> whee.</p>

Which your browser renders as...

Some math x²/a² + y²/b² = 1 whee. (the equation rendered inline as native MathML)

Surrounding the math part alone with $$ instead yields block math, which renders as..

x²/a² + y²/b² = 1 (rendered as block MathML)

Mathematical fonts are critical for rendering. We almost have universally good default math fonts, but some operating systems (eyes Android disapprovingly) still don't. However, like me you can include some CSS to help. On this page I've included <link href="https://fred-wang.github.io/MathFonts/STIX/mathfonts.css" />. You can read a lot more about this on MDN.

AsciiMath Math / Mathup

While LaTeX is by far the most common way that people author math, there are people who prefer AsciiMath. Rúnar Berg Baugsson Sigríðarson (@runarberg) has a nice explanation as to why they prefer to not use TeX.

His mathup package seems pretty nice (an AsciiMath dialect more than just AsciiMath), and there is also a corresponding markdown-it-math which is similarly easy to use...

/* eleventy.config.js */
const markdownIt = require("markdown-it");

module.exports = async function (eleventyConfig) {
  const markdownItMath = 
  	(await import('markdown-it-math')).default;

  eleventyConfig.setLibrary(
    "md", 
    markdownIt().use(markdownItMath)
  );
}

Then, you can embed AsciiMath in your markdown, fenced by the same $ or $$ and it will generate some nice MathML. For example...

$$
e = sum_(n=0)^oo 1/n!
$$

Will be transformed at build time and render in your browser as...

e = ∑_(n=0)^∞ 1/n! (rendered as block MathML)

Make sure you get markdown-it-math 5.0.0-rc.0 or above or this won't work. You might also consider including their stylesheet.

markdown-it-math also supports a nice pattern for easily integrating other engines for embedded transformations like Ron Kok's Temml or Fred Wang's TeXZilla.

Unicode Math

There is also Unicode Math, which Murray Sargent III developed and had integrated into all of the Microsoft products. It's pretty nifty too if you ask me. This repo has a nice comparison of the three.

Unfortunately there is no npm module for it (yet), so for now that remains an open wish.

So, that's it. Enjoy. Mathify your static sites.

Before you go... The work of Math rendering in browsers is severely under funded. Almost none of the effort or funding over the last 30 years to make this possible has come from browser vendors, but rather from individual contributors and those willing to help fund them. If you appreciate the importance of this work, please consider helping to support the work with a small monthly donation, and please help us to publicly lobby implementers to invest in it.

March 12, 2025 04:00 AM

Víctor Jáquez

Using pre-commit in GStreamer

Recently, the GStreamer development story integrated the usage of pre-commit. pre-commit is a Git hook script that chains different linters, checkers, validators, formatters, etc., which are executed at git commit. The script is written in Python. And there's another GStreamer utility in Python: hotdoc.

The challenge is that Debian doesn't allow installing Python packages through pip; they have to be installed as Debian packages or inside virtual environments, such as venv.

So, instead of manually activating a virtual environment whenever I work on GStreamer, let's just use direnv to activate it automatically.

Here's a screencast of what I did to set up a Python virtual environment, within direnv, and install pre-commit, hotdoc and gst-indent-1.0.
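
For reference, a rough sketch of that setup (using the package names mentioned above; adjust paths to your checkout):

python3 -m venv .venv
echo 'source .venv/bin/activate' > .envrc   # or use direnv's "layout python"
direnv allow
pip install pre-commit hotdoc gst-indent-1.0
pre-commit install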

UPDATE: Tim told me that with pipx we can do the same without the venv hassle.

https://mastodon.social/@tp_muller@fosstodon.org/114150178786863565

March 12, 2025 12:00 AM

March 11, 2025

Ricardo García

Device-Generated Commands at Vulkanised 2025

A month ago I attended Vulkanised 2025 in Cambridge, UK, to present a talk about Device-Generated Commands in Vulkan. The event was organized by Khronos and took place in the Arm Cambridge office. The talk I presented was similar to the one from XDC 2024, but instead of being a lightning 5-minutes talk, I had 25-30 minutes to present and I could expand the contents to contain proper explanations of almost all major DGC concepts that appear in the spec.

I attended the event together with my Igalia colleagues Lucas Fryzek and Stéphane Cerveau, who presented about lavapipe and Vulkan Video, respectively. We had a fun time in Cambridge and I can sincerely recommend attending the event to any Vulkan enthusiasts out there. It allows you to meet Khronos members and people working on both the specification and drivers, as well as many other Vulkan users from a wide variety of backgrounds.

The recordings for all sessions are now publicly available, and the one for my talk can be found embedded below. For those of you preferring slides and text, I’m also providing a transcription of my presentation together with slide screenshots further down.

In addition, at the end of the video there’s a small Q&A section but I’ve always found it challenging to answer questions properly on the fly and with limited time. For this reason, instead of transcribing the Q&A section literally, I’ve taken the liberty of writing down the questions and providing better answers in written form, and I’ve also included an extra question that I got in the hallways as bonus content. You can find the Q&A section right after the embedded video.

Vulkanised 2025 recording

Video: Vulkanised 2025: Device-Generated Commands in Vulkan

Questions and answers with longer explanations

Question: can you give an example of when it’s beneficial to use Device-Generated Commands?

There are two main use cases where DGC would improve performance: on the one hand, many times game engines use compute pre-passes to analyze the scene they want to draw and prepare some data for that scene. This includes maybe deciding LOD levels, discarding content, etc. After that compute pre-pass, results would need to be analyzed from the CPU in some way. This implies a stall: the output from that compute pre-pass needs to be transferred to the CPU so the CPU can use it to record the right drawing commands, or maybe you do this compute pre-pass during the previous frame and it contains data that is slightly out of date. With DGC, this compute dispatch (or set of compute dispatches) could generate the drawing commands directly, so you don’t stall or you can use more precise data. You also save some memory bandwidth because you don’t need to copy the compute results to host-visible memory.

On the other hand, sometimes scenes contain so much detail and geometry that recording all the draw calls from the CPU takes a nontrivial amount of time, even if you distribute this draw call recording among different threads. With DGC, the GPU itself can generate these draw calls, so potentially it saves you a lot of CPU time.

Question: as the extension makes heavy use of buffer device addresses, what are the challenges for tools like GFXReconstruct when used to record and replay traces that use DGC?

The extension makes use of buffer device addresses for two separate things. First, it uses them to pass some buffer information to different API functions, instead of passing buffer handles, offsets and sizes. This is not different from other APIs that existed before. The VK_KHR_buffer_device_address extension contains APIs like vkGetBufferOpaqueCaptureAddressKHR, vkGetDeviceMemoryOpaqueCaptureAddressKHR that are designed to take care of those cases and make it possible to record and reply those traces. Contrary to VK_KHR_ray_tracing_pipeline, which has a feature to indicate if you can capture and replay shader group handles (fundamental for capture and replay when using ray tracing), DGC does not have any specific feature for capture-replay. DGC does not add any new problem from that point of view.

Second, the data for some commands that is stored in the DGC buffer sometimes includes device addresses. This is the case for the index buffer bind command, the vertex buffer bind command, indirect draws with count (double indirection here) and ray tracing command. But, again, the addresses in those commands are buffer device addresses. That does not add new challenges for capture and replay compared to what we already had.

Question: what is the deal with the last token being the one that dispatches work?

One minor detail from DGC, that’s important to remember, is that, by default, DGC respects the order in which sequences appear in the DGC buffer and the state used for those sequences. If you have a DGC buffer that dispatches multiple draws, you know the state that is used precisely for each draw: it’s the state that was recorded before the execute-generated-commands call, plus the small changes that a particular sequence modifies like push constant values or vertex and index buffer binds, for example. In addition, you know precisely the order of those draws: executing the DGC buffer is equivalent, by default, to recording those commands in a regular command buffer from the CPU, in the same order they appear in the DGC buffer.

However, when you create an indirect commands layout you can indicate that the sequences in the buffer may run in an undefined order (this is VK_INDIRECT_COMMANDS_LAYOUT_USAGE_UNORDERED_SEQUENCES_BIT_EXT). If the sequences could dispatch work and then change state, we would have a logical problem: what do those state changes affect? The sequence that is executed right after the current one? Which one is that? We would not know the state used for each draw. Forcing the work-dispatching command to be the last one is much easier to reason about and is also logically tight.

Naturally, if you have a series of draws on the CPU where, for some of them, you change some small bits of state (e.g. like disabling the depth or stencil tests) you cannot do that in a single DGC sequence. For those cases, you need to batch your sequences in groups with the same state (and use multiple DGC buffers) or you could use regular draws for parts of the scene and DGC for the rest.

Question from the hallway: do you know what drivers do exactly at preprocessing time that is so important for performance?

Most GPU drivers these days have a kernel side and a userspace side. The kernel driver does a lot of things like talking to the hardware, managing different types of memory and buffers, talking to the display controller, etc. The kernel driver normally also has facilities to receive a command list from userspace and send it to the GPU.

These command lists are particular for each GPU vendor and model. The packets that form them control different aspects of the GPU. For example (this is completely made-up), maybe one GPU has a particular packet to modify depth buffer and test parameters, and another packet for the stencil test and its parameters, while another GPU from another vendor has a single packet that controls both. There may be another packet that dispatches draw work of all kinds and is flexible to accommodate the different draw commands that are available on Vulkan.

The Vulkan userspace driver translates Vulkan command buffer contents to these GPU-specific command lists. In many drivers, the preprocessing step in DGC takes the command buffer state, combines it with the DGC buffer contents and generates a final command list for the GPU, storing that final command list in the preprocess buffer. Once the preprocess buffer is ready, executing the DGC commands is only a matter of sending that command list to the GPU.

Talk slides and transcription

Slide 1: Device-Generated Commands in Vulkan title

Hello, everyone! I’m Ricardo from Igalia and I’m going to talk about device-generated commands in Vulkan.

Slide 2: About Me

First, some bits about me. I have been part of the graphics team at Igalia since 2019. For those that don’t know us, Igalia is a small consultancy company specialized in open source and my colleagues in the graphics team work on things such as Mesa drivers, Linux kernel drivers, compositors…​ that kind of things. In my particular case the focus of my work is contributing to the Vulkan Conformance Test Suite and I do that as part of a collaboration between Igalia and Valve that has been going on for a number of years now. Just to highlight a couple of things, I’m the main author of the tests for the mesh shading extension and device-generated commands that we are talking about today.

Slide 3: What are Device-Generated Commands

So what are device-generated commands? So basically it’s a new extension, a new functionality, that allows a driver to read command sequences from a regular buffer: something like, for example, a storage buffer, instead of the usual regular command buffers that you use. The contents of the DGC buffer could be filled from the GPU itself. This is what saves you the round trip to the CPU and, that way, you can improve the GPU-driven rendering process in your application. It’s like one step ahead of indirect draws and dispatches, and one step behind work graphs. And it’s also interesting because device-generated commands provide a better foundation for translating DX12. If you have a translation layer that implements DX12 on top of Vulkan like, for example, Proton, and you want to implement ExecuteIndirect, you can do that much more easily with device generated commands. This is important for Proton, which Valve uses to run games on the Steam Deck, i.e. Windows games on top of Linux.

Slide 4: Naïve CPU-based Approach

If we set aside Vulkan for a moment, and we stop thinking about GPUs and such, and you want to come up with a naive CPU-based way of running commands from a storage buffer, how do you do that? Well, one immediate solution we can think of is: first of all, I’m going to assign a token, an identifier, to each of the commands I want to run, and I’m going to store that token in the buffer first. Then, depending on what the command is, I want to store more information.

For example, if we have a sequence like we see here in the slide where we have a push constant command followed by dispatch, I’m going to store the token for the push constants command first, then I’m going to store some information that I need for the push constants command, like the pipeline layout, the stage flags, the offset and the size. Then, after that, depending on the size that I said I need, I am going to store the data for the command, which is the push constant values themselves. And then, after that, I’m done with it, and I store the token for the dispatch, and then the dispatch size, and that’s it.

But this doesn’t really work: this is not how GPUs work. A GPU would have a hard time running commands from a buffer if we store them this way. And this is not how Vulkan works because in Vulkan you want to provide as much information as possible in advance and you want to make things run in parallel as much as possible, and take advantage of the GPU.

Slide 5: VK_EXT_device_generated_commands

So what do we do in Vulkan? In Vulkan, and in the Vulkan VK_EXT_device_generated_commands extension, we have this central concept, which is called the Indirect Commands Layout. This is the main thing, and if you want to remember just one thing about device generated commands, you can remember this one.

The indirect commands layout is basically like a template for a short sequence of commands. The way you build this template is using the tokens and the command information that we saw colored red and green in the previous slide, and you build that in advance and pass that in advance so that, in the end, in the command buffer itself, in the buffer that you’re filling with commands, you don’t need to store that information. You just store the data for each command. That’s how you make it work.

And the result of this is that with the commands layout, that I said is a template for a short sequence of commands (and by short I mean a handful of them like just three, four or five commands, maybe 10), the DGC buffer can be pretty large, but it does not contain a random sequence of commands where you don’t know what comes next. You can think about it as divided into small chunks that the specification calls sequences, and you get a large number of sequences stored in the buffer but all of them follow this template, this commands layout. In the example we had, push constant followed by dispatch, the contents of the buffer would be push constant values, dispatch size, push content values, dispatch size, many times repeated.

Slide 6: Restricted Command Selection

The second thing that Vulkan does to be able to make this work is that we limit a lot what you can do with device-generated commands. There are a lot of things you cannot do. In fact, the only things you can do are the ones that are present in this slide.

You have some things like, for example, update push constants, you can bind index buffers, vertex buffers, and you can draw in different ways, using mesh shading maybe, you can dispatch compute work and you can dispatch raytracing work, and that’s it. You also need to check which features the driver supports, because maybe the driver only supports device-generated commands for compute or ray tracing or graphics. But you notice you cannot do things like start render passes or insert barriers or bind descriptor sets or that kind of thing. No, you cannot do that. You can only do these things.

Slide 7: Indirect Commands Layout

This indirect commands layout, which is the backbone of the extension, specifies, as I said, the layout for each sequence in the buffer and it has additional restrictions. The first one is that it must specify exactly one token that dispatches some kind of work and it must be the last token in the sequence. You cannot have a sequence that dispatches graphics work twice, or that dispatches compute work twice, or that dispatches compute first and then draws, or something like that. No, you can only do one thing with each DGC buffer and each commands layout and it has to be the last one in the sequence.

One interesting thing Vulkan allows you to do, and DX12 doesn't, is that it lets you (on some drivers; you need to check the properties for this) choose which shaders you want to use for each sequence. This is a restricted version of the bind pipeline command in Vulkan. You cannot choose arbitrary pipelines and you cannot change arbitrary state, but you can switch shaders. For example, if you want to use a different fragment shader for each of the draws in the sequence, you can do that. This is pretty powerful.

Slide 8: Indirect Commands Layout creation structure

How do you create one of those indirect commands layouts? Well, with one of those typical Vulkan calls that creates an object and takes one of the CreateInfo structures that are always present in Vulkan.

And, as you can see, you have to pass the shader stages that will be used, that will be active, while you draw or execute those indirect commands. You have to pass the pipeline layout, and you have to pass an indirect stride. The stride is the number of bytes for each sequence, from the start of one sequence to the next. And the most important information, of course, is the list of tokens: an array of tokens that you pass as a token count and a pointer to the first element.

Slide 9: Indirect Commands Layout layout token struct

Now, each of those tokens contains a bit of information, and the most important one is the type, of course. Then you can also pass an offset that tells the driver how many bytes into the sequence the data for that command starts. Together with the stride, this means you don't need to pack the data for those commands tightly. If you want to include some padding, because it's convenient or something, you can do that.

Slide 10: Indirect Commands Layout token data

And then there's also the token data, which allows you to pass the information that I was painting in green in the other slides: information needed to run the command, some extra parameters. Only a few tokens, a few commands, need that. Depending on which command it is, you have to fill one of the pointers in the union, but most commands don't need this kind of information. Knowing which command it is, you just know you are going to find some fixed data in the buffer, and you just read that and process it.
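As a rough sketch (not production code, and assuming the MySequence struct from earlier, plus a device and pipeline layout that already exist), building the commands layout for the push-constant-plus-dispatch example could look like this:

#include <stddef.h> /* offsetof */

VkPushConstantRange pcRange = {
    .stageFlags = VK_SHADER_STAGE_COMPUTE_BIT,
    .offset = 0,
    .size = sizeof(float) * 4,
};
VkIndirectCommandsPushConstantTokenEXT pcToken = { .updateRange = pcRange };

VkIndirectCommandsLayoutTokenEXT tokens[2] = {
    {
        .sType  = VK_STRUCTURE_TYPE_INDIRECT_COMMANDS_LAYOUT_TOKEN_EXT,
        .type   = VK_INDIRECT_COMMANDS_TOKEN_TYPE_PUSH_CONSTANT_EXT,
        .data   = { .pPushConstant = &pcToken },
        .offset = offsetof(MySequence, pushConstantValues),
    },
    {
        .sType  = VK_STRUCTURE_TYPE_INDIRECT_COMMANDS_LAYOUT_TOKEN_EXT,
        .type   = VK_INDIRECT_COMMANDS_TOKEN_TYPE_DISPATCH_EXT,
        .offset = offsetof(MySequence, dispatch), /* dispatch needs no extra token data */
    },
};

VkIndirectCommandsLayoutCreateInfoEXT layoutInfo = {
    .sType          = VK_STRUCTURE_TYPE_INDIRECT_COMMANDS_LAYOUT_CREATE_INFO_EXT,
    .shaderStages   = VK_SHADER_STAGE_COMPUTE_BIT,
    .indirectStride = sizeof(MySequence),
    .pipelineLayout = pipelineLayout, /* assumed to exist */
    .tokenCount     = 2,
    .pTokens        = tokens,
};

VkIndirectCommandsLayoutEXT cmdsLayout;
vkCreateIndirectCommandsLayoutEXT(device, &layoutInfo, NULL, &cmdsLayout);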

Slide 11: Indirect Execution Sets

One thing that is interesting, like I said, is the ability to switch shaders and to choose which shaders are going to be used for each of those individual sequences. Some form of pipeline switching, or restricted pipeline switching. To do that you have to create something that is called Indirect Execution Sets.

Each of these execution sets is like a group or an array, if you want to think about it like that, of pipelines: similar pipelines or shader objects. They have to share something in common, which is that all of the state in the pipeline has to be identical, basically. Only the shaders can change.

When you create these execution sets and you start adding pipelines or shaders to them, you assign an index to each pipeline in the set. Then, you pass this execution set beforehand, before executing the commands, so that the driver knows which set of pipelines you are going to use. And then, in the DGC buffer, when you have this pipeline token, you only have to store the index of the pipeline that you want to use. You create the execution set with 20 pipelines and you pass an index for the pipeline that you want to use for each draw, for each dispatch, or whatever.

Slide 12: Indirect Execution Sets creation structures

The way to create the execution sets is the one you see here, where we have, again, one of those CreateInfo structures. There, we have to indicate the type, which is pipelines or shader objects. Depending on that, you have to fill one of the pointers from the union on the top right here.

If we focus on pipelines (on the bottom left) because it's easier, you have to pass the maximum pipeline count that you're going to store in the set and an initial pipeline. The initial pipeline is what sets the template that all pipelines in the set are going to conform to. They all have to share essentially the same state as the initial pipeline, and then you can change the shaders. With shader objects it's basically the same, but you have to pass more information for the shader objects, like the descriptor set layouts used by each stage, push-constant information... but it's essentially the same.
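A hedged sketch of creating such an execution set from pipelines (the handles are assumed to exist, and 20 is an arbitrary capacity):

VkIndirectExecutionSetPipelineInfoEXT pipelineSetInfo = {
    .sType            = VK_STRUCTURE_TYPE_INDIRECT_EXECUTION_SET_PIPELINE_INFO_EXT,
    .initialPipeline  = initialPipeline, /* sets the state "template" for the whole set */
    .maxPipelineCount = 20,
};

VkIndirectExecutionSetCreateInfoEXT setInfo = {
    .sType = VK_STRUCTURE_TYPE_INDIRECT_EXECUTION_SET_CREATE_INFO_EXT,
    .type  = VK_INDIRECT_EXECUTION_SET_INFO_TYPE_PIPELINES_EXT,
    .info  = { .pPipelineInfo = &pipelineSetInfo },
};

VkIndirectExecutionSetEXT executionSet;
vkCreateIndirectExecutionSetEXT(device, &setInfo, NULL, &executionSet);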

Slide 13: Indirect Execution Sets update instructions

Once you have that execution set created, you can use those two functions (vkUpdateIndirectExecutionSetPipelineEXT and vkUpdateIndirectExecutionSetShaderEXT) to update and add pipelines to that execution set. You need to take into account that you have to pass a couple of special creation flags to the pipelines, or the shader objects, to tell the driver that you may use those inside an execution set because the driver may need to do something special for them. And one additional restriction that we have is that if you use an execution set token in your sequences, it must appear only once and it must be the first one in the sequence.
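For example, registering one pipeline at a given index could look like the following sketch (index 3 is arbitrary, and somePipeline is assumed to have been created with the appropriate indirect-bindable creation flag mentioned above):

VkWriteIndirectExecutionSetPipelineEXT write = {
    .sType    = VK_STRUCTURE_TYPE_WRITE_INDIRECT_EXECUTION_SET_PIPELINE_EXT,
    .index    = 3,            /* the index referenced from the execution set token in the DGC buffer */
    .pipeline = somePipeline,
};
vkUpdateIndirectExecutionSetPipelineEXT(device, executionSet, 1, &write);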

Slide 14: Recap So Far

The recap, so far, is that the DGC buffer is divided into small chunks that we call sequences. Each sequence follows a template that we call the Indirect Commands Layout. Each sequence must dispatch work exactly once, and you may be able to switch the set of shaders used with each sequence with an Indirect Execution Set.

Slide 15: Executing Work with DGC

How do we go about actually telling Vulkan to execute the contents of a specific buffer? Well, before executing the contents of the DGC buffer, the application needs to have bound all the needed state to run those commands. That includes descriptor sets, initial push constant values, initial shader state and initial pipeline state. Even if you are going to use an Execution Set to switch shaders later, you have to specify some kind of initial shader state.

Slide 16: Executing Work with DGC function call and info structure

Once you have that, you can call vkCmdExecuteGeneratedCommandsEXT. You bind all the state into your regular command buffer and then you record this command to tell the driver: at this point, execute the contents of this buffer. As you can see, you typically pass a regular command buffer as the first argument. Then there's some kind of boolean value called isPreprocessed, which is kind of confusing because it's the first time it appears and you don't know what it is about, but we will talk about it in a minute. And then you pass a relatively larger structure containing information about what to execute.

In that GeneratedCommandsInfo structure, you need to pass again the shader stages that will be used. You have to pass the handle for the Execution Set, if you’re going to use one (if not you can use the null handle). Of course, the indirect commands layout, which is the central piece here. And then you pass the information about the buffer that you want to execute, which is the indirect address and the indirect address size as the buffer size. We are using buffer device address to pass information.

And then we have something again mentioning some kind of preprocessing thing, which is really weird: preprocess address and preprocess size which looks like a buffer of some kind (we will talk about it later). You have to pass the maximum number of sequences that you are going to execute. Optionally, you can also pass a buffer address for an actual counter of sequences. And the last thing that you need is the max draw count, but you can forget about that if you are not dispatching work using draw-with-count tokens as it only applies there. If not, you leave it as zero and it should work.
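Putting it together, a minimal sketch of recording the execution could look like this (all handles and buffer device addresses are assumed to exist, and VK_FALSE means we did not preprocess explicitly):

VkGeneratedCommandsInfoEXT genInfo = {
    .sType                  = VK_STRUCTURE_TYPE_GENERATED_COMMANDS_INFO_EXT,
    .shaderStages           = VK_SHADER_STAGE_COMPUTE_BIT,
    .indirectExecutionSet   = executionSet,            /* or VK_NULL_HANDLE if not used */
    .indirectCommandsLayout = cmdsLayout,
    .indirectAddress        = dgcBufferAddress,        /* from vkGetBufferDeviceAddress */
    .indirectAddressSize    = sizeof(MySequence) * sequenceCount,
    .preprocessAddress      = preprocessBufferAddress,
    .preprocessSize         = preprocessBufferSize,
    .maxSequenceCount       = sequenceCount,
    .sequenceCountAddress   = 0,                       /* optional GPU-side counter, unused here */
    .maxDrawCount           = 0,                       /* only relevant for draw-with-count tokens */
};
vkCmdExecuteGeneratedCommandsEXT(cmdBuffer, VK_FALSE, &genInfo);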

Slide 17: Executing Work with DGC preprocessing fields highlight

We have a couple of things here that we haven’t talked about yet, which are the preprocessing things. Starting from the bottom, that preprocess address and size give us a hint that there may be a pre-processing step going on. Some kind of thing that the driver may need to do before actually executing the commands, and we need to pass information about the buffer there.

The boolean value that we pass to the command ExecuteGeneratedCommands tells us that the pre-processing step may have happened before so it may be possible to explicitly do that pre-processing instead of letting the driver do that at execution time. Let’s take a look at that in more detail.

Slide 18: Preprocess Buffer

First of all, what is the pre-process buffer? The pre-process buffer is auxiliary space, a scratch buffer, because some drivers need to take a look at what the command sequence looks like before actually starting to execute things. They need to go over the sequence first and write a few things down, just to be able to do the job properly later when executing those commands.

Once you have the commands layout and the maximum number of sequences that you are going to execute, you can call vkGetGeneratedCommandsMemoryRequirementsEXT and the driver is going to tell you how much space it needs. Then you can create a buffer and allocate that space; you need to pass a special new buffer usage flag (VK_BUFFER_USAGE_2_PREPROCESS_BUFFER_BIT_EXT) and, once you have that buffer, you pass its address and size in the previous structure.
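A sketch of that query, under the same assumptions as the previous snippets:

VkGeneratedCommandsMemoryRequirementsInfoEXT reqInfo = {
    .sType                  = VK_STRUCTURE_TYPE_GENERATED_COMMANDS_MEMORY_REQUIREMENTS_INFO_EXT,
    .indirectExecutionSet   = executionSet, /* or VK_NULL_HANDLE if not used */
    .indirectCommandsLayout = cmdsLayout,
    .maxSequenceCount       = sequenceCount,
    .maxDrawCount           = 0,
};
VkMemoryRequirements2 memReqs = { .sType = VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2 };
vkGetGeneratedCommandsMemoryRequirementsEXT(device, &reqInfo, &memReqs);

/* Create a buffer of memReqs.memoryRequirements.size bytes with the
 * VK_BUFFER_USAGE_2_PREPROCESS_BUFFER_BIT_EXT usage bit, then pass its device
 * address and size as preprocessAddress / preprocessSize. */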

Slide 19: Explicit Preprocessing

Now the second thing is that we have the possibility of doing this preprocessing step explicitly. Explicit pre-processing is something that's optional, but you probably want to do it if you care about performance, because it's the key to performance with some drivers.

When you use explicit pre-processing you don't want to simply (1) record the state, (2) call vkCmdPreprocessGeneratedCommandsEXT and (3) call vkCmdExecuteGeneratedCommandsEXT. That is what implicit pre-processing effectively does, so doing it this way doesn't give you anything.

This is designed so that, if you want to do explicit pre-processing, you’re going to probably want to use a separate command buffer for pre-processing. You want to batch pre-processing calls together and submit them all together to keep the GPU busy and to give you the performance that you want. While you submit the pre-processing steps you may be still preparing the rest of the command buffers to enqueue the next batch of work. That’s the key to doing pre-processing optimally.

You need to decide beforehand if you are going to use explicit pre-processing or not because, if you’re going to use explicit preprocessing, you need to pass a flag when you create the commands layout, and then you have to call the function to preprocess generated commands. If you don’t pass that flag, you cannot call the preprocessing function, so it’s an all or nothing. You have to decide, and you do what you want.

Slide 20: Explicit Preprocessing (continued)

One thing that is important to note is that preprocessing needs to see the same state, and the same contents of the input buffers, as the actual execution, so it can run properly.

The video contains a cut here because the presentation laptop ran out of battery.

Slide 21: Explicit Preprocessing (continued) state command buffer

If the pre-processing step needs to have the same state as the execution, you need to have bound the same pipeline state, the same shaders, the same descriptor sets, the same contents. I said that explicit pre-processing is normally used using a separate command buffer that we submit before actual execution. You have a small problem to solve, which is that you would need to record state twice: once on the pre-process command buffer, so that the pre-process step knows everything, and once on the execution, the regular command buffer, when you call execute. That would be annoying.

Instead of that, the pre-process generated commands function takes an argument that is a state command buffer and the specification tells you: this is a command buffer that needs to be in the recording state, and the pre-process step is going to read the state from it. This is the first time, and I think the only time in the specification, that something like this is done. You may be puzzled about what this is exactly: how do you use this and how do we pass this?

Slide 22: Explicit Preprocessing (continued) state command buffer ergonomics

I just wanted to get this slide out to tell you: if you’re going to use explicit pre-processing, the ergonomic way of using it and how we thought about using the processing step is like you see in this slide. You take your main command buffer and you record all the state first and, just before calling execute-generated-commands, the regular command buffer contains all the state that you want and that preprocess needs. You stop there for a moment and then you prepare your separate preprocessing command buffer passing the main one as an argument to the preprocess call, and then you continue recording commands in your regular command buffer. That’s the ergonomic way of using it.
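In code, that pattern could look roughly like this sketch (command buffer handles are assumed to exist, and genInfo is the structure from the execution snippet above):

/* Record all state on the main command buffer first. */
vkCmdBindPipeline(mainCmdBuf, VK_PIPELINE_BIND_POINT_COMPUTE, initialPipeline);
/* ... bind descriptor sets, push initial constants, etc. ... */

/* Preprocess on a separate command buffer, reading state from mainCmdBuf,
 * which must still be in the recording state at this point. */
vkCmdPreprocessGeneratedCommandsEXT(preprocessCmdBuf, &genInfo, mainCmdBuf);

/* Continue recording on the main command buffer; VK_TRUE tells the driver
 * that explicit preprocessing already happened. */
vkCmdExecuteGeneratedCommandsEXT(mainCmdBuf, VK_TRUE, &genInfo);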

Slide 23: Synchronization from filling the DGC buffer to reading from it

You do need some synchronization at some steps. The main one is that, if you generate the contents of the DGC buffer from the GPU itself, you're going to need some synchronization: writes to that buffer need to be synchronized with something else that comes later, which is executing or reading those commands from the buffer.

Depending on whether you use explicit preprocessing, you synchronize those writes either with the new command-preprocess pipeline stage using preprocess-read access, or with regular device-generated-commands execution, which is considered part of the usual draw-indirect stage and uses indirect-command-read access.

Slide 24: Synchronization (continued) from explicit preprocessing to execution

If you use explicit pre-processing you need to make sure that writes to the pre-process buffer happen before you start reading from it. So you use the flags shown here (VK_PIPELINE_STAGE_COMMAND_PREPROCESS_BIT_EXT, VK_ACCESS_COMMAND_PREPROCESS_WRITE_BIT_EXT) to synchronize preprocessing with execution (VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT, VK_ACCESS_INDIRECT_COMMAND_READ_BIT).

Slide 25: Quick How-To

The quick how-to: I just wanted to get this slide out for those wanting a reference that says exactly what you need to do. All the steps that I mentioned here about creating the commands layout, the execution set, allocating the preprocess buffer, etc. This is the basic how-to.

Slide 26: Thanks for watching!

And that’s it. Thanks for watching! Questions?

March 11, 2025 04:30 PM

Stéphane Cerveau

Vulkanised 2025

Vulkanised

UK calling #

Long time no see, beautiful grey sky, roast beef on Sunday, and large but fully packed pubs whenever there is a football or a rugby game (the Rose team has been lucky this year, grrr).

It was a delightful journey in the UK, starting with my family visiting London, including a lot (yes, a lot…) of sightseeing in a very short amount of time. But we managed to fit everything in. We saw the changing of the guard, the Thames tide from a boat, Harry Potter gift shops and the beautiful Arsenal stadium with its legendary pitch, one of the best in England.

It was our last attraction in London, and then it was time for my family to head to Stansted and back home, and for me to go to Cambridge and its legendary university.

To start the journey in Cambridge, I first got some rest at the hotel on Monday, to face the hail of information I would get during the conference. This year, Vulkanised took place on Arm’s campus; they kindly hosted the event, providing everything we needed to feel at home and comfortable.

The first day, we started with an introduction from Ralph Potter, the Vulkan Working Group Chair at Khronos, who introduced the new 1.4 release and all the extensions coming along including “Vulkan Video”. Then we could start this conference with my favorite topic, decoding video content with Vulkan Video. And the game was on! There was a presentation every 30 minutes including a neat one from my colleague at Igalia Ricardo Garcia about Device-Generated Commands in Vulkan and a break every 3 presentations. It took a lot of mental energy to keep up with all the topics as each presentation was more interesting than the last. During the break, we had time to relax with good coffee, delicious cookies, and nice conversations.

The first day ended with tooling demonstrations from LunarG, helping us all to understand and tame the Vulkan beast. The beast is ours now!

As I was not in the best shape due to a bug I caught on Sunday, I decided to play it safe and went back to the hotel right after a nice Indian meal. I had to prepare myself for the next day, when I would present “Vulkan Video is Open: Application Showcase”.

Vulkan Video is Open: Application showcase ! #

First, Srinath Kumarapuram from Nvidia gave a presentation about the new extensions made available during 2024 by the Vulkan Video TSG. It started with a brief timeline of the video extensions, from the initial H.26x decoding to the latest VP9 decode coming this year, including the 2024 extensions such as the AV1 codec. Then he presented more specific extensions such as VK_KHR_video_encode_quantization_map and VK_KHR_video_maintenance2, released during 2024, and VK_KHR_video_encode_intra_refresh, coming in 2025. He mentioned that the Vulkan toolbox now completely supports Vulkan Video, including the Validation Layers, Vulkan Profiles, vulkaninfo and GFXReconstruct.

After some deserved applause for a neat presentation, it was my time to be on stage.

During this presentation I focused on the Open source ecosystem around Vulkan Video. Indeed Vulkan Video ships with a sample app which is totally open along with the regular Conformance Test Suite. But that’s not all! Two major frameworks now ship with Vulkan Video support: GStreamer and FFmpeg.

Before this, I started by talking about Mesa, the open graphics library. This library, which is totally open, provides drivers that support the Vulkan Video extensions and allow applications to run Vulkan Video decode or encode. The 3 major chip vendors are now supported. It started in 2022 with RADV, a userspace driver that implements the Vulkan API on most modern AMD GPUs. This driver supports all the Vulkan Video extensions except the latest ones, such as VK_KHR_video_encode_quantization_map or VK_KHR_video_maintenance2, but they should be implemented sometime in 2025. Intel GPUs are now supported with the ANV driver, which also supports the common video extensions such as the H264/5 and AV1 codecs. The last driver to gain support was NVK, a Vulkan driver for NVIDIA GPUs, where several of the Vulkan Video extensions were introduced at the end of 2024. This driver is still experimental, but it's possible to decode H264 and H265 content, as with its proprietary counterpart. This completes the offering of the main GPUs on the market.

Then I moved to the applications, including GStreamer, FFmpeg and Vulkan-Video-Samples. In addition to the extensions supported in 2025, we talked mainly about decode conformance using Fluster. To compare all the implementations, including the driver, the version and the framework, a spreadsheet can be found here. In this spreadsheet we summarize the 3 supported codecs (H264, H265 and AV1) with their associated test suites and compare their implementations using Vulkan Video (or not, see the results for VAAPI with GStreamer). GStreamer, my favorite playground, can decode H264 and H265 since 1.24 and recently got support for AV1, but the merge request is still under review. It passes more than 80% of the H264 test vectors in JVT-AVC_V1 and 85% of the H265 test vectors in JCT-VC-HEVC_V1. FFmpeg offers better figures, passing 90% of the tests. It supports all the available codecs, including all of the encoders as well. And finally, Vulkan-Video-Samples is the app that you want to use to support all codecs for both encode and decode, but it's currently missing support for Mesa drivers when it comes to running the Fluster decode tests…

Vulkanised on the 3rd day #

During the 3rd day, we had interesting talks demonstrating the power of Vulkan as well, from Blender, a free and open-source 3D computer graphics software tool progressively switching to Vulkan, to the implementation of a 3D game engine in Rust, or compute shaders in astronomy. My other colleague at Igalia, Lucas Fryzek, also had a presentation on Mesa with Lavapipe: Mesa's software renderer for Vulkan, which allows you to have a hardware-free implementation of Vulkan and to validate extensions in a simpler way. Finally, we finished this prolific and dense conference with Android and its close collaboration with Vulkan.

If you are interested in 3D graphics, I encourage you to attend future Vulkanised editions, which are full of passionate people. And if you cannot attend, you can still watch the presentations online.

If you are interested in the Vulkan Video presentation I gave, you can catch up with the video here:

Or follow our Igalia live blog post on Vulkan Video:

https://blogs.igalia.com/vjaquez/vulkan-video-status/

As usual, if you would like to learn more about Vulkan, GStreamer or any other open multimedia framework, please feel free to contact us!

March 11, 2025 12:00 AM

March 10, 2025

Igalia WebKit Team

WebKit Igalia Periodical #16

Update on what happened in WebKit in the week from March 3 to March 10.

Cross-Port 🐱

Web Platform 🌐

Forced styling to field-sizing: fixed when an input element is auto filled, and added support for changing field-sizing dynamically.

Fixed an issue where the imperative popover APIs didn't take into account the source parameter for focus behavior.

Multimedia 🎥

GStreamer-based multimedia support for WebKit, including (but not limited to) playback, capture, WebAudio, WebCodecs, and WebRTC.

Fixed YouTube breakage on videos with advertisements. The fix prevents scrolling to the comments section when the videos are fullscreened, but having working video playback was considered more important for now.

Graphics 🖼️

Fixed re-layout issues for form controls with the experimental field-sizing implementation.

Landed a change that improves the quality of damage rectangles and reduces the amount of painting done in the compositor in some simple scenarios.

Introduced a hybrid threaded rendering mode, scheduling tasks to both the CPU and GPU worker pools. By default we use CPU-affine rendering on WPE, and GPU-affine rendering on the GTK port, saturating the CPU/GPU worker pool first before switching to the GPU/CPU one.

Infrastructure 🏗️

We have recently enabled automatic nightly runs of WPT tests with WPE for the Web Platform Tests (WPT) dashboard. If you click on the “Edit” button at the wpt.fyi dashboard now there is the option to select WPE.

For example, one may compare the results for WPE to other browsers or examine the differences between the WPE and GTK ports.

These nightly runs happen now daily on the TaskCluster CI sponsored by Mozilla (Thanks to James Graham!). If you want to run WPT tests with WPE WebKit locally, there are instructions at the WPT documentation.

That’s all for this week!

by Igalia WebKit Team at March 10, 2025 10:59 PM

March 07, 2025

Andy Wingo

whippet lab notebook: untagged mallocs, bis

Earlier this week I took an inventory of how Guile uses the Boehm-Demers-Weiser (BDW) garbage collector, with the goal of making sure that I had replacements for all uses lined up in Whippet. I categorized the uses into seven broad categories, and I was mostly satisfied that I have replacements for all except the last: I didn’t know what to do with untagged allocations: those that contain arbitrary data, possibly full of pointers to other objects, and which don’t have a header that we can use to inspect on their type.

But now I do! Today’s note is about how we can support untagged allocations of a few different kinds in Whippet’s mostly-marking collector.

inside and outside

Why bother supporting untagged allocations at all? Well, if I had my way, I wouldn’t; I would just slog through Guile and fix all uses to be tagged. There are only a finite number of use sites and I could get to them all in a month or so.

The problem comes for uses of scm_gc_malloc from outside libguile itself, in C extensions and embedding programs. These users are loath to adapt to any kind of change, and garbage-collection-related changes are the worst. So, somehow, we need to support these users if we are not to break the Guile community.

on intent

The problem with scm_gc_malloc, though, is that it is missing an expression of intent, notably as regards tagging. You can use it to allocate an object that has a tag and thus can be traced precisely, or you can use it to allocate, well, anything else. I think we will have to add an API for the tagged case and assume that anything that goes through scm_gc_malloc is requesting an untagged, conservatively-scanned block of memory. Similarly for scm_gc_malloc_pointerless: you could be allocating a tagged object that happens to not contain pointers, or you could be allocating an untagged array of whatever. A new API is needed there too for pointerless untagged allocations.

on data

Recall that the mostly-marking collector can be built in a number of different ways: it can support conservative and/or precise roots, it can trace the heap precisely or conservatively, it can be generational or not, and the collector can use multiple threads during pauses or not. Consider a basic configuration with precise roots. You can make tagged pointerless allocations just fine: the trace function for that tag is just trivial. You would like to extend the collector with the ability to make untagged pointerless allocations, for raw data. How to do this?

Consider first that when the collector goes to trace an object, it can’t use bits inside the object to discriminate between the tagged and untagged cases. Fortunately though the main space of the mostly-marking collector has one metadata byte for each 16 bytes of payload. Of those 8 bits, 3 are used for the mark (five different states, allowing for future concurrent tracing), two for the precise field-logging write barrier, one to indicate whether the object is pinned or not, and one to indicate the end of the object, so that we can determine object bounds just by scanning the metadata byte array. That leaves 1 bit, and we can use it to indicate untagged pointerless allocations. Hooray!
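For illustration only, the bit budget described above could be pictured like this in C; the actual names and bit positions in Whippet may differ:

/* Hypothetical layout of the per-16-byte metadata byte. */
enum metadata_bits {
  META_MARK_MASK            = 0x07, /* 3 bits: mark state (five states used) */
  META_LOG_0                = 0x08, /* 2 bits: precise field-logging         */
  META_LOG_1                = 0x10, /*         write barrier                 */
  META_PINNED               = 0x20, /* 1 bit: object must not move           */
  META_END                  = 0x40, /* 1 bit: marks the end of the object    */
  META_UNTAGGED_POINTERLESS = 0x80  /* the one remaining bit                 */
};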

However there is a wrinkle: when Whippet decides that it should evacuate an object, it tracks the evacuation state in the object itself; the embedder has to provide an implementation of a little state machine, allowing the collector to detect whether an object is forwarded or not, to claim an object for forwarding, to commit a forwarding pointer, and so on. We can’t do that for raw data, because all bit states belong to the object, not the collector or the embedder. So, we have to set the “pinned” bit on the object, indicating that these objects can’t move.

We could in theory manage the forwarding state in the metadata byte, but we don’t have the bits to do that currently; maybe some day. For now, untagged pointerless allocations are pinned.

on slop

You might also want to support untagged allocations that contain pointers to other GC-managed objects. In this case you would want these untagged allocations to be scanned conservatively. We can do this, but if we do, it will pin all objects.

Thing is, conservative stack roots is a kind of a sweet spot in language run-time design. You get to avoid constraining your compiler, you avoid a class of bugs related to rooting, but you can still support compaction of the heap.

How is this, you ask? Well, consider that you can move any object for which we can precisely enumerate the incoming references. This is trivially the case for precise roots and precise tracing. For conservative roots, we don’t know whether a given edge is really an object reference or not, so we have to conservatively avoid moving those objects. But once you are done tracing conservative edges, any live object that hasn’t yet been traced is fair game for evacuation, because none of its predecessors have yet been visited.

But once you add conservatively-traced objects back into the mix, you don’t know when you are done tracing conservative edges; you could always discover another conservatively-traced object later in the trace, so you have to pin everything.

The good news, though, is that we have gained an easier migration path. I can now shove Whippet into Guile and get it running even before I have removed untagged allocations. Once I have done so, I will be able to allow for compaction / evacuation; things only get better from here.

Also as a side benefit, the mostly-marking collector’s heap-conservative configurations are now faster, because we have metadata attached to objects which allows tracing to skip known-pointerless objects. This regains an optimization that BDW has long had via its GC_malloc_atomic, used in Guile since time out of mind.

fin

With support for untagged allocations, I think I am finally ready to start getting Whippet into Guile itself. Happy hacking, and see you on the other side!

by Andy Wingo at March 07, 2025 01:47 PM

March 04, 2025

Abhijeet Kandalkar

Testing WebXR on Windows

WebXR on Windows

It started with my need to debug Chromium’s implementation of OpenXR. I wanted to understand how Chromium interfaces with OpenXR APIs. However, I noticed that only the Android and Windows ports of Chromium currently support OpenXR bindings. Since I needed to debug a desktop implementation, Windows was the only viable option. Additionally, I did not have access to a physical XR device, so I explored whether a simulator or emulator environment could be used to test WebXR support for websites.

Understanding WebXR and OpenXR

Before diving into implementation details, it’s useful to understand what WebXR and OpenXR are and how they differ.

WebXR is a web standard that enables immersive experiences, such as Virtual Reality (VR) and Augmented Reality (AR), in web browsers. It allows developers to create XR content using JavaScript and run it directly in a browser without requiring platform-specific applications.

OpenXR is a cross-platform API standard developed by the Khronos Group, designed to unify access to different XR hardware and software. It provides a common interface for VR and AR devices, ensuring interoperability across different platforms and vendors.

The key difference is that WebXR is a high-level API used by web applications to access XR experiences, whereas OpenXR is a low-level API used by platforms and engines to communicate with XR hardware. WebXR implementations, such as the one in Chromium, use OpenXR as the backend to interact with different XR runtimes.

Chromium OpenXR Implementation

Chromium’s OpenXR implementation, which interacts with the platform-specific OpenXR runtime, is located in the device/vr/ directory. WebXR code interacts with this device/vr/ OpenXR implementation, which abstracts WebXR features across multiple platforms.

WebXR ---> device/vr/ ---> OpenXR API ---> OpenXR runtime

Installing OpenXR Runtime

To run OpenXR on Windows, you need to install an OpenXR runtime. You can download and install OpenXR Tools for Windows Mixed Reality from the Microsoft App Store:

OpenXR Tools for Windows Mixed Reality

If it is not available on your machine, you can enable it from the OpenXR Runtime tab in the application.

Installing Microsoft Mixed Reality Simulator

To set up a simulated environment for WebXR testing, follow these steps:

  1. Install Mixed Reality Portal from the Microsoft App Store.
  2. Follow the official Microsoft guide on enabling the Mixed Reality simulator: Using the Windows Mixed Reality Simulator

If you encounter hardware compatibility errors, refer to the troubleshooting steps in the guide below.

https://www.thewindowsclub.com/disable-hardware-requirement-checks-for-mixed-reality-portal

Connecting Chromium to OpenXR Implementation

Chromium provides a flag to select the OpenXR implementation.

  1. Open Chrome and navigate to:
    chrome://flags/#webxr-runtime
    
  2. Set the flag to OpenXR.

This enables Chromium to use the OpenXR runtime for WebXR applications.

Launch WebVR application

Launch Chromium and open: https://immersive-web.github.io/webxr-samples/immersive-vr-session.html

output

CallStack

When we call navigator.xr.requestSession("immersive-vr"); from JavaScript, the call stack below is triggered.

callstack

Conclusions

With this setup, you can explore and debug WebXR applications on Windows even without a physical VR headset. The combination of Chromium’s OpenXR implementation and Microsoft’s Mixed Reality Simulator provides a practical way to test WebXR features and interactions.

If you’re interested in further experimenting, try developing a simple WebXR scene to validate your setup! Additionally, we plan to post more about Chromium’s architecture on OpenXR and will link those posts here once they are ready.

March 04, 2025 06:30 PM

Andy Wingo

whippet lab notebook: on untagged mallocs

Salutations, populations. Today’s note is more of a work-in-progress than usual; I have been finally starting to look at getting Whippet into Guile, and there are some open questions.

inventory

I started by taking a look at how Guile uses the Boehm-Demers-Weiser collector‘s API, to make sure I had all my bases covered for an eventual switch to something that was not BDW. I think I have a good overview now, and have divided the parts of BDW-GC used by Guile into seven categories.

implicit uses

Firstly there are the ways in which Guile’s run-time and compiler depend on BDW-GC’s behavior, without actually using BDW-GC’s API. By this I mean principally that we assume that any reference to a GC-managed object from any thread’s stack will keep that object alive. The same goes for references originating in global variables, or static data segments more generally. Additionally, we rely on GC objects not to move: references to GC-managed objects in registers or stacks are valid across a GC boundary, even if those references are outside the GC-traced graph: all objects are pinned.

Some of these “uses” are internal to Guile’s implementation itself, and thus amenable to being changed, albeit with some effort. However some escape into the wild via Guile’s API, or, as in this case, as implicit behaviors; these are hard to change or evolve, which is why I am putting my hopes on Whippet’s mostly-marking collector, which allows for conservative roots.

defensive uses

Then there are the uses of BDW-GC’s API, not to accomplish a task, but to protect the mutator from the collector: GC_call_with_alloc_lock, explicitly enabling or disabling GC, calls to sigmask that take BDW-GC’s use of POSIX signals into account, and so on. BDW-GC can stop any thread at any time, between any two instructions; for most users this is anodyne, but if ever you use weak references, things start to get really gnarly.

Of course a new collector would have its own constraints, but switching to cooperative instead of pre-emptive safepoints would be a welcome relief from this mess. On the other hand, we will require client code to explicitly mark their threads as inactive during calls in more cases, to ensure that all threads can promptly reach safepoints at all times. Swings and roundabouts?

precise tracing

Did you know that the Boehm collector allows for precise tracing? It does! It’s slow and truly gnarly, but when you need precision, precise tracing is nice to have. (This is the GC_new_kind interface.) Guile uses it to mark Scheme stacks, allowing it to avoid treating unboxed locals as roots. When it loads compiled files, Guile also adds some slices of the mapped files to the root set. These interfaces will need to change a bit in a switch to Whippet but are ultimately internal, so that’s fine.

What is not fine is that Guile allows C users to hook into precise tracing, notably via scm_smob_set_mark. This is not only the wrong interface, not allowing for copying collection, but these functions are just truly gnarly. I don’t know what to do with them yet; are our external users ready to forgo this interface entirely? We have been working on them over time, but I am not sure.

reachability

Weak references, weak maps of various kinds: the implementation of these in terms of BDW’s API is incredibly gnarly and ultimately unsatisfying. We will be able to replace all of these with ephemerons and tables of ephemerons, which are natively supported by Whippet. The same goes with finalizers.

The same goes for constructs built on top of finalizers, such as guardians; we’ll get to reimplement these on top of nice Whippet-supplied primitives. Whippet allows for resuscitation of finalized objects, so all is good here.

misc

There is a long list of miscellanea: the interfaces to explicitly trigger GC, to get statistics, to control the number of marker threads, to initialize the GC; these will change, but all uses are internal, making it not a terribly big deal.

I should mention one API concern, which is that BDW’s state is all implicit. For example, when you go to allocate, you don’t pass the API a handle which you have obtained for your thread, and which might hold some thread-local freelists; BDW will instead load thread-local variables in its API. That’s not as efficient as it could be and Whippet goes the explicit route, so there is some additional plumbing to do.

Finally I should mention the true miscellaneous BDW-GC function: GC_free. Guile exposes it via an API, scm_gc_free. It was already vestigial and we should just remove it, as it has no sensible semantics or implementation.

allocation

That brings me to what I wanted to write about today, but am going to have to finish tomorrow: the actual allocation routines. BDW-GC provides two, essentially: GC_malloc and GC_malloc_atomic. The difference is that “atomic” allocations don’t refer to other GC-managed objects, and as such are well-suited to raw data. Otherwise you can think of atomic allocations as a pure optimization, given that BDW-GC mostly traces conservatively anyway.

From the perspective of a user of BDW-GC looking to switch away, there are two broad categories of allocations, tagged and untagged.

Tagged objects have attached metadata bits allowing their type to be inspected by the user later on. This is the happy path! We’ll be able to write a gc_trace_object function that takes any object, does a switch on, say, some bits in the first word, dispatching to type-specific tracing code. As long as the object is sufficiently initialized by the time the next safepoint comes around, we’re good, and given cooperative safepoints, the compiler should be able to ensure this invariant.

Then there are untagged allocations. Generally speaking, these are of two kinds: temporary and auxiliary. An example of a temporary allocation would be growable storage used by a C run-time routine, perhaps as an unbounded-sized alternative to alloca. Guile uses these a fair amount, as they compose well with non-local control flow as occurring for example in exception handling.

An auxiliary allocation on the other hand might be a data structure only referred to by the internals of a tagged object, but which itself never escapes to Scheme, so you never need to inquire about its type; it’s convenient to have the lifetimes of these values managed by the GC, and when desired to have the GC automatically trace their contents. Some of these should just be folded into the allocations of the tagged objects themselves, to avoid pointer-chasing. Others are harder to change, notably for mutable objects. And the trouble is that for external users of scm_gc_malloc, I fear that we won’t be able to migrate them over, as we don’t know whether they are making tagged mallocs or not.

what is to be done?

One conventional way to handle untagged allocations is to manage to fit your data into other tagged data structures; V8 does this in many places with instances of FixedArray, for example, and Guile should do more of this. Otherwise, you make new tagged data types. In either case, all auxiliary data should be tagged.

I think there may be an alternative, which would be just to support the equivalent of untagged GC_malloc and GC_malloc_atomic; but for that, I am out of time today, so type at y’all tomorrow. Happy hacking!

by Andy Wingo at March 04, 2025 03:42 PM

March 03, 2025

Igalia WebKit Team

WebKit Igalia Periodical #15

Update on what happened in WebKit in the week from February 19 to March 3.

Cross-Port 🐱

Web Platform 🌐

Implemented support for setting returnValue for <dialog> with Invoker Commands.

After fixing an issue with Trusted Types when doing attribute mutation within the default callback, and implementing performance improvements for Trusted Types enforcement, the Trusted Types implementation is now considered stable and has been enabled by default.

Multimedia 🎥

GStreamer-based multimedia support for WebKit, including (but not limited to) playback, capture, WebAudio, WebCodecs, and WebRTC.

Landed one fix which, along with previous patches, solved the webKitMediaSrcStreamFlush() crash reported in bug #260455.

Unfortunately, in some pages where the crash previously occurred, now a different blank video bug has been revealed. The cause of this bug is known, but fixing it would cause performance regressions in pages with many video elements. Work is ongoing to find a better solution for both.

The initial support of MP4-muxed WebVTT in-band text tracks is about to be merged, which will bring this MSE feature to the ports using GStreamer. Text tracks for the macOS port of WebKit only landed two weeks ago and we expect there will be issues to iron out in WebKit ports, multiplatform code and even potentially in spec work—we are already aware of a few potential ones.

Note that out-of band text-tracks are well supported in MSE across browsers and commonly used. On the other hand, no browsers currently ship with in-band text track support in MSE at this point.

Support for MediaStreamTrack.configurationchange events was added, along with related improvements in the GStreamer PipeWire plugin. This will allow WebRTC applications to seamlessly handle default audio/video capture changes.

Graphics 🖼️

Continued improving the support for handling graphics damage:

  • Added support for validating damage rectangles in Layout Tests.

  • Landed a change that adds layout tests covering the damage propagation feature.

  • Landed a change that fixes damage rectangles on layer resize operations.

  • Landed a change that improves damage rectangles produced by scrolling so that they are clipped to the parent container.

The number of threads used for painting with the GPU has been slightly tweaked, which brings a measurable performance improvement in all kinds of devices with four or more processor cores.

Releases 📦️

The stable branch for the upcoming 2.48.x stable release series of the GTK and WPE ports has been created. The first preview releases from this branch are WebKitGTK 2.47.90 and WPE WebKit 2.47.90. People willing to report issues and help with stabilization are encouraged to test them and report issues in Bugzilla.

Community & Events 🤝

Published a blog post that presents an opinionated approach to the work with textual logs obtained from WebKit and GStreamer.

That’s all for this week!

by Igalia WebKit Team at March 03, 2025 02:15 PM

February 28, 2025

Javier Fernández

Can I use Secure Curves in the Web Platform?

Long story short, yes, it’s possible to use the Ed25519 and X25519 algorithms through the Web Cryptography API exposed by the major browser engines: Blink (Chrome, Edge, Brave, …), WebKit (Safari) and Gecko (Firefox).

However, despite the hard work during the last year we haven’t been able to ship Ed25519 in Chrome. It’s still available behind the Experimental Web Platform Features runtime flag. In this post, I will explain the current blockers and plans for the future regarding this feature.

Although disappointed about the current status of Ed25519 in Chrome, I’m very satisfied to see the results of our efforts, with the implementation now moving forward in other major web engines and shipping by default for both Ed25519 [1] and X25519 [2].

Finally, I want to remark on the work done to improve interoperability, a relevant debt this feature has carried for the last few years, to ensure applications can reliably use the Curve25519 features in any of the major browsers.

Context

Before analyzing the current status and blockers, it’s important to understand why browser support for this feature matters, why merging the WICG draft into the Web Cryptography API specification is key.

I’ve already written about this in my last post so I’m not going to elaborate too much, but it’s important to describe some of the advantages for the Web Platform users this API has over the current alternatives.

The Ed25519 algorithm for EdDSA signing and the X25519 function for key-agreement offer stronger security and better performance than other algorithms. For instance, the RSA keys are explicitly banned from new features like Web Transport. The smaller key size (32 bytes) and EdDSA signatures (64 bytes) provide advantages in terms of transmission rates, especially in distributed systems and peer-to-peer communications.

The lack of a browser API for the Curve25519 algorithms has forced web authors to rely on external components, either JS libraries or WASM-compiled ones, which implies a security risk. This situation is especially sad considering that browsers already have support for these algorithms as part of their TLS 1.3 implementations; it’s just not exposed to web authors.

Web Platform Feature Development Timeline: Key Milestones

To get an idea of what the time-frame and effort required to develop a web feature like this looks like, let’s consider the following milestones:

  1. 2020 – Jan
    • Secure Curves on WebCrypto API repository created in the W3C WICG
    • Intent-To-Prototype request for Blink, sent by Qingsi Wang
    • TAG review request
    • Standard position request for Gecko about X25519
  2. 2020 – Feb
    • Initial implementation by Qingsi Wang of the Ed25519 algorithm for Blink (Abandoned )
  3. 2020 – Sep
    • Intent-To-Prototype request of Ed25519 and X25519 for Blink, sent by Javier Fernandez (Igalia)
    • Design document
    • Standard position request for WebKit about Secure Curves in WebKit
  4. 2022 – October
    • Initial implementation by Javier Fernandez (Igalia) of the Ed25519 algorithm for Blink
  5. 2023 – Jan
    • Initial Implementation by Angela Izquierdo (Apple) of the Ed25519 algorithm for WebKit (Apple port)
  6. 2023 – Feb
  7. 2023 – Mar
    • Initial implementation by Javier Fernandez (Igalia) of the X25519 algorithm for Blink
  8. 2023 – Aug
    • Initial implementation by Javier Fernandez (Igalia) of the Ed25519 algorithm for WebKit (GTK+ port)
    • Initial implementation by Javier Fernandez (Igalia) of the X25519 algorithm for WebKit (GTK+ port)
  9. 2023 – Sep
    • Initial implementation by Javier Fernandez (Igalia) of the X25519 algorithm for WebKit (Apple port)
    • Safari STP 178 release shipped with Ed25519
  10. 2024 – March
    • Initial implementation by Anna Weine (Mozilla) for Gecko of the Ed25519 algorithm
  11. 2024 – Jun
    • Initial implementation by Anna Weine (Mozilla) for Gecko of the X25519 algorithm
  12. 2024 – Sep
    • Firefox 130 shipped both Ed25519 and X25519 enabled by default
  13. 2024 – Nov
    • Intent-to-Ship request of X25519 for Blink by Javier Fernandez (Igalia)
  14. 2024 – Dec
    • The Ed25519 and X25519 algorithms are integrated into the Web Cryptography API draft
    • The X25519 algorithm is enabled by default in WebKit (all ports)
  15. 2025 – Jan
    • Safari STP 211 release shipped with X25519
  16. 2025 – Feb
    • Chrome 133 release shipped with X25519 enabled by default

This is a good example of a third-party actor, not affiliated with a browser vendor, investing time and money to change the priorities of the companies behind the main browsers and bring an important feature to the Web Platform. It has been a large effort over 2 years, reaching agreement between 3 different browsers, the spec editor and contributors, and the W3C Web App Sec WG, which manages the Web Cryptography API specification.

It’s worth mentioning that a large part of the time has been invested in increasing the testing coverage in the Web Platform Tests’ WebCryptoAPI test suite and improving the interoperability between the three main web engines. This effort involved filing bugs in the corresponding browsers, discussing in the Secure Curves WICG the best approach to address each interop issue, and writing tests to ensure we don’t regress in the future.

Unfortunately, a few of these interop issues are the reason why the Ed25519 algorithm has not been shipped when I expected, but I’ll elaborate more in this later in this post.

Current implementation status of the Curve25519 algorithms

The following table provides a high-level overview of the support of the Secure Curve25519 features in some of the main browsers:

If we want to take a broader look at the implementation status of these algorithms, there is a nice table in the issue #20 at the WICG repository:

Interoperability

As I commented before, most of the work done during this year was focused on improving the spec and increasing the test coverage by the WPT suite. The main goal of these efforts was improving the interoperability of the feature between the three main browsers. It’s interesting to compare the results with the data shown in my previous post.

Test results for the generateKey method:

Current

Before

Test results for the deriveBits and deriveKey methods:

Current

Before

Tests results for the importKey and exportKey methods:

Current

Before

Tests results for the sign and verify methods:

Current

Before

Tests results for the wrap and unwrap methods:

Current

Before

Why didn’t Chrome ship the Ed25519 feature?

The blockers that prevented the Intent-To-Ship request of the Ed25519 are primarily these 2 issues in the spec:

  • Use of randomize EdDSA signatures (issue #28)
  • Rejection of any invalid and small-order points (issue #27)

There are other minor disagreements regarding the Ed25519 specification, like the removal of the “alg” member in the JWK format in the import/export operations. There is a bug 40074061 in Chrome to implement this spec change, but apparently it has not been agreed on by Chrome, and now there is not enough support to proceed. Firefox already implements the specified behavior and there is a similar bug report in WebKit (bug 262613) where it seems there is support to implement the behavior change. However, I’d rather avoid introducing an interop issue and delay the implementation until there is more consensus regarding the spec.

The issue about the use of randomized EdDSA signatures comes from the fact that WebKit’s CryptoKit, the underlying cryptography component of the WebKit engine, follows that approach in its implementation of the Ed25519 generateKey() operation. It has always been in the spirit of the Secure Curves spec to rely completely on the corresponding official RFCs. In the case of the Ed25519 algorithm it refers to RFC8032, where the deterministic nature of Ed25519 is stated. However, the CFRG is currently discussing the issue and there is a proposal to define an Ed25519-bis with randomized signatures.

The small-order issue is more complex, it seems. The spec states clearly that any invalid or small-order point should be rejected during the Verify operation. This behavior is based on the RFC8032 mentioned before. Ideally, the underlying cryptography library should take care of performing these checks, and this has been the approach followed by the three main browsers in the implementation of the whole API; in the case of Chrome, this cryptography library is BoringSSL. The main problem here is that there are differences in how the cryptography libraries implement these checks, and BoringSSL is not an exception. The WPTs I have implemented to cover these cases also show interop issues between the three main engines. I’ve filed bug 697, but it was marked as low priority. The alternative would be to implement the additional checks in the WebCrypto implementation, but Chrome is not very positive about this approach.

The small-order checks have been a request from Mozilla since the initial standard-position request, submitted a long time ago. This has been stated again in PR #362, and Apple has expressed positive opinions about this as well, so I believe this is going to be the selected approach.

Conclusions

I believe that Secure Curves being added to the Web Cryptography specification is a great achievement for the Web Platform and brings a very powerful feature to web developers. This is especially true in the realm of a decentralized web, where Content Addressing, which is based on cryptographic hashes, is a key concept. Browsers exposing APIs to use Ed25519 and X25519 is going to be a big advantage for decentralized web applications. I want to thank Daniel Huigens, editor of the Web Cryptography API specification, for his huge effort to address all the spec issues filed over the years, driving discussions always with the aim of reaching consensus-based resolutions, despite how frustrating the process of developing a feature like this, until it ships in browsers, can sometimes be.

The implementation in the three major browsers is a clear sign of the stability of these features and ensures they will be maintained properly in the future. This includes the effort to keep the three implementations interoperable, which is crucial for a healthy Web Platform ecosystem and ultimately for web authors. The fact that we couldn’t ship Ed25519 in Chrome is the only negative aspect of the work done this year, but I believe it’s going to be resolved soon.

At Igalia we expect to continue working in 2025 on the Web Cryptography API specification, at least until the blocker issues mentioned before are addressed and we can send the Intent-To-Ship request. We hope to also find the opportunity to contribute to the spec by adding new algorithms and carrying out maintenance work; ensuring an interoperable Web Platform has always been a priority for Igalia and an important factor when evaluating the projects we take on as a company.

by jfernandez at February 28, 2025 03:24 PM

February 27, 2025

Alex Bradbury

ccache for LLVM builds across multiple directories

Problem description

If you're regularly rebuilding a large project like LLVM, you almost certainly want to be using ccache. Incremental builds are helpful, but it's quite common to be swapping between different commit IDs and it's very handy to have the build complete relatively quickly without needing any explicit thought on your side. Enabling ccache with LLVM's CMake build system is trivial, but out of the box you will suffer from cache misses if building llvm-project in different build directories, even for an identical commit and identical build settings (and even for identical source directories, i.e. without even considering separate checkouts or separate git work trees):

mkdir -p exp && cd exp
ccache -zC # Clear old ccache
git clone --reference-if-able ~/llvm-project https://github.com/llvm/llvm-project/
cd llvm-project
cmake -G Ninja \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_C_COMPILER=clang \
  -DCMAKE_CXX_COMPILER=clang++ \
  -DLLVM_ENABLE_LLD=ON \
  -DLLVM_TARGETS_TO_BUILD="X86" \
  -DCMAKE_{C,CXX}_COMPILER_LAUNCHER=ccache \
  -B build/a \
  -S llvm
cmake --build build/a
echo "@@@@@@@@@@ Stats after building build/a @@@@@@@@@@"
ccache -s
cmake -G Ninja \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_C_COMPILER=clang \
  -DCMAKE_CXX_COMPILER=clang++ \
  -DLLVM_ENABLE_LLD=ON \
  -DLLVM_TARGETS_TO_BUILD="X86" \
  -DCMAKE_{C,CXX}_COMPILER_LAUNCHER=ccache \
  -B build/b \
  -S llvm
cmake --build build/b
echo "@@@@@@@@@@ Stats after building build/b @@@@@@@@@@"
ccache -s

We see no cache hits:

@@@@@@@@@@ Stats after building build/a @@@@@@@@@@
Cacheable calls:    2252 /   2253 (99.96%)
  Hits:                0 /   2252 ( 0.00%)
    Direct:            0
    Preprocessed:      0
  Misses:           2252 /   2252 (100.0%)
Uncacheable calls:     1 /   2253 ( 0.04%)
Local storage:
  Cache size (GiB):  0.2 / 1024.0 ( 0.02%)
  Cleanups:          256
  Hits:                0 /   2252 ( 0.00%)
  Misses:           2252 /   2252 (100.0%)

@@@@@@@@@@ Stats after building build/b @@@@@@@@@@
Cacheable calls:    4504 /   4506 (99.96%)
  Hits:               71 /   4504 ( 1.58%)
    Direct:            0 /     71 ( 0.00%)
    Preprocessed:     71 /     71 (100.0%)
  Misses:           4433 /   4504 (98.42%)
Uncacheable calls:     2 /   4506 ( 0.04%)
Local storage:
  Cache size (GiB):  0.5 / 1024.0 ( 0.04%)
  Cleanups:          256
  Hits:               71 /   4504 ( 1.58%)
  Misses:           4433 /   4504 (98.42%)

Let's take a look at a build command to check what's going on:

$ ninja -C build/b -t commands lib/Support/CMakeFiles/LLVMSupport.dir/APInt.cpp.o
ccache /usr/bin/clang++ -DGTEST_HAS_RTTI=0 -D_GNU_SOURCE -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS -I/home/asb/exp/llvm-project/build/b/lib/Support -I/home/asb/exp/llvm-project/llvm/lib/Support -I/home/asb/exp/llvm-project/build/b/include -I/home/asb/exp/llvm-project/llvm/include -fPIC -fno-semantic-interposition -fvisibility-inlines-hidden -Werror=date-time -Werror=unguarded-availability-new -Wall -Wextra -Wno-unused-parameter -Wwrite-strings -Wcast-qual -Wmissing-field-initializers -pedantic -Wno-long-long -Wc++98-compat-extra-semi -Wimplicit-fallthrough -Wcovered-switch-default -Wno-noexcept-type -Wnon-virtual-dtor -Wdelete-non-virtual-dtor -Wsuggest-override -Wstring-conversion -Wmisleading-indentation -Wctad-maybe-unsupported -fdiagnostics-color -ffunction-sections -fdata-sections -Werror=global-constructors -O3 -DNDEBUG -std=c++17  -fno-exceptions -funwind-tables -fno-rtti -MD -MT lib/Support/CMakeFiles/LLVMSupport.dir/APInt.cpp.o -MF lib/Support/CMakeFiles/LLVMSupport.dir/APInt.cpp.o.d -o lib/Support/CMakeFiles/LLVMSupport.dir/APInt.cpp.o -c /home/asb/exp/llvm-project/llvm/lib/Support/APInt.cpp

We can see that, because LLVM generates header files, the compile command includes absolute -I include paths pointing into the build directory, which of course differ between build a and build b above, causing cache misses. Even if there were a workaround for the generated headers, we'd still fail to get cache hits when building from different llvm-project checkouts or worktrees.

Solution

Unsurprisingly, this is a common problem with ccache and it has good documentation on the solution. It advises:

  • Setting the base_dir ccache option to enable ccache's rewriting of absolute to relative paths for any path with that prefix.
  • Setting the absolute_paths_in_stderr ccache option in order to rewrite relative paths in stderr output to absolute (thus avoiding confusing error messages).
    • I have to admit that when trialling this by forcing an error in a header with # error "forced error", I'm not sure I see any difference in Clang's output when attempting to build LLVM.
  • If compiling with -g, use the -fdebug-prefix-map option.

The ccache changes can be actioned by:

ccache --set-config base_dir=/home
ccache --set-config absolute_paths_in_stderr=true

The -fdebug-prefix-map option can be enabled by setting -DLLVM_USE_RELATIVE_PATHS_IN_DEBUG_INFO=ON in your CMake invocation. However, as noted in the article addendum, this fails to handle cases where the absolute path is embedded in build artefacts (e.g. through asserts). Therefore, -DLLVM_USE_RELATIVE_PATHS_IN_FILES=ON should be used to set -ffile-prefix-map. The flag implies -fdebug-prefix-map, -fmacro-prefix-map, and others such as -fprofile-prefix-map.
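
As a rough sanity check (a sketch only; the object file and checkout paths below are just the examples used earlier in this article), you can search a produced object file for the absolute source prefix. With -ffile-prefix-map in effect, this should report zero matches:

strings build/b/lib/Support/CMakeFiles/LLVMSupport.dir/APInt.cpp.o \
  | grep -c "/home/asb/exp/llvm-project" || true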

Testing the solution

Repeating the cmake and ccache invocations from earlier, we see that build/b had almost a 100% hit rate:

@@@@@@@@@@ Stats after building build/a @@@@@@@@@@
Cacheable calls:    2252 /   2253 (99.96%)
  Hits:                0 /   2252 ( 0.00%)
    Direct:            0
    Preprocessed:      0
  Misses:           2252 /   2252 (100.0%)
Uncacheable calls:     1 /   2253 ( 0.04%)
Local storage:
  Cache size (GiB):  0.2 / 1024.0 ( 0.02%)
  Cleanups:          256
  Hits:                0 /   2252 ( 0.00%)
  Misses:           2252 /   2252 (100.0%)

@@@@@@@@@@ Stats after building build/b @@@@@@@@@@
Cacheable calls:    4504 /   4506 (99.96%)
  Hits:             2251 /   4504 (49.98%)
    Direct:         2251 /   2251 (100.0%)
    Preprocessed:      0 /   2251 ( 0.00%)
  Misses:           2253 /   4504 (50.02%)
Uncacheable calls:     2 /   4506 ( 0.04%)
Local storage:
  Cache size (GiB):  0.2 / 1024.0 ( 0.02%)
  Cleanups:          256
  Hits:             2251 /   4504 (49.98%)
  Misses:           2253 /   4504 (50.02%)

And additionally building llvm from a new worktree (i.e. a different absolute path to the source directory):

git worktree add -b main-wt1 wt1 && cd wt1
cmake -G Ninja \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_C_COMPILER=clang \
  -DCMAKE_CXX_COMPILER=clang++ \
  -DLLVM_ENABLE_LLD=ON \
  -DLLVM_TARGETS_TO_BUILD="X86" \
  -DCMAKE_{C,CXX}_COMPILER_LAUNCHER=ccache \
  -B build/c \
  -S llvm
cmake --build build/c
echo "@@@@@@@@@@ Stats after building build/c @@@@@@@@@@"
ccache -s

Which results in the following stats (i.e., another close to 100% hit rate):

@@@@@@@@@@ Stats after building build/c @@@@@@@@@@
Cacheable calls:    6756 /   6759 (99.96%)
  Hits:             4502 /   6756 (66.64%)
    Direct:         4502 /   4502 (100.0%)
    Preprocessed:      0 /   4502 ( 0.00%)
  Misses:           2254 /   6756 (33.36%)
Uncacheable calls:     3 /   6759 ( 0.04%)
Local storage:
  Cache size (GiB):  0.2 / 1024.0 ( 0.02%)
  Cleanups:          256
  Hits:             4502 /   6756 (66.64%)
  Misses:           2254 /   6756 (33.36%)

And building with -DLLVM_USE_RELATIVE_PATHS_IN_DEBUG_INFO=ON and -DCMAKE_BUILD_TYPE=Debug (NOTE: See end of article for an addendum that examines the preferred -DLLVM_USE_RELATIVE_PATHS_IN_FILES=ON option):

cmake -G Ninja \
  -DCMAKE_BUILD_TYPE=Debug \
  -DCMAKE_C_COMPILER=clang \
  -DCMAKE_CXX_COMPILER=clang++ \
  -DLLVM_ENABLE_LLD=ON \
  -DLLVM_TARGETS_TO_BUILD="X86" \
  -DCMAKE_{C,CXX}_COMPILER_LAUNCHER=ccache \
  -DLLVM_USE_RELATIVE_PATHS_IN_DEBUG_INFO=ON \
  -B build/debug_a \
  -S llvm
cmake --build build/debug_a
echo "@@@@@@@@@@ Stats after building build/debug_a @@@@@@@@@@"
ccache -s
cmake -G Ninja \
  -DCMAKE_BUILD_TYPE=Debug \
  -DCMAKE_C_COMPILER=clang \
  -DCMAKE_CXX_COMPILER=clang++ \
  -DLLVM_ENABLE_LLD=ON \
  -DLLVM_TARGETS_TO_BUILD="X86" \
  -DCMAKE_{C,CXX}_COMPILER_LAUNCHER=ccache \
  -DLLVM_USE_RELATIVE_PATHS_IN_DEBUG_INFO=ON \
  -B build/debug_b \
  -S llvm
cmake --build build/debug_b
echo "@@@@@@@@@@ Stats after building build/debug_b @@@@@@@@@@"
ccache -s

This results in no hits for debug_a and close to 100% hits for debug_b as expected:

@@@@@@@@@@ Stats after building build/debug_a @@@@@@@@@@
Cacheable calls:    9008 /   9012 (99.96%)
  Hits:             4502 /   9008 (49.98%)
    Direct:         4502 /   4502 (100.0%)
    Preprocessed:      0 /   4502 ( 0.00%)
  Misses:           4506 /   9008 (50.02%)
Uncacheable calls:     4 /   9012 ( 0.04%)
Local storage:
  Cache size (GiB):  3.1 / 1024.0 ( 0.31%)
  Cleanups:          256
  Hits:             4502 /   9008 (49.98%)
  Misses:           4506 /   9008 (50.02%)

@@@@@@@@@@ Stats after building build/debug_b @@@@@@@@@@
Cacheable calls:    11260 /  11265 (99.96%)
  Hits:              6753 /  11260 (59.97%)
    Direct:          6753 /   6753 (100.0%)
    Preprocessed:       0 /   6753 ( 0.00%)
  Misses:            4507 /  11260 (40.03%)
Uncacheable calls:      5 /  11265 ( 0.04%)
Local storage:
  Cache size (GiB):   3.2 / 1024.0 ( 0.31%)
  Cleanups:           256
  Hits:              6753 /  11260 (59.97%)
  Misses:            4507 /  11260 (40.03%)

Limitations

Rewriting paths to relative works in most cases, but you'll still experience cache misses if the location of your build directory relative to the source directory differs. This might happen if you compile directly in build/ in one checkout, but in build/foo in another, or if compiling outside of the llvm-project source tree altogether in one case, but within it (e.g. in build/) in another.

This is normally pretty easy to avoid, but is worth being aware of. For instance I find it helpful on LLVM buildbots I administer to be able to rapidly reproduce a previous build using ccache, but the default source vs build directory layout used during CI is different to what I normally use in day to day development.

Other helpful options

I was going to advertise inode_cache = true, but I see this is enabled by default since I last looked. Otherwise, file_clone = true (docs) makes sense for my case, where I'm on a filesystem with reflink support (XFS) and have plenty of space.
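
If you want to try file_clone yourself, it can be enabled with the same --set-config mechanism used earlier:

ccache --set-config file_clone=true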

Addendum: Ensuring cache hits in the presence of asserts builds

An earlier version of this article used -DLLVM_USE_RELATIVE_PATHS_IN_DEBUG_INFO=ON rather than -DLLVM_USE_RELATIVE_PATHS_IN_FILES=ON. The latter is necessary in order to get cache hits in cases where the absolute path would be embedded in the build artefacts by default (e.g. asserts). The following test demonstrates this flag working, and the test scenario is also set up to properly catch cases where the absolute path to the source directory changes (which the debug info test above didn't):

mkdir -p exp && cd exp
git clone --reference-if-able ~/llvm-project https://github.com/llvm/llvm-project/ llvm-project-a
for SUFFIX in b c d; do
  cp -a llvm-project-a llvm-project-$SUFFIX
done

# $1 = build suffix, $2 = build type, $3 = extra cmake args
do_build() {
  cd llvm-project-$1
  pwd
  cmake -G Ninja \
    -DCMAKE_BUILD_TYPE=$2 \
    -DLLVM_ENABLE_ASSERTIONS=ON \
    -DCMAKE_C_COMPILER=clang \
    -DCMAKE_CXX_COMPILER=clang++ \
    -DLLVM_ENABLE_LLD=ON \
    -DLLVM_TARGETS_TO_BUILD="X86" \
    -DCMAKE_{C,CXX}_COMPILER_LAUNCHER=ccache \
    $3 \
    -B build/$1 \
    -S llvm &> /dev/null
  cmake --build build/$1
  echo "@@@@@@@@@@ Stats after building $2 mode llvm-project-$1/build/$1 @@@@@@@@@@"
  ccache -s
  cd ..
}

echo "@@@@@@@@@@ Clearing ccache and doing release builds @@@@@@@@@@"
ccache -zC # Clear old ccache
do_build a Release "-DLLVM_USE_RELATIVE_PATHS_IN_DEBUG_INFO=ON"
do_build b Release "-DLLVM_USE_RELATIVE_PATHS_IN_DEBUG_INFO=ON"
do_build c Release "-DLLVM_USE_RELATIVE_PATHS_IN_FILES=ON"
do_build d Release "-DLLVM_USE_RELATIVE_PATHS_IN_FILES=ON"
echo "@@@@@@@@@@ Clearing ccache and doing debug builds @@@@@@@@@@"
ccache -zC
do_build a Debug "-DLLVM_USE_RELATIVE_PATHS_IN_DEBUG_INFO=ON"
do_build b Debug "-DLLVM_USE_RELATIVE_PATHS_IN_DEBUG_INFO=ON"
do_build c Debug "-DLLVM_USE_RELATIVE_PATHS_IN_FILES=ON"
do_build d Debug "-DLLVM_USE_RELATIVE_PATHS_IN_FILES=ON"


Article changelog
  • 2025-02-27: Initial publication date.

February 27, 2025 12:00 PM

February 26, 2025

Pawel Lampe

Working with WebKit and GStreamer logs in Emacs.

WebKit has grown into a massive codebase throughout the years. To make developers’ lives easier, it offers various subsystems and integrations. One such subsystem is a logging subsystem that offers the recording of textual logs describing an execution of the internal engine parts.

The logging subsystem in WebKit (as in any computer system) is usually used for both debugging and educational purposes. As WebKit is a widely-used piece of software that runs on everything ranging from desktop-class devices down to low-end embedded devices, it's not uncommon for logging to be the only way to debug when various limiting factors come into play. Such limiting factors don't have to be purely technical - it may also be that the software runs on some restricted system where direct debugging is not allowed.

Requirements for efficient work with textual logs #

Regardless of the reasons why logging is used, once the set of logs is produced, one can work with it according to the particular need. From my experience, efficient work with textual logs requires a tool with the following capabilities:

  1. Ability to search for a particular substring or regular expression.
  2. Ability to filter text lines according to the substring or regular expressions.
  3. Ability to highlight particular substrings.
  4. Ability to mark certain lines for separate examination (with extra notes if possible).
  5. Ability to save and restore the current state of work.

While all text editors should be able to provide requirement 1, requirements 2-5 are usually more tricky and text editors won’t support them out of the box. Fortunately, any modern extensible text editor should be able to support requirements 2-5 after some extra configuration.

Setting up Emacs to work with logs #

Throughout the following sections, I use Emacs, the classic “extensible, customizable, free/libre text editor”, to showcase how it can be set up and used to meet the above criteria and to make work with logs a gentle experience.

Emacs, just like any other text editor, provides support for requirement 1 from the previous section out of the box.

loccur - the minor mode for text filtering #

To support requirement 2, Emacs requires some extra mode. My recommendation for that is loccur - a minor mode that acts just like the classic *nix grep utility, yet directly in the editor. The benefit of that mode (over e.g. occur) is that it works in-place. Therefore it's very ergonomic and - as I'll show later - it works well in conjunction with the bookmarking mode.

Installation of loccur is very simple and can be done from within the built-in package manager:

M-x package-install RET loccur RET

With loccur installed, one can immediately start using it by calling M-x loccur RET <regex> RET. The figure below depicts an example of filtering with loccur.

highlight-symbol - the package with utility functions for text highlighting #

To support requirement 3, Emacs also requires the installation of an extra module. In that case my recommendation is highlight-symbol, which is a simple set of functions that enables basic text fragment highlighting on the fly.

Installation of this module is also very simple and boils down to:

M-x package-install RET highlight-symbol RET

With the above, it's very easy to get results like in the figure below just by moving the cursor around and using C-c h to toggle the highlight of the text at the current cursor position.

bm - the package with utility functions for buffer lines bookmarking #

Finally, to support requirements 4 and 5, Emacs requires one last extra package. This time my recommendation is bm, which is quite a powerful set of utilities for text bookmarking.

In this case, installation is also very simple:

M-x package-install RET bm RET

In a nutshell, the bm package brings some visual capabilities, shown in the figure below, as well as non-visual capabilities that will be discussed in further sections.

The final configuration #

Once all the necessary modules are installed, it's worth spending some time on configuration. With just a few simple tweaks it's possible to make working with logs simple and easily reproducible.

To avoid influencing other workflows, I recommend attaching as much configuration as possible to a major mode and setting that mode as the default for files with certain extensions. The configuration below uses a major mode called text-mode for working with logs and associates all files with a .log suffix with it. Moreover, the most critical commands of the modes installed in the previous sections are bound to key shortcuts. One last thing is to enable truncating the lines ((set-default 'truncate-lines t)) and highlighting the line that the cursor is currently on ((hl-line-mode 1)).

(add-to-list 'auto-mode-alist '("\\.log\\'" . text-mode))
(add-hook 'text-mode-hook
          (lambda ()
            (define-key text-mode-map (kbd "C-c t") 'bm-toggle)
            (define-key text-mode-map (kbd "C-c n") 'bm-next)
            (define-key text-mode-map (kbd "C-c p") 'bm-previous)
            (define-key text-mode-map (kbd "C-c h") 'highlight-symbol)
            (define-key text-mode-map (kbd "C-c C-c") 'highlight-symbol-next)
            (set-default 'truncate-lines t)
            (hl-line-mode 1)))

WebKit logs case study #

To show the Emacs workflow with the above configuration and modules, some logs are required first. It's very easy to get logs out of WebKit, and I'll additionally collect some GStreamer logs as well. For that, I'll build the WebKit GTK port from the latest revision of the WebKit repository. To make the build process easier, I'll use the WebKit container SDK.

Here’s the build command:

./Tools/Scripts/build-webkit --gtk --debug --cmakeargs="-DENABLE_JOURNALD_LOG=OFF"

The above command disables the ENABLE_JOURNALD_LOG build option so that logs are printed to stderr. This will result in the WebKit and GStreamer logs being bundled together as intended.

Once the build is ready, one can load any URL to get the logs. I've chosen a YouTube conformance test suite from 2021 and selected test case "39. PlaybackRateChange" to get some interesting entries from multimedia-related subsystems:

export GST_DEBUG_NO_COLOR=1
export GST_DEBUG=4,webkit*:7
export WEBKIT_DEBUG=Layout,Media=debug,Events=debug
export URL='https://ytlr-cert.appspot.com/2021/main.html'
./Tools/Scripts/run-minibrowser --gtk --debug --features=+LogsPageMessagesToSystemConsole "${URL}" &> log.log

The commands above reveal some interesting aspects of how to get certain logs. First of all, they specify a few environment variables:

  • GST_DEBUG=4,webkit*:7 - to enable GStreamer logs of level INFO (for all categories) and of level TRACE for the webkit* categories
  • GST_DEBUG_NO_COLOR=1 - to disable coloring of GStreamer logs
  • WEBKIT_DEBUG=Layout,Media=debug,Events=debug - to enable WebKit logs for a few interesting channels.

Moreover, the runtime preference LogsPageMessagesToSystemConsole is enabled so that console output produced by JavaScript code is logged as well.

The workflow #

Once the logs are collected, one can open them using Emacs and start making sense out of them by gradually exploring the flow of execution. In the below exercise, I intend to understand what happened from the multimedia perspective during the execution of the test case “39. PlaybackRateChange”.

The first step is usually to find the most critical lines that roughly mark the area in the file where the interesting things happen. In this case I propose using M-x loccur RET CONSOLE LOG RET to see which console logs the application itself printed. Once some lines are filtered, one can use the bm-toggle command (C-c t) to mark some lines for later examination (highlighted in orange), as in the figure below: Effect of filtering and marking some console logs.

For practice, I propose exiting the filtered view with M-x loccur RET and trying again to see what events the browser was dispatching, e.g. using M-x loccur RET node 0x7535d70700b0 VIDEO RET, as in the figure below: Effect of filtering and marking some video node events.

In general, the combination of loccur and substring/regexp searches should be very convenient for quickly exploring various types of logs while marking lines along the way. For very important log lines, one can additionally use the bm-bookmark-annotate command to add extra notes for later.

Once some interesting log lines are marked, the most basic thing to do is to jump between them using bm-next (C-c n) and bm-previous (C-c p). However, the true power of the bm mode comes with the use of M-x bm-show RET to get a view containing only the lines marked with bm-toggle (originally highlighted in orange), as in the figure below: Effect of invoking bm-show.

This view is especially useful as it shows only the lines deliberately marked using bm-toggle and allows one to quickly jump to them in the original file. Moreover, the lines are displayed in the order they appear in the original file, so it's very easy to see the unified flow of the system and start making sense of the data presented. What's even more interesting, the view also contains the line numbers from the original file as well as any manually added annotations. The line numbers are especially useful as they can be used for resuming the work after ending the Emacs session - which I'll describe later in this section.

When the *bm-bookmarks* view is rendered, the only problem left is that the lines are hard to read, as they are displayed using a single color. To overcome that problem one can use the commands from the highlight-symbol package via the C-c h shortcut defined in the configuration. The result of highlighting some strings is depicted in the figure below: Highlighting strings in bm-show.

With some colors added, it’s much easier to read the logs and focus on essential parts.

Saving and resuming the session #

On some rare occasions it may be necessary to close the Emacs session before the work with a certain log file is done, and to resume it later. For that, a simple trick is to open the current set of bookmarks with M-x bm-show RET and then save that buffer to a file. Personally, I just create a file with the same name as the log file, but with a .bm suffix - so for log.log it's log.log.bm.

Once the session is resumed, it is enough to open both the log.log and log.log.bm files side by side and create a simple ad-hoc macro (or a small helper function) that uses the line numbers from log.log.bm to mark those lines again in the log.log file, as in the animation below: Resuming the session.

As shown in the above gif, within a few seconds all the marks are applied in the buffer with the log.log file and the work can resume from that point, i.e. one can jump around using bm, add new marks, etc.
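
For instance, here is a minimal Emacs Lisp sketch of such a helper (the function name is arbitrary, and it assumes each line of the saved .bm file starts with the original line number, as in the *bm-bookmarks* view):

(defun my/bm-restore-from-file (bm-file)
  "Re-apply bm bookmarks in the current buffer from line numbers saved in BM-FILE."
  (interactive "fBookmark list file: ")
  (dolist (line (with-temp-buffer
                  (insert-file-contents bm-file)
                  (split-string (buffer-string) "\n" t)))
    ;; Extract the leading line number, if any, and toggle a bookmark there.
    (when (string-match "^\\s-*\\([0-9]+\\)" line)
      (save-excursion
        (goto-char (point-min))
        (forward-line (1- (string-to-number (match-string 1 line))))
        (bm-toggle)))))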

Summary #

Although the above approach may not be ideal for everybody, I find it fairly ergonomic and smooth, and it covers all the requirements I identified earlier. I'm certain that editors other than Emacs can be set up to allow the same or a very similar flow, yet any particular configurations are left for the reader to explore.

February 26, 2025 12:00 AM

February 25, 2025

Manuel Rego

Nice blog post by my colleague Frédéric Wang about some issues when writing Web Platform Tests https://frederic-wang.fr//2025/02/21/five-testharness-dot-js-mistakes/

February 25, 2025 12:00 AM

February 21, 2025

Frédéric Wang

Web standards implementation internship

Igalia is looking for students for its software development “internships” 1

Brief description:

  • Contributing to free software.
  • Fully remote.
  • 450 hours spread over 3 or 6 months.
  • Compensation of €7,000.
  • Mentoring by an Igalia engineer.
  • Fighting professional discrimination in the IT sector.

Every year, I run the “Web standards implementation” internship, which consists of modifying browsers (Chromium, Firefox, Safari…) in order to improve their support of Web technologies (HTML, CSS, DOM…). In particular, this involves studying the corresponding specifications and writing conformance tests. Note that this is not a Web development internship but a C++ development one.

I invite you to read My internship with Igalia 1 by my colleague Delan Azabani for a concrete example. In recent years, in addition to communicating via instant messaging, I have been setting up weekly video calls, which have proven quite effective in helping interns make progress.

A6 flyer

I started taking LSF (French Sign Language) classes a few months ago and have attended several IVT shows, notably « Parle plus fort ! », which humorously depicts the difficulties Deaf people face at work. This year, I am considering taking on a Deaf intern in order to contribute to a better inclusion of Deaf people in the workplace. I think it will also be a positive experience for my company and for myself.

Desired profile:

  • Computer science student at the bachelor's or master's level.
  • Living in the Paris region (to make mentoring easier).
  • Able to read and write English (and communicate in LSF).
  • Interested in Web technologies.
  • Knowledge of C/C++ development.

If you are interested, applications can be submitted here until April 4, 2025.

  1. Igalia's “Coding Experience” program does not necessarily correspond to an internship in the French sense of the term. If you would like it to be an official internship with an agreement (stage conventionné), mention this when applying and we can find a solution.

February 21, 2025 11:00 PM

February 20, 2025

Frédéric Wang

Five testharness.js mistakes

Introduction

I recently led a small training session at Igalia where I proposed to find mistakes in five small testharness.js tests I wrote. These mistakes are based on actual issues I found in official web platform tests, or on mistakes I made myself in the past while writing tests, so I believe they would be useful to know. The feedback from my teammates was quite positive, with very good participation and many ideas. They suggested I write a blog post about it, so here it is.

Please read the tests carefully and try to find the mistakes before looking at the proposed fixes…

1. Multiple tests in one loop

We often need to perform identical assertions for a set of similar objects. A good practice is to split such checks into multiple test() calls, so that it’s easier to figure out which of the objects are causing failures. Below, I’m testing the reflected autoplay attribute on the <audio> and <video> elements. What small mistake did I make?

<!DOCTYPE html>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script>
  ["audio", "video"].forEach(tagName => {
    test(function() {
      let element = document.createElement(tagName);
      assert_equals(element.autoplay, false, "inital value");
      element.setAttribute("autoplay", "autoplay");
      assert_equals(element.autoplay, true, "after setting attribute");
      element.removeAttribute("autoplay");
      assert_equals(element.autoplay, false, "after removing attribute");
    }, "Basic test for HTMLMediaElement.autoplay.");
  });
</script>

Proposed fix

Each loop iteration creates one test, but they all have the name "Basic test for HTMLMediaElement.autoplay.". Because this name identifies the test in various places (e.g. failure expectations), it must be unique to be useful. These tests will even cause a “Harness status: Error” with the message “duplicate test name”.

One way to solve that is to move the loop iteration into the test(), which will fix the error but won’t help you with fine-grained failure reports. We can instead use a different description for each iteration:

       assert_equals(element.autoplay, true, "after setting attribute");
       element.removeAttribute("autoplay");
       assert_equals(element.autoplay, false, "after removing attribute");
-    }, "Basic test for HTMLMediaElement.autoplay.");
+    }, `Basic test for HTMLMediaElement.autoplay (${tagName} element).`);
   });
 </script>

2. Cleanup between tests

Sometimes, it is convenient to reuse objects (e.g. DOM elements) for several test() calls, and some cleanup may be necessary. For instance, in the following test, I’m checking that setting the class attribute via setAttribute() or setAttributeNS() is properly reflected on the className property. However, I must clear the className at the end of the first test(), so that we can really catch the failure in the second test() if, for example, setAttributeNS() does not modify the className because of an implementation bug. What’s wrong with this approach?

<!DOCTYPE html>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<div id="element"></div>
<script>
  test(function() {
    element.setAttribute("class", "myClass");
    assert_equals(element.className, "myClass");
    element.className = "";
  }, "Setting the class attribute via setAttribute().");

  test(function() {
    element.setAttributeNS(null, "class", "myClass");
    assert_equals(element.className, "myClass");
    element.className = "";
  }, "Setting the class attribute via setAttributeNS().");
</script>

Proposed fix

In general, it is difficult to guarantee that a final cleanup is executed. In this particular case, for example, if the assert_equals() fails because of bad browser implementation, then an exception is thrown and the rest of the function is not executed.

Fortunately, testharness.js provides a better way to perform cleanup after a test execution:

-  test(function() {
+  function resetClassName() { element.className = ""; }
+
+  test(function(t) {
+    t.add_cleanup(resetClassName);
     element.setAttribute("class", "myClass");
     assert_equals(element.className, "myClass");
-    element.className = "";
   }, "Setting the class attribute via setAttribute().");

-  test(function() {
+  test(function(t) {
+    t.add_cleanup(resetClassName);
     element.setAttributeNS(null, "class", "myClass");
     assert_equals(element.className, "myClass");
-    element.className = "";
   }, "Setting the class attribute via setAttributeNS().");

3. Checking whether an exception is thrown

Another very frequent test pattern involves checking whether a Web API throws an exception. Here, I’m trying to use DOMParser.parseFromString() to parse a small MathML document. The HTML spec says that it should throw a TypeError if one specifies the MathML MIME type. The second test() asserts that the rest of the try branch is not executed and that the correct exception type is found in the catch branch. Is this approach correct? Can the test be rewritten in a better way?

<!DOCTYPE html>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script>
  const parser = new DOMParser();
  const mathmlSource = `<?xml version="1.0"?>
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mfrac>
    <mn>1</mn>
    <mi>x</mi>
  </mfrac>
</math>`;

  test(function() {
    let doc = parser.parseFromString(mathmlSource, "application/xml");
    assert_equals(doc.documentElement.tagName, "math");
  }, "DOMParser's parseFromString() accepts application/xml");

  test(function() {
    try {
      parser.parseFromString(mathmlSource, "application/mathml+xml");
      assert_unreached();
    } catch(e) {
      assert_true(e instanceof TypeError);
    }
  }, "DOMParser's parseFromString() rejects application/mathml+xml");
</script>

Proposed fix

If the assert_unreached() is executed because of an implementation bug with parseFromString(), then the assertion will actually throw an exception. That exception won’t be a TypeError, so the test will still fail because of the assert_true(), but the failure report will look a bit confusing.

In this situation, it’s better to use the assert_throws_* APIs:

   test(function() {
-    try {
-      parser.parseFromString(mathmlSource, "application/mathml+xml");
-      assert_unreached();
-    } catch(e) {
-      assert_true(e instanceof TypeError);
-    }
+    assert_throws_js(TypeError,
+        _ => parser.parseFromString(mathmlSource, "application/mathml+xml"));
   }, "DOMParser's parseFromString() rejects application/mathml+xml");

4. Waiting for an event listener to be called

The following test verifies a very basic feature: clicking a button triggers the registered event listener. We use the (asynchronous) testdriver API test_driver.click() to emulate the user click, and a promise_test() call to wait for the click event listener to be called. The test may time out if there is something wrong in the browser implementation, but do you see a risk of flaky failures?

Note: the testdriver API only works when running tests automatically. If you run the test manually, you need to click the button yourself.

<!DOCTYPE html>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script src="/resources/testdriver.js"></script>
<script src="/resources/testdriver-vendor.js"></script>
<button id="button">Click me to run manually</button>
<script>
  promise_test(function() {
    test_driver.click(button);
    return new Promise(resolve => {
      button.addEventListener("click", resolve);
    });
  }, "Clicking the button triggers registered click event handler.");
</script>

Proposed fix

The problem I wanted to show here is that we are sending the click event before actually registering the listener. The test would likely still work, because test_driver.click() is asynchronous and communication with the test automation scripts is slow, whereas registering the event listener is synchronous.

But rather than making this kind of assumption, which poses a risk of flaky failures as well as making the test hard to read, I prefer to just move the statement that triggers the event into the Promise, after the listener registration:

 <button id="button">Click me to run manually</button>
 <script>
   promise_test(function() {
-    test_driver.click(button);
     return new Promise(resolve => {
       button.addEventListener("click", resolve);
+      test_driver.click(button);
     });
   }, "Clicking the button triggers registered click event handler.");
 </script>

My colleagues also pointed out that if the promise returned by test_driver.click() fails, then a “Harness status: Error” could actually be reported with “Unhandled rejection”. We can add a catch to handle this case:

 <button id="button">Click me to run manually</button>
 <script>
   promise_test(function() {
-    return new Promise(resolve => {
+    return new Promise((resolve, reject) => {
       button.addEventListener("click", resolve);
-      test_driver.click(button);
+      test_driver.click(button).catch(reject);
     });
   }, "Clicking the button triggers registered click event handler.");
 </script>

5. Dealing with asynchronous resources

It’s very common to deal with asynchronous resources in web platform tests. The following test case verifies the behavior of a frame with lazy loading: it is initially outside the viewport (so not loaded) and then scrolled into the viewport (which should trigger its load). The actual loading of the frame is tested via the window name of /common/window-name-setter.html (should be “spices”). Again, this test may time out if there is something wrong in the browser implementation, but can you see a way to make the test a bit more robust?

Side question: the <div id="log"></div> and add_cleanup() are not really necessary for this test to work, so what’s the point of using them? Can you think of one?

<!DOCTYPE html>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<style>
  #lazyframe { margin-top: 10000px; }
</style>
<div id="log"></div>
<iframe id="lazyframe" loading="lazy"
        src="/common/window-name-setter.html"></iframe>
<script>
  promise_test(function() {
    return new Promise(resolve => {
      window.addEventListener("load", () => {
        assert_not_equals(lazyframe.contentWindow.name, "spices");
        resolve();
      });
    });
  }, "lazy frame not loaded after page load");
  promise_test(t => {
    t.add_cleanup(_ => window.scrollTo(0, 0));
    return new Promise(resolve => {
      lazyframe.addEventListener('load', () => {
        assert_equals(lazyframe.contentWindow.name, "spices");
        resolve();
      });
      lazyframe.scrollIntoView();
    });
  }, "lazy frame loaded after appearing in the viewport");
</script>

Proposed fix

This is similar to what we discussed in the previous tests. If the assert_equals() in the listener fails, then an exception is thrown, but it won’t be caught by the testharness.js framework. A “Harness status: Error” is reported, but the test will only complete after the timeout. This can slow down test execution, especially if this pattern is repeated for several tests.

To make sure we report the failure immediately in that case, we can instead reject the promise if the equality does not hold, or even better, place the assert_equals() check after the promise resolution:

 <script>
   promise_test(function() {
     return new Promise(resolve => {
-      window.addEventListener("load", () => {
-        assert_not_equals(lazyframe.contentWindow.name, "spices");
-        resolve();
-      });
-    });
+      window.addEventListener("load", resolve);
+    }).then(_ => {
+      assert_not_equals(lazyframe.contentWindow.name, "spices");
+    });
   }, "lazy frame not loaded after page load");
   promise_test(t => {
     t.add_cleanup(_ => window.scrollTo(0, 0));
     return new Promise(resolve => {
-      lazyframe.addEventListener('load', () => {
-        assert_equals(lazyframe.contentWindow.name, "spices");
-        resolve();
-      });
+      lazyframe.addEventListener('load', resolve);
       lazyframe.scrollIntoView();
+    }).then(_ => {
+      assert_equals(lazyframe.contentWindow.name, "spices");
     });
   }, "lazy frame loaded after appearing in the viewport");
 </script>

Regarding the side question, if you run the test by opening the page in the browser, then the report will be appended at the bottom of the page by default. But lazyframe has a very large height, and the page may be scrolled to some other location. An explicit <div id="log"> ensures the report is inserted inside that div at the top of the page, while the add_cleanup() ensures that we scroll back to that location after test execution.

February 20, 2025 11:00 PM

Maksim Sisov

Bisecting Electron with Electron/Fiddle.

  1. Electron Fiddle
  2. Electron Releases

Recently, I have been working on an issue in Electron which required bisecting to find the exact version of Electron where the regression happened.

Some quick research did not reveal any guides, but as my search progressed, I found one interesting commit - feat: add Bisect helper.

Electron Fiddle

Fiddle is an Electron playground that allows developers to experiment with Electron APIs. It has a quick startup template, which you can change as you wish. You can save a fiddle locally or as a GitHub Gist, which can be shared with anyone by simply entering the Gist URL in the address bar.

Moreover, you can choose what Electron version you wish to use - from stable to nightly releases.

Fiddle

Electron Releases

You can run a fiddle using any version of Electron you wish - stable, beta, or nightly. You can even run a fiddle with obsolete versions, which is super handy when comparing behaviour between different versions.

An option to choose the Electron version you wish to use can be found at the top-left corner of the Fiddle window.

Once pressed, you can use filter to choose any Electron version you wish.

However, you may not find beta or nightly versions in the filter. For that, go to Settings (a gear icon on the left of the filter), then Electron, and select the desired channels.

Now, you can access all the available Electron versions and try any of them on the fly.

I hope this small guide helps you to triage your Electron problems :)))

by Maksim Sisov (msisov@igalia.com) at February 20, 2025 12:00 AM

February 19, 2025

Nick Yamane

Chromium Ozone/Wayland: The Last Mile Stretch

Hey there! I’m glad to finally start paying my blogging debt :) as this is something I’ve been planning to do for quite some time now. To get the ball rolling, I’ve shared some bits about me in my very first blog post Olá Mundo.

In this article, I’m going to walk through what we’ve been working on since last year in the Chromium Ozone/Wayland project, on which I’ve been involved (directly or indirectly) since I’ve joined Igalia back in 2018.

by nickdiego@igalia.com (Nick Yamane) at February 19, 2025 01:00 PM

Manuel Rego

Announcing the Web Engines Hackfest 2025

Igalia is arranging the twelfth annual Web Engines Hackfest, which will be held on Monday 2nd June through Wednesday 4th June. As usual, this is a hybrid event, at Palexco in A Coruña (Galicia, Spain) as well as remotely.

Registration is now open:

Picture of sunset in A Coruña from June 2024 at Riazor beach where the whole Orzán bay can be seen.
Sunset in A Coruña from Riazor beach (June 2024)

The Web Engines Hackfest is an event where folks working on various parts of the web platform gather for a few days to share knowledge and discuss a variety of topics. These topics include web standards, browser engines, JavaScript engines, and all the related technology around them. Last year, we had eight talks (watch them on YouTube) and 15 breakout sessions (read them on GitHub).

A wide range of experts with a diverse set of profiles and skills attend each year, so if you are working on the web platform, this event is a great opportunity to chat with people that are both developing the standards and working on the implementations. We’re really grateful for all the people that regularly join us for this event; you are the ones that make this event something useful and interesting for everyone! 🙏

Really enjoying Web Engines Hackfest by @igalia once again. Recommended for everyone interested in web technology.

— Anne van Kesteren (@annevk@hachyderm.io) June 05, 2024

The breakout sessions are probably the most interesting part of the event. Many different topics are discussed there, from high-level issues like how to build a new feature for the web platform, to lower-level efforts like the new WebKit SDK container. Together with the hallway discussions and impromptu meetings, the Web Engines Hackfest is an invaluable experience.

Talk by Stephanie Stimac about Sustainable Futures in the Web Engines Hackfest 2024

Big shout-out to Igalia for organising the Web Engines Hackfest every year since 2014, as well as the original WebKitGTK+ Hackfest starting in 2009. The event has grown and we’re now close to 100 onsite participants with representation from all major browser vendors. If your organization is interested in helping make this event possible, please contact us regarding our sponsorship options.

See you all in June!

February 19, 2025 07:55 AM

Stephen Chenney

Canvas Localization Support

The Web must support all users, with a multitude of languages and writing styles. The web platform supports localization through the HTTP Content-Language representation header, while HTML defines the lang and dir attributes. Yet the canvas element has no direct way of specifying the language for text drawing, nor good support for controlling the writing direction. Such localization controls the choice of characters from a given font and the bidi ordering.

Canvas Localization Today #

In browsers at the time of writing, a <canvas> element in a web page uses the HTML lang attribute of the element for glyph selection, and uses the CSS direction property for the text direction. Note that in all browsers the CSS direction property will have the HTML dir value unless otherwise set. In addition, there is an explicit direction attribute on the <canvas> element’s CanvasRenderingContext2D to control the text drawing direction. That attribute accepts the special "inherit" value, the default, by which the direction is taken from the element.

This example demonstrates the current behavior for language in canvas text.

<div>
<h3>Canvas lang="en"</h3>
<canvas id="en" width="200px" height="60px" lang="en"></canvas>
</div>
<div>
<h3>Canvas lang="tr"</h3>
<canvas id="tr" width="200px" height="60px" lang="tr"></canvas>
</div>
<script>
let en_context = document.getElementById('en').getContext('2d');
let tr_context = document.getElementById('tr').getContext('2d');

function drawText(context) {
context.font = '20px Lato-Medium';
context.fillStyle = 'black';
context.fillText('finish', 50, 20);
}

function run_tests() {
drawText(en_context);
drawText(tr_context);
};

// See the example for the code to load the font
</script>

The language comes from the lang attribute on the canvas element, and here it controls the use of ligatures.

OffscreenCanvas provides no such localization support. The "inherit" text direction is always left-to-right and the language is always the default browser language. As a result, in many locales text content will be drawn differently in offscreen vs. DOM canvases. This is a significant problem given offscreen canvas is the preferred method for high performance applications.

The new canvas lang attribute #

The first step to improving localization is giving authors explicit control over the content language for canvas text metrics and drawing. The HTML WHATWG standards group has agreed to add the lang attribute to CanvasRenderingContext2D and OffscreenCanvasRenderingContext2D. The attribute takes any value that an HTML lang attribute can take, or the "inherit" value specifying that the language be taken from the <canvas> element.

This example shows the simplest use of the new attribute:

<script>
function run_test(language_string) {
let context = document.getElementById(language_string).getContext('2d');
context.lang = language_string;
context.font = '20px Lato-Medium';
context.fillStyle = 'black';
context.fillText('finish', 50, 20);
}

let latoMediumFontFace = new FontFace(
// Lato-Medium is a font with language specific ligatures.
'Lato-Medium',
'url(../fonts/Lato-Medium.ttf)'
);

latoMediumFontFace.load().then((font) => {
document.fonts.add(font);
run_test('en');
run_test('tr');
});
</script>

The line context.lang = language_string; sets the language for text to the given string, which can be any valid BCP 47 language tag. In this case we use generic English (“en”) and Turkish (“tr”).

The default value for the lang attribute is "inherit" whereby the text uses the language of the canvas. The first example works as it always has, where the canvas text inherits the language of the element.

Offscreen canvas, however, may now use the lang attribute to set the content language. This was not possible before.

  • The language may be set explicitly by setting the lang attribute in the offscreen context.

  • The language may be inherited from the most intuitive source, such as a canvas element that the offscreen was transferred from (the placeholder canvas) or the document in which the Offscreen canvas is created.

Let’s explore these situations, starting with the explicit setting.

offscreen_ctx.lang = 'tr';
offscreen_ctx.font = '20px Lato-Medium';
offscreen_ctx.fillStyle = 'black';
offscreen_ctx.fillText('finish', 50, 20);

The language for the offscreen context is explicitly set to Turkish, and the font uses that language to render appropriate glyphs (ligatures or not, in this case).

When the offscreen is created from a canvas element, through the transferControlToOffscreen() method, the language is taken from the canvas element. Here is an example:

<!DOCTYPE html>
<meta charset="utf-8">
<div>
<h3>Canvas lang="tr"</h3>
<canvas lang="tr" id="tr" width="200px" height="60px"></canvas>
</div>
<script>
let context = document.getElementById("tr").transferControlToOffscreen().getContext("2d");

// The default value for lang is "inherit". The canvas that transferred this
// offscreen context has a lang="tr" attribute, so the language used for text
// is "tr".
context.font = '20px Lato-Medium';
context.fillStyle = 'black';
context.fillText('finish', 50, 20);
</script>

The value used for "inherit" is captured when control is transferred. If the language on the original canvas element is subsequently changed, the language used for "inherit" on the offscreen does not change.

When the offscreen canvas is created directly in script, there is no canvas element to use, so "inherit" takes the value from the document element for the realm the script is running in. Here’s an example:

<!DOCTYPE html>
<html lang="tr">
<meta charset="utf-8">
<div>
<h3>Canvas lang="tr"</h3>
<canvas id="tr" width="200px" height="60px"></canvas>
</div>
<script>
// This is the output canvas.
var canvas = document.getElementById('tr');
var bitmap_ctx = canvas.getContext('bitmaprenderer');

// A newly constructed offscreen canvas.
// Note the lang attribute on the document element, in this case "tr"
// is stored in the offscreen when it is created.
var offscreen = new OffscreenCanvas(canvas.width, canvas.height);
var offscreen_ctx = offscreen.getContext('2d');

offscreen_ctx.font = '20px Lato-Medium';
offscreen_ctx.fillStyle = 'black';
offscreen_ctx.fillText('finish', 50, 20);

// Transfer the OffscreenCanvas image to the output canvas.
const bitmap = offscreen.transferToImageBitmap();
bitmap_ctx.transferFromImageBitmap(bitmap);
</script>
</html>

Finally, when the offscreen canvas is transferred to a worker, the worker will use the offscreen's "inherit" language for its own "inherit" value. This is a complete example, but the key snippet is:

// Create a canvas to serve as the placeholder for the offscreen worker.
const placeholder_canvas = document.createElement('canvas');
placeholder_canvas.setAttribute('width', '200');
placeholder_canvas.setAttribute('height', '60');
// Set the `lang` attribute on the placeholder canvas`
placeholder_canvas.setAttribute('lang', 'tr');

// Create the offscreen from the placeholder. The offscreen will get the
// language from this placeholder, in this case 'tr'.
const offscreen = placeholder_canvas.transferControlToOffscreen();

// Create the worker and transfer the offscreen to it. The language is
// transferred along with the offscreen, so content rendered in the
// offscreen will use 'tr' for the language (unless the `lang` is
// set explicitly on the context in the worker).
const worker = new Worker('canvas-lang-inherit-worker.js');
worker.postMessage({canvas: offscreen}, [offscreen]);

The lang canvas text attribute is implemented in Chrome 135 and later when “Experimental Web Platform Features” is enabled or the command line flag --enable-blink-features=CanvasTextLang is used.

Fixing direction for offscreen canvas #

The problem of direction = "inherit" for OffscreenCanvas is solved along the same lines as lang: Capture the inherited value at creation from the placeholder <canvas> element or the document hosting the script that created the offscreen canvas.

All of the examples above for lang work equally well for direction, where the HTML element attribute is dir.
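
For illustration, here is a minimal sketch (assuming the capture-at-transfer behavior described above; the element id and sample string are arbitrary) of direction being inherited by an offscreen context from a dir="rtl" canvas:

<canvas id="rtl" dir="rtl" width="200" height="60"></canvas>
<script>
// The offscreen context created from a dir="rtl" canvas captures "rtl"
// as the value used for its default direction = "inherit".
const ctx = document.getElementById('rtl').transferControlToOffscreen().getContext('2d');
ctx.font = '20px sans-serif';
ctx.fillStyle = 'black';
// With an RTL direction, the default textAlign of "start" anchors the text
// at its right edge, so it extends to the left of x = 180.
ctx.fillText('مرحبا', 180, 40);
</script>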

Thanks #

Canvas lang attribute support in Chromium was implemented by Igalia S.L. funded by Bloomberg L.P.

February 19, 2025 12:00 AM

February 18, 2025

Igalia WebKit Team

WebKit Igalia Periodical #14

Update on what happened in WebKit in the week from February 11 to February 18.

Cross-Port 🐱

Multimedia 🎥

GStreamer-based multimedia support for WebKit, including (but not limited to) playback, capture, WebAudio, WebCodecs, and WebRTC.

Several multimedia-related optimizations landed this week, memory allocations were reduced in the video rendering code path and in the MediaStreamAudioDestinationNode implementation.

On the WebRTC front, the GstWebRTC backend has gained support for stats rate limiting, and the send/receive buffer sizes were adjusted.

The GStreamer video frame converter, used to show video frames in the canvas or WebGL, has been fixed to use the right GL context and now supports DMA-BUF video frames, too.

Support was added for the eotf additional MIME type parameter when checking for supported multimedia content types. This is required by some streaming sites, such as YouTube TV.

The GStreamer-based WebCodecs AudioData::copyTo() is now spec compliant, handling all possible format conversions defined in the WebCodecs specification.

Infrastructure 🏗️

The WebKit ports that use CMake as their build system—as is the case for the GTK and WPE ones—now enable C++ library assertions by default, when building against GNU libstdc++ or LLVM's libc++. This adds lightweight runtime checks to a number of C++ library facilities, mostly aimed at detecting out-of-bounds memory access, and does not have a measurable performance impact on typical browsing workloads.

A number of Linux distributions were already enabling these assertions as part of their security hardening efforts (e.g. Fedora or Gentoo) and they do help find actual bugs. As a matter of fact, a number of issues had to be fixed before all the WebKit API and layout tests passed with the assertions enabled and the patch could be merged! Going forward, this will prevent accidentally introducing new bugs due to wrong usage of the C++ standard library.

That’s all for this week!

by Igalia WebKit Team at February 18, 2025 11:34 AM

Maksim Sisov

Unmasking Hidden Floating-Point Errors in Chromium’s Ozone/Wayland.

  1. The IEEE 754 Standard and Floating-Point Imprecision
  2. How the libwayland Update Exposed the Issue
  3. Chromium’s Targeted Fix: RoundToNearestThousandth

Floating-point arithmetic under the IEEE 754 standard introduces subtle precision errors due to the inherent limitations of representing decimal numbers in binary form. In graphical applications like Chromium, even tiny discrepancies can manifest as rendering inconsistencies. Recent updates in libwayland’s conversion routines have uncovered such a floating-point precision issue in Chromium’s Ozone/Wayland layer—one that was previously masked by legacy conversion behavior.

The IEEE 754 Standard and Floating-Point Imprecision

The IEEE 754 standard governs how floating-point numbers are represented and manipulated in most modern computing systems. Because many decimal numbers (for example, 0.2) cannot be exactly represented in a binary format limited to 32 or 64 bits, they are actually stored as approximations.

This limited precision means that arithmetic operations can introduce small errors. In many cases, these errors are negligible, but in precision-sensitive environments like graphical rendering, even minor inaccuracies can lead to visible artifacts.

The standard has 3 basic components:

  • The sign of the mantissa - in other words, the sign bit.
  • The biased exponent - this field is designed to encode both positive and negative exponents by adding a fixed bias to the actual exponent, thereby converting it into a non-negative stored form.
  • The normalised mantissa - this component represents the significant digits of a number in scientific or floating-point notation. In normalized form, the mantissa is adjusted so that only one nonzero digit - specifically, a single "1" - appears immediately to the left of the radix point.

Consider the number 0.2. Its binary representation is an infinitely repeating fraction:

0.2 = 0.0011 0011 0011 … (in binary)

Note that the pattern of 0011 repeats indefinitely.

If 0.2 is represented in IEEE 754 single-precision (32-bit) form, it takes the following form -

0 01111100 10011001100110011001101

As a result, the repeating nature of these binary representations leads to approximation and precision challenges when dealing with floating-point arithmetic.

0.1 + 0.2 != 0.3
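
A tiny C++ snippet (any language using IEEE 754 doubles behaves the same way) makes this visible:

#include <cstdio>

int main() {
  double sum = 0.1 + 0.2;
  std::printf("%.17g\n", sum);                              // prints 0.30000000000000004
  std::printf("%s\n", sum == 0.3 ? "equal" : "not equal");  // prints "not equal"
  return 0;
}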

How the libwayland Update Exposed the Issue

Libwayland historically provided a utility to convert integer values into a fixed-point format (wl_fixed) for use in various graphical calculations. However, it carried an unintended side effect: an implicit rounding of the result. This rounding acted as a “safety net,” smoothing over the small imprecisions that naturally occur with floating-point arithmetic.

An update to libwayland refined the conversion routine by making it more precise. The function responsible for converting floating-point values to fixed-point values — commonly used in operations like wl_fixed_to_double — no longer introduced the implicit rounding that was part of the old implementation. With this improved precision, the floating-point discrepancies, previously hidden by the rounding effect, began to surface in the Chromium Ozone/Wayland layer, causing subtle yet noticeable differences in viewport calculations.

Chromium’s Targeted Fix: RoundToNearestThousandth

To restore consistency in graphical rendering following the libwayland update, we needed to address the newly exposed precision error. In commit 6187087, the solution was to reintroduce explicit rounding within the Ozone/Wayland layer: a utility function named RoundToNearestThousandth rounds the viewport source value to the nearest thousandth. This explicit rounding step effectively recreates the smoothing effect that the old implicit conversion provided, ensuring that minor floating-point errors do not result in visible inconsistencies.
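
A minimal sketch of such a helper (not the actual Chromium code, just an illustration of the behavior described above) could look like this:

#include <cmath>

// Rounds a value to the nearest thousandth, e.g. 0.30000000000000004 -> 0.3.
double RoundToNearestThousandth(double value) {
  return std::round(value * 1000.0) / 1000.0;
}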

By aligning the rounding behavior of the Ozone/Wayland component with that of Chromium’s cc/viz module, the fix ensures consistent and reliable rendering across the browser’s graphical stack. This deliberate approach to handling floating-point imprecision highlights the importance of explicit error management in complex software systems.

by Maksim Sisov (msisov@igalia.com) at February 18, 2025 12:00 AM

Igalia Compilers Team

Sharing SpiderMonkey tests with the world

At Igalia, we believe in building open and interoperable platforms, and we’ve found that for such platforms to succeed, it is essential to combine a clear standard with an extensive test suite. We’ve worked on a number of such test suites, ranging from the official JavaScript conformance test suite test262 and web-platform-tests, which covers the rest of the web platform, to the Khronos Vulkan and OpenGL CTS. Our experience has consistently shown that a thorough test suite that is easy for implementers to run helps significantly in getting different products to interoperate in the real world.

An important way to maximize the coverage of such a test suite is to encourage implementers to share the tests they are writing as part of their implementation work. In the first place, it is helpful to share new tests – a great success story here is the web-platform-tests project, which has two-way synchronization tooling for the major browsers. This allows developers to write tests directly within their own project, which are then shared more or less automatically. However, for mature platforms, especially when the platform and implementations are older than the centralised test suite, there is often a large backlog of existing tests. We would love to see more of these tests made available.

During 2024, we looked in particular at contributing such backlog tests to test262. We identified SpiderMonkey’s non262 suite as a potential source of interesting tests for this purpose. This test suite is interesting for several reasons.

In the first place, it is part of the larger jstests suite, which also contains the upstream test262 tests (hence the name). This meant we did not expect architectural issues to crop up when upstreaming those tests. Second, as a test suite built by JavaScript engine implementers, it seemed likely to contain tests for edge cases and requirements that changed over time. It is not uncommon for several implementers to be caught out by the same issue, so such tests are more likely to find bugs in other engines as well.

During our investigation, we discovered that our friends at Bocoup had a similar idea back in 2017, and they had created a python script to transform parts of the jstests suite to work within the test262 test suite, which we gratefully reused. However, some issues quickly came to light: it had been written for Python 2, and its unit test had not been enabled in continuous integration, so it needed some work to be of use in 2024. Once that was done, we discovered that the script had been used as part of a mostly manual process to submit specifically curated tests, and it could not cope with the diversity and complexity of the whole test suite without significant up-front work to put the tests into a shape that it could deal with. We suspect that this is part of the reason that their project did not bear all that much fruit in the end, and decided that our approach needed to maximize the number of shared tests for the effort expended.

After getting the script into shape, we set to work on a batch export of the non262 test suite. In order to verify the quality of the exported tests, we ran the tests against both SpiderMonkey and V8 using the upstream tooling. This process revealed several issues—some anticipated, others more unexpected. For example, a large number of tests used helper functions that were only available in SpiderMonkey, which we either needed to provide in test262 or automatically translate to an existing helper function. Other APIs, for example those testing specific details of the garbage collection implementation, could not be reproduced at all.

Besides that, a fair number of tests relied on specifics of the SpiderMonkey implementation that were not guaranteed by the standard – such as testing exact values of the error.message property or the handling of particular bit patterns in typed arrays. Some tests also turned out not to be testing what they were meant to test or covered a previous version of the specification or a not-yet-finalized proposal. Depending on the situation, we improved the tests and added APIs to make it possible to share them, or we skipped exporting the offending tests.

We also discovered some issues in the test262-harness tool that we used to run the tests upstream, notably around tests using JavaScript modules and the [[IsHTMLDDA]] internal slot (used to specify the web compatibility requirements around document.all). Also, it turned out that the mechanism to include helper libraries into tests was not fully specified, which led to some combinations of helpers working in some test runners but not others. We have started the process of clarifying the documentation on this point.

As part of this project so far, we landed about 1600 new tests into test262 and filed 10 bugs (some covering failures in multiple tests), of which half have been fixed by the engine maintainers. Several failures also were the result of bugs that had been filed earlier but hadn’t been fixed yet. Also, Mozilla has decided to remove the exported tests from their own repository, and to use the centralised copies instead.

In terms of future work for this particular project, we’re expecting to investigate if we can share some more of the tests that are currently skipped. Separately we’re interested in looking into a fully automated two-way synchronization system; this would significantly ease the cooperation of engine developers and project maintainers on a unified test suite, though the engineering effort would be commensurate. We’ll also continue investigating if we can identify other test suites that can benefit from a similar treatment.

We would like to thank Bloomberg for sponsoring this work and the Mozilla community, in particular Dan Minor, for their indispensable help and code reviews.

February 18, 2025 12:00 AM

February 17, 2025

Max Ihlenfeldt

Storing and viewing local test results in ResultDB

Get the same nice graphical view of your local test results as for CQ tests!

February 17, 2025 12:00 AM

February 16, 2025

Qiuyi Zhang (Joyee)

Building Node.js on Windows with clang-cl

Recently Node.js started to support building with clang-cl on Windows. I happened to have the chance to try it out this week and while it still needs some fixups in my case, it’s mostly working very well now. Here are some notes about this

February 16, 2025 01:21 AM

February 14, 2025

Andy Wingo

tracepoints: gnarly but worth it

Hey all, quick post today to mention that I added tracing support to the Whippet GC library. If the support library for LTTng is available when Whippet is compiled, Whippet embedders can visualize the GC process. Like this!

Screenshot of perfetto showing a generational PCC trace

Click above for a full-scale screenshot of the Perfetto trace explorer processing the nboyer microbenchmark with the parallel copying collector on a 2.5x heap. Of course no image will have all the information; the nice thing about trace visualizers like this is that you can zoom in to sub-microsecond spans to see exactly what is happening, and have nice mouseovers and clicky-clickies. Fun times!

on adding tracepoints

Adding tracepoints to a library is not too hard in the end. You need to pull in the lttng-ust library, which has a pkg-config file. You need to declare your tracepoints in one of your header files. Then you have a minimal C file that includes the header, to generate the code needed to emit tracepoints.

Annoyingly, this header file you write needs to be in one of the -I directories; it can’t be just in the source directory, because lttng includes it seven times (!!) using computed includes (!!!), and because the LTTng file header that does all the computed including isn’t in your directory, GCC won’t find it. It’s pretty ugly. Ugliest part, I would say. But, grit your teeth, because it’s worth it.

Finally you pepper your source with tracepoints, which probably you wrap in some macro so that you don’t have to require LTTng, and so you can switch to other tracepoint libraries, and so on.
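For concreteness, here is a minimal sketch of that shape using the classic lttng-ust macros. The provider and event names are invented for illustration; they are not Whippet's actual tracepoints.

/* gc-tp.h -- tracepoint declarations; must live in one of the -I directories. */
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER my_gc

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "gc-tp.h"

#if !defined(GC_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define GC_TP_H

#include <lttng/tracepoint.h>

TRACEPOINT_EVENT(
    my_gc,                                   /* provider */
    minor_gc_begin,                          /* event name */
    TP_ARGS(unsigned long, nursery_bytes),
    TP_FIELDS(
        ctf_integer(unsigned long, nursery_bytes, nursery_bytes)
    )
)

#endif /* GC_TP_H */

#include <lttng/tracepoint-event.h>

/* gc-tp.c -- the minimal C file that expands the probe code. */
#define TRACEPOINT_CREATE_PROBES
#define TRACEPOINT_DEFINE
#include "gc-tp.h"

/* At a call site, typically wrapped in your own macro so LTTng stays optional;
   the argument is whatever value you want to record for this event. */
tracepoint(my_gc, minor_gc_begin, nursery_size_in_bytes);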

using the thing

I wrote up a little guide for Whippet users about how to actually get traces. It’s not as easy as perf record, which I think is an error. Another ugly point. Buck up, though, you are so close to graphs!

By which I mean, so close to having to write a Python script to make graphs! Because LTTng writes its logs in so-called Common Trace Format, which as you might guess is not very common. I have a colleague who swears by it, that for him it is the lowest-overhead system, and indeed in my case it has no measurable overhead when trace data is not being collected, but his group uses custom scripts to convert the CTF data that he collects to... GTKWave (?!?!?!!).

In my case I wanted to use Perfetto’s UI, so I found a script to convert from CTF to the JSON-based tracing format that Chrome profiling used to use. But, it uses an old version of Babeltrace that wasn’t available on my system, so I had to write a new script (!!?!?!?!!), probably the most Python I have written in the last 20 years.

is it worth it?

Yes. God I love blinkenlights. As long as it’s low-maintenance going forward, I am satisfied with the tradeoffs. Even the fact that I had to write a script to process the logs isn’t so bad, because it let me get nice nested events, which most stock tracing tools don’t allow you to do.

I fixed a small performance bug because of it – a worker thread was spinning waiting for a pool to terminate instead of helping out. A win, and one that never would have shown up on a sampling profiler too. I suspect that as I add more tracepoints, more bugs will be found and fixed.

fin

I think the only thing that would be better is if tracepoints were a part of Linux system ABIs – that there would be header files to emit tracepoint metadata in all binaries, that you wouldn’t have to link to any library, and the actual tracing tools would be intermediated by that ABI in such a way that you wouldn’t depend on those tools at build-time or distribution-time. But until then, I will take what I can get. Happy tracing!

by Andy Wingo at February 14, 2025 01:32 PM

February 12, 2025

Max Ihlenfeldt

Implementing fallback tab dragging for Wayland in Chromium

Fallback tab dragging shipped in Chromium 133! Let's take a look at the problem it solves, how it works, and why it took so long to ship.

February 12, 2025 12:00 AM

February 11, 2025

Igalia WebKit Team

WebKit Igalia Periodical #13

Update on what happened in WebKit in the week from February 3 to February 11.

Cross-Port 🐱

Fixed an assertion crash in the remote Web Inspector when its resources contain a UTF-8 “non-breaking space” character.

Multimedia 🎥

GStreamer-based multimedia support for WebKit, including (but not limited to) playback, capture, WebAudio, WebCodecs, and WebRTC.

Media playback now supports choosing the output audio device on a per element basis, using the setSinkId() API. This also added the support needed for enumerating audio outputs, which is needed by Web applications to obtain the identifiers of the available devices. Typical usage includes allowing the user to choose the audio output used in WebRTC-based conference calls.

For now feature flags ExposeSpeakers, ExposeSpeakersWithoutMicrophone, and PerElementSpeakerSelection need to be enabled for testing.

Set the playbin flags needed to properly use OpenMAX on the Raspberry Pi.

Graphics 🖼️

Landed a change that adds a visualization for damage rectangles, controlled by the WEBKIT_SHOW_DAMAGE environment variable. This highlights areas damaged during rendering of every frame—as long as damage propagation is enabled.

Screenshot of a web page showing the “Poster Circle” CSS transforms demo, with red rectangles overlapping the areas that have been last re-rendered

Releases 📦️

Stable releases of WebKitGTK 2.46.6 and WPE WebKit 2.46.6 are now available. These come along with the first security advisory of the year (WSA-2025-0001: GTK, WPE): they contain mainly security fixes, and everybody is advised to update.

The unstable release train continues as well, with WebKitGTK 2.47.4 and WPE WebKit 2.47.4 available for testing. These are snapshots of the current development status, and while they are expected to work, there may be rough edges—if you encounter any issue, reports at the WebKit Bugzilla are always welcome.

The recently released libwpe 1.16.1 accidentally introduced an ABI break, which has been corrected in libwpe 1.16.2. There are no other changes, and the latter should be preferred.

That’s all for this week!

by Igalia WebKit Team at February 11, 2025 10:43 AM

February 10, 2025

Andy Wingo

whippet at fosdem

Hey all, the video of my FOSDEM talk on Whippet is up:

Slides here, if that’s your thing.

I ended the talk with some puzzling results around generational collection, which prompted yesterday’s post. I don’t have a firm answer yet. Or rather, perhaps for the splay benchmark, it is to be expected that a generational GC is not great; but there are other benchmarks that also show suboptimal throughput in generational configurations. Surely it is some tuning issue; I’ll be looking into it.

Happy hacking!

by Andy Wingo at February 10, 2025 09:01 PM

Manuel Rego

Solving Cross-root ARIA Issues in Shadow DOM

This blog post is to announce that Igalia has gotten a grant from the NLnet Foundation to work on solving cross-root ARIA issues in Shadow DOM. My colleague Alice Boxhall, who has been working on sorting out this issue for several years, is doing the work related to this grant, with support from other Igalians.

The Problem #

Shadow DOM has some issues that prevent it from being used in some situations if you want to build an accessible application. This has been identified by the Web Components Working Group as one of the top priority issues that need to be sorted out.

Briefly speaking, there are two main problems when you want to reference elements in ARIA attributes across shadow root boundaries.

The first issue is that you cannot reference things outside the Shadow DOM. Imagine you have a custom element (#customButton) which contains a native button in its Shadow DOM, and you want to associate the internal button with a label (#label) which is outside, in the light DOM.

<label id="label">Label</label>
<custom-button id="customButton">
<template shadowrootmode="open">
<div>foo</div>
<button aria-labelledby="label">Button</button>
<div>bar</div>
</template>
</custom-button>

And the second problem is that you cannot reference things inside a Shadow DOM from the outside. Imagine the opposite situation where you have a custom label (#customLabel) with a native label in its Shadow DOM that you want to reference from a button (#button) in the light DOM.

<custom-label id="customLabel">
<template shadowrootmode="open">
<div>foo</div>
<label>Label</label>
<div>bar</div>
</template>
</custom-label>
<button id="button" aria-labelledby="customLabel">Button</button>

This is a huge issue for web components: if they want to provide an accessible experience to users, they cannot use Shadow DOM as they would like to for its encapsulation properties. For that reason many web components libraries don’t yet use Shadow DOM and have to rely on workarounds or custom polyfills.

If you want to know more on the problem, Alice goes deep on the topic in her blog post How Shadow DOM and accessibility are in conflict.

Prior Art #

The Accessibility Object Model (AOM) effort was started several years ago with a wider scope: it aimed to solve many different things, including the problems described in this blog post. At that time Alice was at Google and Alex Surkov was at Mozilla; both were part of this effort. Coincidentally, they are now at Igalia, where together with Joanmarie Diggs and Valerie Young they make up a dream team of accessibility experts in our company.

Even though the full problem hasn’t been sorted out yet, there has been some progress with the Element Reflection feature, which allows ARIA relationship attributes to be reflected as element references. With this, authors can specify them without needing to assign globally unique ID attributes to each element. This feature has been implemented in Chromium and WebKit by Igalia. So instead of doing something like:

<button id="button" aria-describedby="description">Button</button>
<div id="description">Description of the button.</div>

You could specify it like:

<button id="button">Button</button>
<div id="description">Description of the button.</div>
<script>
button.ariaDescribedByElements = [description];
</script>

Coming back to Shadow DOM, this feature also enables authors to specify ARIA relationships pointing to things outside the Shadow DOM (the first kind of problem described in the previous section); however, it doesn’t allow referencing elements inside another Shadow DOM (the second problem). Let’s see an example of how this solves the first issue:

<label id="label">Label</label>
<custom-button id="customButton"></custom-button>
<script>
const shadowRoot = customButton.attachShadow({mode: "open"});

const foo = document.createElement("div");
foo.textContent = "foo";
shadowRoot.appendChild(foo);

const button = document.createElement("button");
button.textContent = "Button";
/* Here is where we reference the outer label from the button inside the Shadow DOM. */
button.ariaLabelledByElements = [label];
shadowRoot.appendChild(button);

const bar = document.createElement("div");
bar.textContent = "bar";
shadowRoot.appendChild(bar);
</script>

Apart from Element Reflection, which only solves part of the issues, there have been other ideas about how to solve these problems: initially the Cross-root ARIA Delegation proposal by Leo Balter at Salesforce, then a different one called Cross-root ARIA Reflection by Westbrook Johnson at Adobe, and finally the Reference Target for Cross-root ARIA proposal by Ben Howell at Microsoft.

Again if you want to learn more about the different nuances of the previous proposals you can revisit Alice’s blog post.

The Proposed Solution: Reference Target #

At this point the most promising proposal is Reference Target. It allows web authors to use Shadow DOM without breaking the accessibility of their web applications. The proposal is still in flux and is currently being prototyped in Chromium and WebKit. As an example, this is the kind of API shape that would solve the second problem described in the initial section, where we reference a label (#actualLabel) inside the Shadow DOM from a button (#button) in the light DOM.

<custom-label id="customLabel">
<template shadowrootmode="open"
shadowrootreferencetarget="actualLabel">

<div>foo</div>
<label id="actualLabel">Label</label>
<div>bar</div>
</template>
</custom-label>
<button id="button" aria-labelledby="customLabel">Button</button>

The Grant #

As part of this grant we’ll work on all the process to get the Reference Target proposal ready to be shipped in the web rendering engines. Some of the tasks that will be done during this project include work in different fronts:

  • Proposal: Help with the work on the proposal identifying issues, missing bits, design solutions, providing improvements, keeping it up to date as the project evolves.
  • Specification: Write the specification text, discuss and review it with the appropriate working groups, improve it based on gathered feedback and implementation experience, and resolve issues identified in the standards bodies.
  • Implementation: Prototype implementation in WebKit to verify the proposal, upstream changes to WebKit, fix bugs on the implementation, adapt it to spec modifications.
  • Testing: Analyze current WPT tests, add new ones to increase coverage, validate implementations, keep them up-to-date as things evolve.
  • Outreach: Blog posts explaining the evolution of the project and participation in events with other standards folks to have the proper discussions to move the proposal and specification forward.

NLnet Foundation logo

We’re really grateful that NLnet has trusted us with this project, and we really hope it will allow us to fix an outstanding accessibility issue in the web platform that has been around for far too long. At the same time it’s a bit sad that the European Union, through the NGI funds, is the one sponsoring this project, when it will have a very important impact for several big fish that are part of the Web Components WG.

If you want to follow the evolution of this project, I’d suggest following Alice’s blog, where she’ll be updating us on the progress of the different tasks.

February 10, 2025 12:00 AM

February 09, 2025

Andy Wingo

baffled by generational garbage collection

Usually in this space I like to share interesting things that I find out; you might call it a research-epistle-publish loop. Today, though, I come not with answers, but with questions, or rather one question, but with fractal surface area: what is the value proposition of generational garbage collection?

hypothesis

The conventional wisdom is encapsulated in a 2004 Blackburn, Cheng, and McKinley paper, “Myths and Realities: The Performance Impact of Garbage Collection”, which compares whole-heap mark-sweep and copying collectors to their generational counterparts, using the Jikes RVM as a test harness. (It also examines a generational reference-counting collector, which is an interesting predecessor to the 2022 LXR work by Zhao, Blackburn, and McKinley.)

The paper finds that generational collectors spend less time than their whole-heap counterparts for a given task. This is mainly due to less time spent collecting, because generational collectors avoid tracing/copying work for older objects that mostly stay in the same place in the live object graph.

The paper also notes an improvement for mutator time under generational GC, but only for the generational mark-sweep collector, which it attributes to the locality and allocation speed benefit of bump-pointer allocation in the nursery. However for copying collectors, generational GC tends to slow down the mutator, probably because of the write barrier, but in the end lower collector times still led to lower total times.

So, I expected generational collectors to always exhibit lower wall-clock times than whole-heap collectors.

test workbench

In whippet, I have a garbage collector with an abstract API that specializes at compile-time to the mutator’s object and root-set representation and to the collector’s allocator, write barrier, and other interfaces. I embed it in whiffle, a simple Scheme-to-C compiler that can run some small multi-threaded benchmarks, for example the classic Gabriel benchmarks. We can then test those benchmarks against different collectors, mutator (thread) counts, and heap sizes. I expect that the generational parallel copying collector takes less time than the whole-heap parallel copying collector.

results?

So, I ran some benchmarks. Take the splay-tree benchmark, derived from Octane’s splay.js. I have a port to Scheme, and the results are... not good!

In this graph the “pcc” series is the whole-heap copying collector, and “generational-pcc” is the generational counterpart, with a nursery sized such that after each collection, its size is 2 MB times the number of active mutator threads in the last cycle. So, for this test with eight threads, on my 8-core Ryzen 7 7840U laptop, the nursery is 16MB including the copy reserve, which happens to be the same size as the L3 on this CPU. New objects are kept in the nursery one cycle before being promoted to the old generation.

There are also results for “mmc” and “generational-mmc” collectors, which use an Immix-derived algorithm that allows for bump-pointer allocation but which doesn’t require a copy reserve. There, the generational collectors use a sticky mark-bit algorithm, which has very different performance characteristics as promotion is in-place, and the nursery is as large as the available heap size.

The salient point is that at all heap sizes, and for these two very different configurations (mmc and pcc), generational collection takes more time than whole-heap collection. It’s not just the splay benchmark either; I see the same thing for the very different nboyer benchmark. What is the deal?

I am honestly quite perplexed by this state of affairs. I wish I had a narrative to tie this together, but in lieu of that, voici some propositions and observations.

“generational collection is good because bump-pointer allocation”

Sometimes people say that the reason generational collection is good is because you get bump-pointer allocation, which has better locality and allocation speed. This is misattribution: it’s bump-pointer allocators that have these benefits. You can have them in whole-heap copying collectors, or you can have them in whole-heap mark-compact or immix collectors that bump-pointer allocate into the holes. Or, true, you can have them in generational collectors with a copying nursery but a freelist-based mark-sweep allocator. But also you can have generational collectors without bump-pointer allocation, for free-list sticky-mark-bit collectors. To simplify this panorama to “generational collectors have good allocators” is incorrect.
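To make the distinction concrete, here is a minimal sketch of a bump-pointer allocation fast path: allocation is a pointer increment plus a limit check, which is where the locality and speed benefit comes from, whether or not the collector is generational. This is a generic illustration, not Whippet's actual allocator:

#include <stddef.h>
#include <stdint.h>

struct bump_region {
  uintptr_t hp;     /* next free address */
  uintptr_t limit;  /* end of the region */
};

static void *bump_alloc(struct bump_region *r, size_t bytes) {
  size_t aligned = (bytes + 7) & ~(size_t)7;  /* keep 8-byte alignment */
  if (r->hp + aligned > r->limit)
    return NULL;                              /* caller refills the region or triggers GC */
  void *obj = (void *)r->hp;
  r->hp += aligned;
  return obj;
}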

“generational collection lowers pause times”

It’s true, generational GC does lower median pause times:

But because a major collection is usually slightly more work under generational GC than in a whole-heap system, because of e.g. the need to reset remembered sets, the maximum pauses are just as big and even a little bigger:

I am not even sure that it is meaningful to compare median pause times between generational and non-generational collectors, given that the former perform possibly orders of magnitude more collections than the latter.

Doing fewer whole-heap traces is good, though, and in the ideal case, the less frequent major traces under generational collectors allows time for concurrent tracing, which is the true mitigation for long pause times.

is it whiffle?

Could it be that the test harness I am using is in some way unrepresentative? I don’t have more than one test harness for Whippet yet. I will start work on a second Whippet embedder within the next few weeks, so perhaps we will have an answer there. Still, there is ample time spent in GC pauses in these benchmarks, so surely as a GC workload Whiffle has some utility.

One reason that Whiffle might be unrepresentative is that it is an ahead-of-time compiler, whereas nursery addresses are assigned at run-time. Whippet exposes the necessary information to allow a just-in-time compiler to specialize write barriers, for example the inline check that the field being mutated is not in the nursery, and an AOT compiler can’t encode this as an immediate. But it seems a small detail.

Also, Whiffle doesn’t do much compiler-side work to elide write barriers. Could the cost of write barriers be over-represented in Whiffle, relative to a production language run-time?

Relatedly, Whiffle is just a baseline compiler. It does some partial evaluation but no CFG-level optimization, no contification, no nice closure conversion, no specialization, and so on: is it not representative because it is not an optimizing compiler?

is it something about the nursery size?

How big should the nursery be? I have no idea.

As a thought experiment, consider the case of a 1 kilobyte nursery. It is probably too small to allow the time for objects to die young, so the survival rate at each minor collection would be high. Above a certain survival rate, generational GC is probably a lose, because your program violates the weak generational hypothesis: it introduces a needless copy for all survivors, and a synchronization for each minor GC.

On the other hand, a 1 GB nursery is probably not great either. It is plenty large enough to allow objects to die young, but the number of survivor objects in a space that large is such that pause times would not be very low, which is one of the things you would like in generational GC. Also, you lose out on locality: a significant fraction of the objects you traverse are probably out of cache and might even incur TLB misses.

So there is probably a happy medium somewhere. My instinct is that for a copying nursery, you want to make it about as big as L3 cache, which on my 8-core laptop is 16 megabytes. Systems are different sizes though; in Whippet my current heuristic is to reserve 2 MB of nursery per core that was active in the previous cycle, so if only 4 threads are allocating, you would have a 8 MB nursery. Is this good? I don’t know.

is it something about the benchmarks?

I don’t have a very large set of benchmarks that run on Whiffle, and they might not be representative. I mean, they are microbenchmarks.

One question I had was about heap sizes. If a benchmark’s maximum heap size fits in L3, which is the case for some of them, then probably generational GC is a wash, because whole-heap collection stays in cache. When I am looking at benchmarks that evaluate generational GC, I make sure to choose those that exceed L3 size by a good factor, for example the 8-mutator splay benchmark in which minimum heap size peaks at 300 MB, or the 8-mutator nboyer-5 which peaks at 1.6 GB.

But then, should nursery size scale with total heap size? I don’t know!

Incidentally, the way that I scale these benchmarks to multiple mutators is a bit odd: they are serial benchmarks, and I just run some number of threads at a time, and scale the heap size accordingly, assuming that the minimum size when there are 4 threads is four times the minimum size when there is just one thread. However, multithreaded programs are unreliable, in the sense that there is no heap size under which they fail and above which they succeed; I quote:

"Consider 10 threads each of which has a local object graph that is usually 10 MB but briefly 100MB when calculating: usually when GC happens, total live object size is 10×10MB=100MB, but sometimes as much as 1 GB; there is a minimum heap size for which the program sometimes works, but also a minimum heap size at which it always works."

is it the write barrier?

A generational collector partitions objects into old and new sets, and a minor collection starts by visiting all old-to-new edges, called the “remembered set”. As the program runs, mutations to old objects might introduce new old-to-new edges. To maintain the remembered set in a generational collector, the mutator invokes write barriers: little bits of code that run when you mutate a field in an object. This is overhead relative to non-generational configurations, where the mutator doesn’t have to invoke collector code when it sets fields.
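As a rough sketch of that mechanism, here is a generic edge-logging barrier in C; the helpers are hypothetical and this is not Whippet's actual barrier or remembered-set interface:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical nursery bounds and remembered set; a real collector tracks
   these per heap and per mutator thread. */
static uintptr_t nursery_start, nursery_end;

static bool in_nursery(void *p) {
  uintptr_t a = (uintptr_t)p;
  return a >= nursery_start && a < nursery_end;
}

static void remembered_set_add_edge(void **field) {
  /* Append the field's address to a buffer that the next minor GC scans. */
  (void)field;
}

/* The mutator runs this on every pointer store: the store itself, plus
   logging of old-to-new edges so minor collections can use them as roots. */
static void write_field(void *obj, void **field, void *new_value) {
  *field = new_value;
  if (!in_nursery(obj) && new_value && in_nursery(new_value))
    remembered_set_add_edge(field);
}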

So, could it be that Whippet’s write barriers or remembered set are somehow so inefficient that my tests are unrepresentative of the state of the art?

I used to use card-marking barriers, but I started to suspect they cause too much overhead during minor GC and introduced too much cache contention. I switched to precise field-logging barriers some months back for Whippet’s Immix-derived space, and we use the same kind of barrier in the generational copying (pcc) collector. I think this is state of the art. I need to see if I can find a configuration that allows me to measure the overhead of these barriers, independently of other components of a generational collector.

is it something about the generational mechanism?

A few months ago, my only generational collector used the sticky mark-bit algorithm, which is an unconventional configuration: its nursery is not contiguous, non-moving, and can be as large as the heap. This is part of the reason that I implemented generational support for the parallel copying collector, to have a different and more conventional collector to compare against. But generational collection loses on some of these benchmarks in both places!

is it something about collecting more often?

On one benchmark which repeatedly constructs some trees and then verifies them, I was seeing terrible results for generational GC, which I realized were because of cooperative safepoints: generational GC collects more often, so it requires that all threads reach safepoints more often, and the non-allocating verification phase wasn’t emitting any safepoints. I had to change the compiler to emit safepoints at regular intervals (in my case, on function entry), and it sped up the generational collector by a significant amount.

This is one instance of a general observation, which is that any work that doesn’t depend on survivor size in a GC pause is more expensive with a generational collector, which runs more collections. Synchronization can be a cost. I had one bug in which tracing ephemerons did work proportional to the size of the whole heap, instead of the nursery; I had to specifically add generational support for the way Whippet deals with ephemerons during a collection to reduce this cost.

is it something about collection frequency?

Looking deeper at the data, I have partial answers for the splay benchmark, and they are annoying :)

Splay doesn’t actually allocate all that much garbage. At a 2.5x heap, the stock parallel MMC collector (in-place, sticky mark bit) collects... one time. That’s all. Same for the generational MMC collector, because the first collection is always major. So at 2.5x we would expect the generational collector to be slightly slower. The benchmark is simply not very good – or perhaps the most generous interpretation is that it represents tasks that allocate 40 MB or so of long-lived data and not much garbage on top.

Also at 2.5x heap, the whole-heap copying collector runs 9 times, and the generational copying collector does 293 minor collections and... 9 major collections. We are not reducing the number of major GCs. It means either the nursery is too small, so objects aren’t dying young when they could, or the benchmark itself doesn’t conform to the weak generational hypothesis.

At a 1.5x heap, the copying collector doesn’t have enough space to run. For MMC, the non-generational variant collects 7 times, and generational MMC times out. Timing out indicates a bug, I think. Annoying!

I tend to think that if I get results and there were fewer than, like, 5 major collections for a whole-heap collector, that indicates that the benchmark is probably inapplicable at that heap size, and I should somehow surface these anomalies in my analysis scripts.

collecting more often redux

Doing a similar exercise for nboyer at 2.5x heap with 8 threads (4GB for 1.6GB live data), I see that pcc did 20 major collections, whereas generational pcc lowered that to 8 major collections and 3471 minor collections. Could it be that there are still too many fixed costs associated with synchronizing for global stop-the-world minor collections? I am going to have to add some fine-grained tracing to find out.

conclusion?

I just don’t know! I want to believe that generational collection was an out-and-out win, but I haven’t yet been able to prove it is true.

I do have some homework to do. I need to find a way to test the overhead of my write barrier – probably using the MMC collector and making it only do major collections. I need to fix generational-mmc for splay and a 1.5x heap. And I need to do some fine-grained performance analysis for minor collections in large heaps.

Enough for today. Feedback / reactions very welcome. Thanks for reading and happy hacking!

by Andy Wingo at February 09, 2025 12:46 PM

February 06, 2025

Brian Kardell

Link in Bio

Link in Bio

Embracing an IndieWeb thing...

Over the years, I've been there as things have come and gone on the interwebs. I had accounts on AIM, MySpace, Blogger, FaceBook, WordPress, Google Plus, Twitter, Instagram, and plenty more. On each one, I wind up with a profile where I want to link people to other places I am online - and those places don't always make it easy to do that. So, something short and memorable that you could even type if you had to is ideal - like a handle: @briankardell or @bkardell or, in the end: bkardell.com is pretty great.

Back in 2016, some folks felt the same frustration and instead created this Linktree thing. But... It's just like, a fremium thing that gives you the most basic single page website ever? Yes, and today they have 50 million users and are valued at $1.3 billion - that's a value of $26/user. I mean, I found that amazing.

But, here's the thing: After a while I realized that there's something to it. Not the business model, specifically, but the link in bio idea. The link in bio is generally not a great website on its own, but it's a page where people can pretty quickly navigate to the thing they're looking for without other noise or fluff. Which is something that a home page often isn't. So, I've learned to really appreciate them, actually.

Back in about 2020, some IndieWeb folks began thinking about, criticizing and brainstorming around it too. They began writing about Link in Bio, and why it might be useful to have your own version. There are several examples of people doing something similar.

Recently a bunch of things smashed together in my mind. First, on ShopTalk, I heard Chris and Dave talking about "slash pages" (pages right off the root domain with well known names). Second, I've been working on social media plans - getting away from some platforms and moving to others and thinking about these problems. An IndieWeb style `/links` page that adds rel="me" links is a really nice, simple way to achieve a whole lot of things if we'd all adopt it - not the least of which is that it's an evergreen link, almost as simple as a handle, to where people can find you should you choose to leave...

So, starting with me, and Igalia, you can find our links at bkardell.com/links and igalia.com/links respectively.

A screenshot of Igalia's post with a sunset and birds flying away, linking to our /links page

February 06, 2025 05:00 AM

February 05, 2025

Alberto Garcia

Keeping your system-wide configuration files intact after updating SteamOS

Introduction

If you use SteamOS and you like to install third-party tools or modify the system-wide configuration some of your changes might be lost after an OS update. Read on for details on why this happens and what to do about it.


As you all know SteamOS uses an immutable root filesystem and users are not expected to modify it because all changes are lost after an OS update.

However this does not include configuration files: the /etc directory is not part of the root filesystem itself. Instead, it’s a writable overlay and all modifications are actually stored under /var (together with all the usual contents that go in that filesystem such as logs, cached data, etc).

/etc contains important data that is specific to that particular machine like the configuration of known network connections, the password of the main user and the SSH keys. This configuration needs to be kept after an OS update so the system can keep working as expected. However the update process also needs to make sure that other changes to /etc don’t conflict with whatever is available in the new version of the OS, and there have been issues due to some modifications unexpectedly persisting after a system update.

SteamOS 3.6 introduced a new mechanism to decide what to keep after an OS update, and the system now keeps a list of configuration files that are allowed to be kept in the new version. The idea is that only the modifications that are known to be important for the correct operation of the system are applied, and everything else is discarded1.

However, many users want to be able to keep additional configuration files after an OS update, either because the changes are important for them or because those files are needed for some third-party tool that they have installed. Fortunately the system provides a way to do that, and users (or developers of third-party tools) can add a configuration file to /etc/atomic-update.conf.d, listing the additional files that need to be kept.

There is an example in /etc/atomic-update.conf.d/example-additional-keep-list.conf that shows what this configuration looks like.

Sample configuration file for the SteamOS updater

Developers who are targeting SteamOS can also use this same method to make sure that their configuration files survive OS updates. As an example of an actual third-party project that makes use of this mechanism you can have a look at the DeterminateSystems Nix installer:

https://github.com/DeterminateSystems/nix-installer/blob/v0.34.0/src/planner/steam_deck.rs#L273

As usual, if you encounter issues with this or any other part of the system you can check the SteamOS issue tracker. Enjoy!


  1. A copy is actually kept under /etc/previous to give the user the chance to recover files if necessary, and up to five previous snapshots are kept under /var/lib/steamos-atomupd/etc_backup ↩

by berto at February 05, 2025 04:13 PM