Update on what happened in WebKit in the week from December 16 to December 25.
Right during the holiday season 🎄, the last WIP installment of the year comes packed with new releases, a couple of functions added to the public API, cleanups, better timer handling, and improvements to MathML and WebXR support.
Cross-Port 🐱
Landed support for font-size: math. Now
math-depth
can automatically control the font size inside <math> blocks, making
scripts and nested content smaller to improve readability and presentation.
Added webkit_context_menu_item_get_gaction_target() to obtain the GVariant
associated with a context menu item created from a GAction.
Similarly, webkit_context_menu_item_get_title() may be used to obtain
the title of a context menu item.
Improved timers by making some of
them use the timerfd API. This reduces
timer “lateness” (the amount of time elapsed between the configured trigger
time and the effective one), which in turn improves the perceived smoothness
of animations thanks to steadier frame delivery timings. Systems where the
timerfd_create and timerfd_settime functions are not available will
continue working as before.
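To make the notion of "lateness" concrete, here is a small, purely illustrative JavaScript sketch of measuring it from a page. The actual WebKit change happens in native code via timerfd; the values here are hypothetical:

```javascript
// Lateness: the delta between when a timer was asked to fire and when it
// actually fired. Timers never fire early, so lateness >= 0, and smaller
// lateness means steadier frame delivery when timers drive animations.
const delayMs = 25; // arbitrary example delay
const scheduledFor = performance.now() + delayMs;

setTimeout(() => {
  const lateness = performance.now() - scheduledFor;
  console.log(`fired ${lateness.toFixed(2)} ms late`);
}, delayMs);
```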
On the WebXR front, support was added
for XR_TRACKABLE_TYPE_DEPTH_ANDROID through the XR_ANDROID_trackables
extension, which allows reporting depth information for elements that take part
in hit testing.
Graphics 🖼️
Landed a change that implements
non-composited page rendering in the WPE port. This new mode is disabled by
default, and may be activated by disabling the AcceleratedCompositing runtime
preference. In that case, frames are rendered using a simplified code path
that does not involve the internal WebKit compositor, which may offer
better performance in some specific cases on constrained embedded devices.
Since version 2.10.2, the FreeType library can be built
with direct support for loading fonts in the
WOFF2 format. Until now, the WPE and GTK WebKit
ports used libwoff2 in an intermediate step
to convert those fonts on-the-fly before handing them to FreeType for
rendering. The CMake build system will now detect when FreeType supports WOFF2
directly and skip the conversion step.
This way, on systems that provide a suitable version of FreeType, libwoff2
will no longer be needed.
WPE WebKit 📟
WPE Platform API 🧩
New, modern platform API that supersedes usage of libwpe and WPE backends.
The legacy libwpe-based API can now be disabled at build
time by toggling the
ENABLE_WPE_LEGACY_API CMake option. This allows removal of unneeded code when
an application exclusively uses the new WPEPlatform API.
Adaptation of WPE WebKit targeting the Android operating system.
AHardwareBuffer
is now supported as backing for
accelerated graphics surfaces that can be shared across processes. This is the
last piece of the puzzle to use WPEPlatform on Android without involving
expensive operations to copy rendered frames back-and-forth between GPU and
system memory.
Releases 📦️
WebKitGTK 2.50.4 and
WPE WebKit 2.50.4 have
been released. These stable releases include a number of important patches for
security issues, and we urge users and distributors to update to this release
if they have not yet done so. An accompanying security advisory,
WSA-2025-0010, has been published
(GTK,
WPE).
This article is a continuation of the series on WPE performance considerations. While the previous article touched upon fairly low-level aspects of the DOM tree overhead,
this one focuses on more high-level problems related to managing the application’s workload over time. Similarly to before, the considerations and conclusions made in this blog post are strongly related to web applications
in the context of embedded devices, and hence the techniques presented should be used with extra care (and benchmarking) if one would like to apply those on desktop-class devices.
Typical web applications on embedded devices have their workloads distributed over time in various ways. In practice, however, the workload distributions can usually be fitted into one of the following categories:
1. Idle applications with occasional updates - applications that present static content and update it very infrequently. As an example, one can think of a static dashboard that switches
the displayed page every, say, 60 seconds, such as a departures/arrivals board at an airport.
2. Idle applications with frequent updates - applications that present static content yet update it frequently (or occasionally present some dynamic content, such as animations). In that case, one can imagine a similar
airport departures/arrivals board, yet with animated page scrolling happening quite frequently.
3. Active applications with occasional updates - applications that present some dynamic content (animations, multimedia, etc.), yet with major updates happening very rarely. An example is an application
playing a video along with some metadata about it, and switching to another video every few minutes.
4. Active applications with frequent updates - applications that present some dynamic content and change their surroundings quite often. In this case, one can think of a stock market dashboard continuously animating charts
and updating the presented real-time statistics very frequently.
Such workloads can be well demonstrated on charts plotting the browser’s CPU usage over time:
As long as the peak workload (due to updates) is small, no negative effects are perceived by the end user. However, when the peak workload is significant, some negative effects may start getting noticeable.
In the case of applications from groups (1) and (2) above, a significant peak workload may not be a problem at all. As long as there are no continuous visual changes and no interaction is allowed during updates, the end user
cannot notice that the browser was unresponsive or missed some frames for a period of time. In such cases, the application designer does not need to worry much about the workload.
In other cases, especially those involving applications from groups (3) and (4), a significant peak workload may lead to visual stuttering, as any processing that keeps the browser busy for longer than 16.6 milliseconds (one frame at 60 FPS)
will lead to dropped frames. In such cases, the workload has to be managed so that the peaks are reduced, either by optimizing them or by distributing them over time.
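The "distributing over time" idea can be sketched in a few lines of JavaScript. The helper below (processInChunks is a hypothetical name, not part of any API) splits a heavy batch into bounded chunks and yields between them; in a page one would yield with requestAnimationFrame or requestIdleCallback, while setTimeout keeps the sketch runnable anywhere:

```javascript
// Process a large batch in bounded chunks, yielding to the event loop
// between chunks so no single "frame" exceeds its time budget.
function processInChunks(items, chunkSize, handle, done) {
  let i = 0;
  (function step() {
    const end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) handle(items[i]); // bounded work per frame
    if (i < items.length) setTimeout(step, 0); // yield before continuing
    else done();
  })();
}

const seen = [];
processInChunks([1, 2, 3, 4, 5], 2, (x) => seen.push(x), () =>
  console.log("processed", seen.length, "items")); // eventually prints "processed 5 items"
```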
The first step to addressing peak workload is usually optimization. The modern web platform provides a full variety of tools to optimize all the stages of web application processing done by the browser. Optimization is usually a
two-step cycle: measuring to find the bottlenecks, then fixing them. In the process, the usual improvements involve:
using CSS containment,
using shadow DOM,
promoting certain parts of the DOM to layers and manipulating them with transforms,
parallelizing the work with workers/worklets,
using the visibility CSS property to separate painting from layout,
optimizing the application itself (JavaScript code, the structure of the DOM, the architecture of the application).
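Two of the hints above can be sketched in a few lines of CSS (the selector is hypothetical): contain limits how far invalidations propagate outside the element's box, and will-change: transform promotes the element onto its own layer so it can be moved without triggering relayout or repaint of its surroundings:

```css
/* Hypothetical widget that updates independently of the rest of the page. */
.live-widget {
  /* Layout and paint effects stay inside this box, so updates elsewhere
     need not invalidate it (and vice versa). */
  contain: layout paint;
  /* Hint the engine to promote this element to its own layer, so moving
     it with transforms is compositing-only work. */
  will-change: transform;
}
```

As with every technique in this list, whether these hints actually help depends on the engine and the content, so they should be benchmarked on the target hardware.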
Unfortunately, in practice, it’s not uncommon that even very well optimized applications still have too much of a peak workload for the constrained embedded devices they’re used on. In such cases, the last resort solution is
pre-rendering. As long as it’s possible from the application business-logic perspective, having at least some web page content pre-rendered is very helpful in situations when workload has to be managed, as pre-rendering
allows the web application designer to choose the precise moment when the content should actually be rendered and how it should be done. With that, it’s possible to establish a proper trade-off between reduction in peak workload and
the amount of extra memory used for storing the pre-rendered contents.
Nowadays, the web platform provides at least a few widely adopted APIs that give applications the means to perform various kinds of pre-rendering. Also, due to the ways browsers are implemented, some APIs can be purposely misused
to provide pre-rendering techniques not necessarily supported by the specification. However, in the pursuit of good trade-offs, all the possibilities should be taken into account.
Before jumping into particular pre-rendering techniques, it's necessary to emphasize that the term pre-rendering in this article refers to the actual rendering being done earlier than it is visually presented. In that
sense, the resource is rasterized to some intermediate form when desired and then just composited by the browser engine's compositor later.
The most basic (and most limited) pre-rendering technique is rendering offline, i.e., before the browser even starts. In that case, the first limitation is that the content to be rendered must be known
beforehand. If that's the case, the rendering can be done in any way, and the result may be captured as, e.g., a raster or vector image (depending on the desired trade-off). However, the other problem is that such rendering usually falls outside
the scope of the given web application and thus requires extra effort. Moreover, depending on the amount of extra memory used, the longer web application startup (due to loading the pre-rendered resources), and the processing
power required to composite a given resource, it may not always be trivial to obtain the desired gains.
The first group of actual pre-rendering techniques happening during web application runtime is related to Canvas and
OffscreenCanvas. Those APIs are really useful as they offer great flexibility in terms of usage and are usually very performant.
However, the natural downside in this case is the lack of support for rendering the DOM inside the canvas. Moreover, canvas has very limited support for painting text, unlike the DOM, where
CSS has a significant number of text-related features. Interestingly, there's an ongoing proposal called HTML-in-Canvas that could resolve those limitations
to some degree. In fact, Blink already has a functioning prototype of it. However, it may take a while before the spec matures and is widely adopted by other browser engines.
When it comes to actual usage of canvas APIs for pre-rendering, the possibilities are numerous, and there are even more of them when combined with processing using workers.
The most popular ones are as follows:
rendering to an invisible canvas and showing it later,
rendering to a canvas detached from the DOM and attaching it later,
rendering to an invisible/detached canvas and producing an image out of it to be shown later,
rendering to an offscreen canvas and producing an image out of it to be shown later.
When combined with workers, some of the above techniques may be used in worker threads, with the rendered artifacts transferred to the main thread for presentation purposes. In that case, one must be careful with
the transfer itself, as some objects may get serialized, which is very costly. To avoid that, it's recommended to use transferable objects
and always perform proper benchmarking to make sure the transfer does not involve serialization in the particular case.
While the use of canvas APIs is usually very straightforward, one must be aware of two extra caveats.
First of all, in the case of many techniques mentioned above, there is no guarantee that the browser will perform actual rasterization at the given point in time. To ensure the rasterization is triggered, it’s usually
necessary to enforce it using e.g. a dummy readback (getImageData()).
Finally, one should be aware that the usage of canvas comes with some overhead. Therefore, creating many canvases, or creating them often, may lead to performance problems that could outweigh the gains from the
pre-rendering itself.
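Putting the detached-canvas technique and the forced readback together, a minimal sketch could look like the following (element ids and drawing are hypothetical; whether the readback is needed is engine-specific and should be benchmarked):

```html
<!-- Pre-render into a canvas that is not in the DOM yet, force
     rasterization with a dummy readback, and attach it later. -->
<div id="target"></div>
<script>
  const canvas = document.createElement("canvas"); // detached from the DOM
  canvas.width = canvas.height = 256;
  const ctx = canvas.getContext("2d");

  // Expensive drawing done ahead of time, e.g. during an idle frame.
  ctx.fillStyle = "#336699";
  ctx.fillRect(0, 0, 256, 256);
  ctx.fillStyle = "#ffffff";
  ctx.fillText("pre-rendered", 16, 128);

  // Dummy 1x1 readback: nudges the engine to rasterize now rather than
  // lazily at first paint.
  ctx.getImageData(0, 0, 1, 1);

  // Later, at the chosen moment, attaching is cheap compared to drawing.
  requestAnimationFrame(() =>
    document.getElementById("target").appendChild(canvas));
</script>
```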
The second group of pre-rendering techniques happening during web application runtime is limited to DOM rendering and comes out of a combination of purposeful spec misuse and tricking the browser engine into rasterizing
on demand. As one can imagine, this group of techniques is very much browser-engine-specific. Therefore, it should always be backed by proper benchmarking of all the use cases on the target browsers and target hardware.
In principle, all the techniques of this kind consist of three parts:
Forcing the content to be pre-rendered onto a separate layer backed by an actual buffer internally in the browser,
Tricking the browser's compositor into thinking that the layer needs to be rasterized right away,
Ensuring the layer won't be composited in the end.
When all the elements are combined together, the browser engine will allocate an internal buffer (e.g. texture) to back the given DOM fragment, it will process that fragment (style recalc, layout), and rasterize it right away. It will do so
as it will not have enough information to allow delaying the rasterization of the layer (as e.g. in case of display: none). Then, when the compositing time comes, the layer will turn out to be invisible in practice
due to e.g. being occluded, clipped, etc. This way, the rasterization will happen right away, but the results will remain invisible until a later time when the layer is made visible.
In practice, the following approaches can be used to trigger the above behavior:
for (1), the CSS properties such as will-change: transform, z-index, position: fixed, overflow: hidden etc. can be used depending on the browser engine,
for (2) and (3), the CSS properties such as opacity: 0, overflow: hidden, contain: strict etc. can be utilized, again, depending on the browser engine.
The scrolling trick
While the above CSS properties allow for various combinations, in case of WPE WebKit in the context of embedded devices (tested on NXP i.MX8M Plus), the combination that has proven to yield the best performance benefits turns
out to be a simple approach involving overflow: hidden and scrolling. The example of such an approach is explained below.
With the number of idle frames (if) set to 59, the idea is that the application does nothing significant for the 59 frames, and then every 60th frame it updates all the numbers in the table.
As one can imagine, on constrained embedded devices, such an approach leads to a very heavy workload during every 60th frame and hence to dropped frames and an unstable application FPS.
As long as the numbers are available earlier than every 60th frame, the above application is a perfect example where pre-rendering could be used to reduce the peak workload.
To simulate that, the 3 variants of the approach involving the scrolling trick were prepared for comparison with the above:
In the above demos, the idea is that each cell with a number becomes a scrollable container actually holding two numbers, one above the other. In that case, because overflow: hidden is set, only one of the numbers is visible while the
other is hidden, depending on the current scroll position:
With such a setup, it’s possible to update the invisible numbers during idle frames without the user noticing. Due to how WPE WebKit accelerates the scrolling, changing the invisible
numbers, in practice, triggers the layout and rendering right away. Moreover, the actual rasterization to the buffer backing the scrollable container happens immediately (depending on the tiling settings), and hence the high cost of layout
and text rasterization can be distributed. When the time comes, and all the numbers need to be updated, the scrollable containers can be just scrolled, which in that case turns out to be ~2 times faster than updating all the numbers in place.
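A minimal sketch of such a cell (class names, values, and timings all hypothetical) could look like this:

```html
<!-- The scrolling trick: a cell shows exactly one of two stacked values;
     the hidden one is updated during idle frames, then revealed by
     scrolling instead of re-rendering text in place. -->
<style>
  .cell { overflow: hidden; height: 1.5em; }
  .cell span { display: block; height: 1.5em; }
</style>
<div class="cell" id="price">
  <span>100.25</span> <!-- currently visible -->
  <span>100.75</span> <!-- hidden below the fold -->
</div>
<script>
  const cell = document.getElementById("price");
  // Refresh the hidden value early: layout and rasterization into the
  // container's backing buffer happen now, during an idle frame.
  cell.children[1].textContent = "101.10";
  // At update time, just scroll; this is much cheaper than updating
  // the visible text in place.
  setTimeout(() => { cell.scrollTop = cell.children[0].offsetHeight; }, 1000);
</script>
```

Note that with overflow: hidden the user cannot scroll the container, but setting scrollTop programmatically still works, which is exactly what the trick relies on.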
While the first sysprof trace shows very little processing during 11 idle frames and a big chunk of processing (21 ms) every 12th frame, the second sysprof trace shows how the distribution of load looks. In
that case, the amount of work during 11 idle frames is much bigger (yet manageable), but at the same time, the formerly big chunk of processing every 12th frame is reduced almost 2 times (to 11 ms). Therefore, the overall
frame rate in the application is much better.
Results
While the above improvement speaks for itself, it's worth summarizing it with the benchmarking results of the above demos obtained on the NXP i.MX8M Plus, presenting the application's average
frames per second (FPS):
Clearly, the positive impact of pre-rendering can be substantial depending on the conditions. In practice, when the rendered DOM fragment is more complex, the trick such as above can yield even better results.
However, due to how tiling works, the effect can be minimized if the content to be pre-rendered spans multiple tiles. In that case, the browser may defer rasterization until the tiles are actually needed. Therefore,
the above needs to be used with care and always with proper benchmarking.
As demonstrated in the above sections, when it comes to pre-rendering content to distribute a web application's workload over time, the web platform provides both official APIs and unofficial
means through purposeful misuse of APIs and exploitation of browser engine implementations. While this article hasn't covered all the possibilities available, the above should serve as a good initial read with some easy-to-try
solutions that may yield surprisingly good results. However, as some of the ideas mentioned above are very much browser-engine-specific, they should be used with extra care and with their limitations (lack of portability)
in mind.
As the web platform constantly evolves, the pool of pre-rendering techniques and tricks should keep evolving as well. Also, as more and more web applications are used on embedded devices, more pressure should be
put on the specifications, which should yield more APIs targeting low-end devices in the future. With that in mind, readers are encouraged to stay up to date with the latest specifications and
perhaps even get involved if some interesting use cases are worth introducing new APIs for.
Update on what happened in WebKit in the week from December 8 to December 15.
In this end-of-year special we have a new GMallocString helper that makes
management of malloc-based strings more efficient, development releases,
and a handful of advancements in JSC's implementation of Temporal, in
particular the PlainYearMonth class.
Cross-Port 🐱
Added a GMallocString class to WTF to adopt UTF-8 C strings and make them WebKit first-class citizens efficiently (no copies). Applied it in GStreamer code together with other improvements by using CStringView. Fixed two other bugs related to string management.
JavaScriptCore 🐟
The built-in JavaScript/ECMAScript engine for WebKit, also known as JSC or SquirrelFish.
Development releases of WebKitGTK 2.51.3
and WPE WebKit 2.51.3
are now available. These include a number of API additions and new features,
and are intended to allow interested parties to test those in advance, prior
to the next stable release series. As usual, bug reports are
welcome in Bugzilla.
Update on what happened in WebKit in the week from December 1 to December 8.
In this edition of the periodical we have further advancements on
the Temporal implementation, support for Vivante super-tiled format,
and an adaptation of the DMA-BUF formats code to the Android port.
Cross-Port 🐱
JavaScriptCore 🐟
Implemented the toString, toJSON, and toLocaleString methods for PlainYearMonth objects in JavaScriptCore's implementation of Temporal.
Graphics 🖼️
BitmapTexture and TextureMapper were prepared to handle textures where the logical size (e.g. 100×100) differs from the allocated size (e.g. 128×128) due to alignment requirements. This made it possible to add support for using memory-mapped GPU buffers in the Vivante super-tiled format available on i.MX platforms. Set WEBKIT_SKIA_USE_VIVANTE_SUPER_TILED_TILE_TEXTURES=1 to activate it at runtime.
WPE WebKit 📟
WPE Platform API 🧩
The WPEBufferDMABufFormats class has been renamed to WPEBufferFormats, as it can be used in situations where mechanisms other than DMA-BUF are used for buffer sharing; on Android targets, for example, AHardwareBuffer is used instead. The renaming also covered WPEBufferFormatsBuilder (formerly WPEBufferDMABufFormatsBuilder), as well as methods and signals in other classes that use these types. Other than the renames, there is no change in functionality.
Some years ago I mentioned some command line tools I used to analyze and find useful information in GStreamer logs. I've been using them consistently over the years, but some weeks ago I thought about unifying them into a single tool that could provide more flexibility in the mid term, and also as an excuse to unrust my Rust knowledge a bit. That's how I wrote Meow, a tool to make cat speak (that is, to provide meaningful information).
The idea is that you can cat a file through meow and apply the filters, like this:
which means: “select the lines that contain appsinknewsample (with case-insensitive matching) but don't contain V0 nor video (that is, by exclusion, only those that contain audio, probably because we've analyzed both and realized that we should focus on audio for our specific problem), highlight the different thread ids, only show lines with a timestamp lower than 21.46 seconds, and change strings like Source/WebCore/platform/graphics/gstreamer/mse/AppendPipeline.cpp to become just AppendPipeline.cpp”, to get an output as shown in this terminal screenshot:
Cool, isn’t it? After all, I’m convinced that the answer to any GStreamer bug is always hidden in the logs (or will be, as soon as I add “just a couple of log lines more, bro” ).
Currently, meow supports this set of manipulation commands:
Word filter and highlighting by regular expression (fc:REGEX, or just REGEX): Every expression will highlight its matched words in a different color.
Filtering without highlighting (fn:REGEX): Same as fc:, but without highlighting the matched string. This is useful when you want to match lines that contain two expressions (E1, E2) but the highlighting would pollute the line too much. In those cases you can use a regex such as E1.*E2 and then highlight the subexpressions manually later with an h: rule.
Negative filter (n:REGEX): Selects only the lines that don’t match the regex filter. No highlighting.
Highlight with no filter (h:REGEX): Doesn’t discard any line, just highlights the specified regex.
Substitution (s:/REGEX/REPLACE): Replaces one pattern with another. Any other delimiter character can be used instead of /, if that's more convenient for the user (for instance, using # when dealing with expressions that manipulate paths).
Time filter (ft:TIME-TIME): Assuming the lines start with a GStreamer log timestamp, this filter selects only the lines between the target start and end time. Any of the time arguments (or both) can be omitted, but the - delimiter must be present. Specifying multiple time filters will generate matches that fit on any of the time ranges, but overlapping ranges can trigger undefined behaviour.
Highlight threads (ht:): Assuming a GStreamer log, where the thread id appears as the third word in the line, highlights each thread in a different color.
The REGEX pattern is a regular expression. All matches are case insensitive. When used for substitutions, capture groups can be defined as (?<CAPTURE_NAME>REGEX).
The REPLACEment string is the text that the REGEX will be replaced by when doing substitutions. Text captured by a named capture group can be referred to by ${CAPTURE_NAME}.
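As an illustration of the same named-capture idea, here is a Node.js sketch of a path-shortening substitution like the one in the screenshot above. Note the syntax difference: JavaScript replacements use $<name>, while meow's replacements use ${name}:

```javascript
// Shorten a long source path down to its file name using a named capture
// group, the same idea as a meow s:#REGEX#REPLACE rule.
const line = "Source/WebCore/platform/graphics/gstreamer/mse/AppendPipeline.cpp:123";

// Match everything up to the last '/', capture the file name that follows.
const shortened = line.replace(/^.*\/(?<file>[^/:]+)/, "$<file>");
console.log(shortened); // → "AppendPipeline.cpp:123"
```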
The TIME pattern can be any sequence of digits, : or . characters. Typically it will be a GStreamer timestamp (e.g. 0:01:10.881123150), but it can actually be any other numerical sequence. Times are compared lexicographically, so it's important that all of them have the same string length.
The filtering algorithm has a custom set of priorities for operations, so that they get executed in an intuitive order. For instance, a sequence of filter matching expressions (fc:, fn:) will have the same priority (that is, any of them will let a text line pass if it matches, not forbidding any of the lines already allowed by sibling expressions), while a negative filter will only be applied on the results left by the sequence of filters before it. Substitutions will be applied at their specific position (not before or after), and will therefore modify the line in a way that can alter the matching of subsequent filters. In general, the user doesn’t have to worry about any of this, because the rules are designed to generate the result that you would expect.
Now some practical examples:
Example 1: Select lines with the word “one”, or the word “orange”, or a number, highlighting each pattern in a different color except the number, which will have no color:
$ cat file.txt | meow one fc:orange 'fn:[0-9][0-9]*'
000 one small orange
005 one big orange
Example 2: Assuming a pictures filename listing, select filenames not ending in “jpg” nor in “jpeg”, and rename the filename to “.bak”, preserving the extension at the end:
Example 3: Only print the log lines with times between 0:00:24.787450146 and 0:00:24.790741865 or those at 0:00:30.492576587 or after, and highlight every thread in a different color:
This is only the beginning. I have great ideas for this new tool (as time allows), such as support for parentheses (so that expressions can be grouped), or call stack indentation on logs generated by tracers, similar to what Alicia's gst-log-indent-tracers tool does. I might also predefine some common expressions to use in regular expressions, such as ones to match paths (so that users don't have to think about them and reinvent the wheel every time). Anyway, these are only ideas. Only time and hyperfocus slots will tell…
Update on what happened in WebKit in the week from November 24 to December 1.
The main highlights for this week are the completion of `PlainMonthDay`
in Temporal, moving networking access for GstWebRTC to the WebProcess,
and Xbox Cloud Gaming now working in the GTK and WPE ports.
Cross-Port 🐱
Multimedia 🎥
GStreamer-based multimedia support for WebKit, including (but not limited to)
playback, capture, WebAudio, WebCodecs, and WebRTC.
Support for remote inbound RTP statistics was improved in
303671@main: we now properly report the
framesPerSecond and totalDecodeTime metrics. Those fields are used in the
Xbox Cloud Gaming service to show live stats about the connection and video
decoder performance in an overlay.
The GstWebRTC backend now relies on
librice for its
ICE.
The Sans-IO architecture of librice allows us to keep the WebProcess sandboxed
and to route WebRTC-related UDP and (eventually) TCP packets using the
NetworkProcess. This work landed in
303623@main. The GNOME SDK should
also soon ship
librice.
Support for seeking in looping videos was fixed in
303539@main.
JavaScriptCore 🐟
Implemented the valueOf and
toPlainDate methods for PlainMonthDay objects.
This completes the implementation of
Temporal PlainMonthDay objects in JSC!
WebKitGTK 🖥️
The GTK port has gained support for
interpreting touch input as pointer
events. This
matches the behaviour of other browsers by following the corresponding
specifications.
WPE WebKit 📟
Fixed an issue that caused secondary (right) mouse
button presses to prevent WPE from processing further
pointer and input events.
WPE Platform API 🧩
We landed a patch to add a new signal
in WPEDisplay to notify when the connection to the native display has been lost.
Support for building against
Enchant 1.x has been removed, and only version 2 will be
supported going forward. The first stable release to require Enchant 2.x will be 2.52.0, due
in March 2026. Major Linux and BSD distributions have included Enchant 2
packages for years, so this change is not expected to cause any
trouble. The Enchant library is used by the GTK port for spell checking.
Community & Events 🤝
We have published an
article detailing our
work making MathML
interoperable across browser engines! It has live demonstrations and feature
tables with our progress on WebKit support.
We have published new blog posts highlighting the most important changes in
both WPE WebKit
and WebKitGTK 2.50.
Enjoy!
This fall, the WPE WebKit team has released the 2.50 series of the Web engine after six months of hard work. Let’s have a deeper look at some of the most interesting changes in this release series!
Improved rendering performance
For this series, the threaded rendering implementation has been switched to use the Skia API. What has changed is the way we record the painting commands for each layer. Previously we used WebCore’s built-in mechanism (DisplayList) which is not thread-safe, and led to obscure rendering issues in release builds and/or sporadic assertions in debug builds when replaying the display lists in threads other than the main one. The DisplayList usage was replaced with SkPictureRecorder, Skia’s built-in facility, that provides similar functionality but in a thread-safe manner. Using the Skia API, we can leverage multithreading in a reliable way to replay recorded drawing commands in different worker threads, improving rendering performance.
An experimental hybrid rendering mode has also been added. In this mode, WPE WebKit will attempt to use GPU worker threads for rendering but, if these are busy, CPU worker threads will be used whenever possible. This rendering mode is still under investigation, as it is unclear whether the improvements are substantial enough to justify the extra complexity.
Damage propagation to the system compositor, which was added during the 2.48 cycle but remained disabled by default, has now been enabled. The system compositor may now leverage the damage information for further optimization.
Vertical writing-mode rendering has also received improvements for this release series.
Changes in Multimedia support
When available in the system, WebKit can now leverage the XDG desktop portal for accessing capture devices (like cameras) so that no specific sandbox exception is required. This provides secure access to capture devices in browser applications that use WPE WebKit.
Managed Media Source support has been enabled. This potentially improves multimedia playback, for example in mobile devices, by allowing the user agent to react to changes in memory and CPU availability.
Transcoding now uses the GStreamer built-in uritranscodebin element instead of GstTranscoder, which improves the stability of media recording cases that need transcoding.
SVT-AV1 encoder support has been added to the media backend.
WebXR support
The previous WebXR implementation had been stagnating since it was first introduced and had a number of shortcomings. It was removed in favor of a new implementation, also built using OpenXR, that better adapts to the multiprocess architecture of WebKit.
This feature is considered experimental in 2.50, and while it is complete enough to load and display a number of immersive experiences, a number of improvements and optional features continue to be actively developed. Therefore, WebXR support needs to be enabled at build time with the ENABLE_WEBXR=ON CMake option.
Android support
Support for Android targets has been greatly improved. It is now possible to build WPE WebKit without the need for additional patches when using the libwpe-based WPEBackend-android. This was achieved by incorporating changes that make WebKit use more appropriate defaults (like disabling MediaSession) or using platform-specific features (like ASharedMemory and AHardwareBuffer) when targeting Android.
The WebKit logging system has gained support for the Android logd service. This is particularly useful for both WebKit and application developers, as it allows configuring logging channels at runtime in any WPE WebKit build; for example, logging channels may be enabled before launching an application to debug WebGL setup and multimedia playback errors.
There is an ongoing effort to enable the WPEPlatform API on Android, and while it builds now, rendering is not yet working.
Web Platform support
As usual, changes in this area are extensive as WebKit constantly adopts, improves, and supports new Web Platform features. However, some interesting additions in this release cycle include:
Work continues on the new WPEPlatform API, which is still shipped as a preview feature in the 2.50 series and needs to be explicitly enabled at build time with the ENABLE_WPE_PLATFORM=ON CMake option. The API may still change, and applications developed using WPEPlatform are likely to need changes with future WPE WebKit releases; but not for long: the current goal is to have it ready and enabled by default for the upcoming 2.52 series.
One of the main changes is that WPEPlatform now gets built into libWPEWebKit. The rationale for this change is to avoid shipping two copies of shared code from the Web Template Framework (WTF), which saves both disk space and memory. The wpe-platform-2.0 pkg-config module is still shipped, which allows application developers to know whether WPEPlatform support has been built into WPE WebKit.
The abstract base class WPEScreenSyncObserver has been introduced, and allows platform implementations to notify on display synchronization, allowing WebKit to better pace rendering.
WPEPlatform has gained support for controllers like gamepads and joysticks through the new WPEGamepadManager and WPEGamepad classes. When building with the ENABLE_MANETTE=ON CMake option, a built-in implementation based on libmanette is used by default if a custom one is not specified.
WPEPlatform now includes a new WPEBufferAndroid class, used to represent graphics buffers backed by AHardwareBuffer. These buffers support being imported into an EGLImage using wpe_buffer_import_to_egl_image().
As part of the work to improve Android support, the buffer rendering and release fences have been moved from WPEBufferDMABuf to the base class, WPEBuffer. This is leveraged by WPEBufferAndroid, and should be helpful if more buffer types are introduced in the future.
Other additions include clipboard support, Interaction Media Features, and an accessibility implementation using ATK.
What’s new for WebKit developers?
WebKit now supports sending tracing marks and counters to Sysprof. Marks indicate when certain events occur and their duration; while counters track variables over time. Together, these allow developers to find performance bottlenecks and monitor internal WebKit performance metrics like frame rates, memory usage, and more. This integration enables developers to analyze the performance of applications, including data for WebKit alongside system-level metrics, in a unified view. For more details see this article, which also details how Sysprof was improved to handle the massive amounts of data produced by WebKit.
Finally, GCC 12.2 is now the minimum required version to build WPE WebKit. Increasing the minimum compiler version allows us to remove obsolete code and focus on improving code quality, while taking advantage of new C++ and compiler features.
Looking forward to 2.52
The 2.52 release series will bring even more improvements, and we expect it to be released during the spring of 2026. Until then!
Update on what happened in WebKit in the week from November 17 to November 24.
In this week's rendition, the WebView snapshot API was enabled on the WPE
port, further progress on the Temporal and Trusted Types implementations,
and the release of WebKitGTK and WPE WebKit 2.50.2.
Cross-Port 🐱
A WebKitImage-based implementation of WebView snapshots landed this week, enabling this feature on WPE when it was previously only available in GTK. This means you can now use webkit_web_view_get_snapshot() (and webkit_web_view_get_snapshot_finish()) to get a WebKitImage representation of your snapshot.
WebKitImage implements the GLoadableIcon interface (as well as GIcon), so you can get a PNG-encoded image using g_loadable_icon_load().
Removed an incorrect early return in Trusted Types DOM attribute handling to align with spec changes.
JavaScriptCore 🐟
The built-in JavaScript/ECMAScript engine for WebKit, also known as JSC or SquirrelFish.
In JavaScriptCore's implementation of Temporal, implemented the with method for PlainMonthDay objects.
In JavaScriptCore's implementation of Temporal, implemented the from and equals methods for PlainMonthDay objects.
Releases 📦️
WebKitGTK 2.50.2 and WPE WebKit 2.50.2 have been released. These stable releases include a number of patches for security issues, and as such a new security advisory, WSA-2025-0008, has been issued (GTK, WPE).
It is recommended to apply an additional patch that fixes building when the JavaScriptCore “CLoop” interpreter is enabled, which is typical for architectures where JIT compilation is unsupported. Releases after 2.50.2 will include it, and manual patching will no longer be needed.
Update on what happened in WebKit in the week from November 10 to November 17.
This week's update is composed of a new CStringView internal API, more
MathML progress with the implementation of the "scriptlevel" attribute,
the removal of the Flatpak-based SDK, and the maintenance update of
WPEBackend-fdo.
Cross-Port 🐱
Implemented the MathML scriptlevel attribute using math-depth.
Finished implementing CStringView, a wrapper around UTF-8 C strings. It allows you to recover the string without making any copies and to perform string operations safely by taking the encoding into account at compile time.
Releases 📦️
WPEBackend-fdo 1.16.1 has been released. This is a maintenance update which adds compatibility with newer Mesa versions.
Infrastructure 🏗️
Most of the Flatpak-based SDK was removed. Developers are warmly encouraged to use the new SDK for their contributions to the Linux ports; this SDK has been successfully deployed on the EWS and post-commit bots.
Update on what happened in WebKit in the week from November 3 to November 10.
This week brought a hodgepodge of fixes in Temporal and multimedia,
a small addition to the public API in preparation for future work,
plus advances in WebExtensions, WebXR, and Android support.
Cross-Port 🐱
The platform-independent part of the WebXR Hit Test Module has been implemented. The rest, including the FakeXRDevice mock implementation used for testing, will be done later.
On the WebExtensions front, parts of the WebExtensionCallbackHandler code have been rewritten to use more C++ constructs and helper functions, in preparation to share more code among the different WebKit ports.
A new WebKitImage utility class
landed this week. This image
abstraction is one of the steps towards delivering a new improved API for page
favicons, and it is also expected to
be useful for the WebExtensions work, and to enable the webkit_web_view_get_snapshot()
API for the WPE port.
Multimedia 🎥
GStreamer-based multimedia support for WebKit, including (but not limited to) playback, capture, WebAudio, WebCodecs, and WebRTC.
Videos with BT2100-PQ colorspace are now tone-mapped to
SDR in WebKit's compositor, ensuring
colours do not appear washed out.
Adaptation of WPE WebKit targeting the Android operating system.
One of the last pieces needed to have the WPEPlatform API working on Android
has been merged: a custom platform
EGL display implementation, and enabling the default display as fallback.
Community & Events 🤝
The dates for the next Web Engines Hackfest
have been announced: it will take place from Monday, June 15th to Wednesday,
June 17th. As in previous years, it will be possible to
attend both on-site and remotely for those who cannot travel to A Coruña.
The video recording for Adrian Pérez's “WPE Android 🤖 State of the Bot” talk from this year's edition of the WebKit Contributors' Meeting has been published. This was an update on what the Igalia WebKit team has done during the last year to improve WPE WebKit on Android, and what is coming up next.
Update on what happened in WebKit in the week from October 27 to November 3.
A calmer week this time! The GTK and WPE ports now implement the
RunLoopObserver infrastructure, which enables more sophisticated scheduling
in WebKit Linux ports, and webkit://gpu shows more information. On the Trusted
Types front, the timing of checks was changed to align with spec changes.
Cross-Port 🐱
Implemented the RunLoopObserver
infrastructure for GTK and WPE ports, a critical piece of technology previously
exclusive to Apple ports that enables sophisticated scheduling features like
OpportunisticTaskScheduler for optimal garbage collection timing.
The implementation refactored the GLib run loop to notify clients about
activity-state transitions (BeforeWaiting, Entry, Exit, AfterWaiting),
then moved from timer-based to
observer-based layer flushing for more
precise control over rendering updates. Finally, support was added for
cross-thread scheduling of RunLoopObservers, allowing the ThreadedCompositor
to use them and enabling deterministic
composition notifications across thread boundaries.
Changed timing of Trusted Types
checks within DOM attribute handling to align with spec changes.
Graphics 🖼️
The webkit://gpu page now shows more information, like the list of
preferred buffer formats, the list of supported buffer formats, threaded
rendering information, the number of MSAA samples, the view size, and the
toplevel state. It is also now possible to make the
page refresh automatically every given number of seconds by passing a
?refresh=<seconds> parameter in the URL.
Update on what happened in WebKit in the week from October 21 to October 28.
This week has again seen a spike in activity related to WebXR and graphics
performance improvements. Additionally, we got in some MathML additions, a
fix for hue interpolation, a fix for WebDriver screenshots, development
releases, and a blog post about memory profiling.
Cross-Port 🐱
Support for WebXR Layers has seen the
very first changes
needed to have them working on WebKit.
This is expected to take time to complete, but should bring improvements in
performance, rendering quality, latency, and power consumption down the road.
Work has started on the WebXR Hit Test
Module, which will allow WebXR
experiences to check for real world surfaces. The JavaScript API bindings were
added, followed by an initial XRRay
implementation. More work is needed to
actually provide data from device sensors.
Now that the WebXR implementation used for the GTK and WPE ports is closer to
the Cocoa ones, it was possible to unify the
code used to handle opaque buffers.
Implemented the text-transform: math-auto CSS value, which replaces the legacy mathvariant system and is
used to make identifiers italic in MathML Core.
Implemented the math-depth CSS
extension from MathML Core.
Graphics 🖼️
The hue interpolation
method
for gradients has been fixed. This is
expected to be part of the upcoming 2.50.2 stable release.
Paths that contain a single arc, oval, or line have been changed to use a
specialized code path, resulting in
improved performance.
WebGL content rendering will be handled by a new isolated process (dubbed “GPU
Process”) by default. This is the
first step towards moving more graphics processing out of the process that
handles Web content (the “Web Process”), which will result in
increased resilience against buggy graphics drivers and certain kinds of
malicious content.
The internal webkit://gpu page has been
improved to also display information
about the graphics configuration used in the rendering process.
WPE WebKit 📟
WPE Platform API 🧩
New, modern platform API that supersedes usage of libwpe and WPE backends.
The new WPE Platform, when using Skia (the default), now takes WebDriver
screenshots in the UI Process, using
the final assembled frame that was sent to the system compositor. This fixes
issues where some operations, like 3D CSS animations, were not correctly
captured in screenshots.
Releases 📦️
The first development releases for the current development cycle have been
published: WebKitGTK
2.51.1 and
WPE WebKit 2.51.1. These
are intended to let third parties test upcoming features and improvements and
as such bug reports for those are particularly welcome in
Bugzilla. We are particularly interested in reports
related to WebGL, now that it is handled in an isolated process.
Community & Events 🤝
Paweł Lampe has published a blog post that discusses GTK/WPE WebKit memory profiling using industry-standard tools and a built-in "Malloc Heap Breakdown" WebKit feature.
One of the main constraints that embedded platforms impose on the browsers is a very limited memory. Combined with the fact that embedded web applications tend to run actively for days, weeks, or even longer,
it’s not hard to imagine how important the proper memory management within the browser engine is in such use cases. In fact, WebKit and WPE in particular receive numerous memory-related fixes and improvements every year.
Before making any changes, however, the areas to fix or improve need to be narrowed down first. Like any C++ application, WebKit can have its memory profiled using a variety of industry-standard tools. Although such well-known
tools are really useful in the majority of use cases, they have limits that manifest themselves when applied to production-grade embedded systems in conjunction with long-running web applications.
In such cases, a very useful tool is a debug-only feature of WebKit itself called malloc heap breakdown, which this article describes.
Massif is a heap profiler that comes as part of the Valgrind suite. As its documentation states:
It measures how much heap memory your program uses. This includes both the useful space, and the extra bytes allocated for book-keeping and alignment purposes. It can also measure the size of your program’s stack(s),
although it does not do so by default.
Using Massif with WebKit is very straightforward and boils down to a single command:
The Malloc=1 environment variable set above is necessary to instruct WebKit to enable debug heaps that use the system malloc allocator.
Given some results are generated, the memory usage over time can be visualized using massif-visualizer utility. An example of such a visualization is presented in the image below:
While Massif has been widely adopted and used for many years now, from the very beginning, it suffered from a few significant downsides.
First of all, the way Massif instruments the profiled application introduces significant overhead that may slow down the application up to 2 orders of magnitude. In some cases, such overhead makes it simply unusable.
The other important problem is that Massif is snapshot-based, and hence, the level of detail is not ideal.
Heaptrack is a modern heap profiler developed as part of KDE. Below is its description from its git repository:
Heaptrack traces all memory allocations and annotates these events with stack traces. Dedicated analysis tools then allow you to interpret the heap memory profile to:
find hotspots that need to be optimized to reduce the memory footprint of your application
find memory leaks, i.e. locations that allocate memory which is never deallocated
find allocation hotspots, i.e. code locations that trigger a lot of memory allocation calls
find temporary allocations, which are allocations that are directly followed by their deallocation
At first glance, Heaptrack resembles Massif. However, a closer look at its architecture and features shows that it is much more than that. While it's fair to say the two are a bit similar, Heaptrack is in fact a
significant step forward.
Usage of Heaptrack to profile WebKit is also very simple. At the moment of writing, the most suitable way to use it is to attach to a certain running WebKit process using the following command:
heaptrack -p <PID>
while WebKit needs to be run with the system malloc (i.e. with Malloc=1), just as in the Massif case.
If profiling of, e.g., the web content process startup is essential, it is also recommended to use WEBKIT2_PAUSE_WEB_PROCESS_ON_LAUNCH=1, which adds a 30-second delay to the process startup.
When the profiling session is done, the analysis of the recordings is done using:
heaptrack --analyze <RECORDING>
The utility opened with the above shows various things, such as the memory consumption over time:
flame graphs of memory allocations with respect to certain functions in the code:
etc.
As Heaptrack records every allocation and deallocation, the data it gathers is very precise and full of details, especially when accompanied by stack traces arranged into flame graphs. Also, as Heaptrack
does instrumentation differently than e.g. Massif, it’s usually much faster in the sense that it slows down the profiled application only up to 1 order of magnitude.
Although the memory profilers such as above are really great for everyday use, their limitations on embedded platforms are:
they significantly slow down the profiled application — especially on low-end devices,
they effectively cannot be run for a longer period of time such as days or weeks, due to memory consumption,
they are not always provided in the images — and hence require additional setup,
they may not be buildable out of the box on certain architectures — thus requiring extra patching.
While the above limitations are not always a problem, usually at least one of them is, and often it turns into a blocker. For example, if the target device is very short on memory,
it may be basically impossible to run anything extra beyond the browser. Another example is a situation where the application slowdown caused by the profiler leads to different application behavior, such as a problem
that originally reproduced 100% of the time no longer reproducing.
Profiling WebKit's memory while addressing the above problems points towards a solution that does not involve any extra tools, i.e. instrumenting WebKit itself. Normally, adding such instrumentation to a C++ application
means a lot of work. Fortunately, in the case of WebKit, all that work is already done and can be easily enabled by using the Malloc heap breakdown.
In a nutshell, Malloc heap breakdown is a debug-only feature that enables memory allocation tracking within WebKit itself. Since it’s built into WebKit, it’s very lightweight and very easy to build, as it’s just about setting
the ENABLE_MALLOC_HEAP_BREAKDOWN build option. Internally, when the feature is enabled, WebKit switches to using debug heaps that use system malloc along with the malloc zone API
to mark objects of certain classes as belonging to different heap zones and thus allowing one to track the allocation sizes of such zones.
As the malloc zone API is specific to BSD-like OSes, the actual implementations (and usages) in WebKit have to be considered separately for Apple and non-Apple ports.
Malloc heap breakdown was originally designed only with Apple ports in mind, with the reason being twofold:
The malloc zone API is provided virtually by all platforms that Apple ports integrate with.
macOS provides a great utility called footprint that allows one to inspect per-zone memory statistics for a given process.
Given the above, usage of malloc heap breakdown with Apple ports is very smooth and as simple as building WebKit with the ENABLE_MALLOC_HEAP_BREAKDOWN build option and running on macOS while using the footprint utility:
footprint is a macOS-specific tool that allows the developer to check memory usage across regions.
Since all of the non-Apple WebKit ports are mostly being built and run on non-BSD-like systems, it’s safe to assume the malloc zone API is not offered to such ports by the system itself.
Because of the above, for many years, malloc heap breakdown was only available for Apple ports.
The idea behind the integration for non-Apple ports is to provide a simple WebKit-internal library with a fake <malloc/malloc.h> header, along with a simple implementation that provides the malloc_zone_*() functions
as proxy calls to malloc(), calloc(), realloc(), etc., plus a tracking mechanism that keeps references to memory chunks. This approach gathers all the information needed for later reporting.
At the moment of writing, the above allows two methods of reporting the memory usage statistics periodically:
By default, when WebKit is built with ENABLE_MALLOC_HEAP_BREAKDOWN, the heap breakdown is printed to the standard output every few seconds for each process. That can be tweaked by setting the WEBKIT_MALLOC_HEAP_BREAKDOWN_LOG_INTERVAL=<SECONDS>
environment variable.
The results have a structure similar to the one below:
Given the allocation statistics per-zone, it’s easy to narrow down the unusual usage patterns manually. The example of a successful investigation is presented in the image below:
Moreover, the data presented can be processed either manually or using scripts to create memory usage charts that span as long as the application lifetime so e.g. hours (20+ like below), days, or even longer:
Periodic reporting to sysprof
The other reporting mechanism currently supported is reporting periodically to sysprof as counters. In short, sysprof is a modern system-wide profiling tool
that already integrates with WebKit very well when it comes to non-Apple ports.
The condition for malloc heap breakdown reporting to sysprof is that the WebKit browser needs to be profiled e.g. using:
sysprof-cli -f -- <BROWSER_COMMAND>
and sysprof has to be as recent a version as possible.
With the above, the memory usage statistics can then be inspected using the sysprof utility, as shown in the image below:
In the case of sysprof, memory statistics are just a minor addition to other powerful features that were well described in this blog post from Georges.
While malloc heap breakdown is very useful in some use cases — especially on embedded systems — there are a few problems with it.
First of all, compilation with -DENABLE_MALLOC_HEAP_BREAKDOWN=ON is not guarded by any continuous integration bots; therefore, compilation issues are to be expected on the latest WebKit main. Fortunately, fixing the problems
is usually straightforward. For a reference on what usually causes compilation problems, one should refer to 299555@main, which contains a full variety of fixes.
The second problem is that malloc heap breakdown uses WebKit's debug heaps, and hence the memory usage patterns may differ simply because the system malloc is used.
The third, and final, problem is that the malloc heap breakdown integration for non-Apple ports introduces some overhead, as allocations need to lock/unlock a mutex, and statistics are stored in memory as well.
Although malloc heap breakdown can be considered fairly constrained, in the case of non-Apple ports, it gives some additional possibilities that are worth mentioning.
Because a custom library is used on non-Apple ports to track allocations (as mentioned at the beginning of the Malloc heap breakdown on non-Apple ports section), it's very easy
to add more sophisticated tracking/debugging/reporting capabilities. The only file that requires changes in such a case is:
Source/WTF/wtf/malloc_heap_breakdown/main.cpp.
Some examples of custom modifications include:
adding different reporting mechanisms — e.g. writing to a file, or to some other tool,
reporting memory usage with more details — e.g. reporting the per-memory-chunk statistics,
dumping raw memory bytes — e.g. when some allocations are suspicious,
altering memory in-place — e.g. to simulate memory corruption.
While the presented malloc heap breakdown mechanism is a rather rough approximation of what industry-standard tools offer, its main benefit is that it's built into WebKit, and that in some rare use cases (especially on
embedded platforms), it's the only way to perform any reasonable profiling.
In general, as a rule of thumb, it’s not recommended to use malloc heap breakdown unless all other methods have failed. In that sense, it should be considered a last resort approach. With that in mind, malloc heap breakdown
can be seen as a nice mechanism complementing other tools in the toolbox.
Update on what happened in WebKit in the week from October 13 to October 20.
This week was calmer than the previous one, but we still had some
meaningful updates: a Selenium update, improvements to
how tile sizes are calculated, and a new Igalian in the list
of WebKit committers!
Cross-Port 🐱
Selenium's relative locators are now supported after commit 301445@main. Before, finding elements with locate_with(By.TAG_NAME, "input").above({By.ID: "password"}) could lead to "Unsupported locator strategy" errors.
Graphics 🖼️
A patch landed to compute the layers' tile size using a different strategy depending on whether GPU rendering is enabled, which improved performance for both GPU and CPU rendering modes.
Update on what happened in WebKit in the week from October 6 to October 13.
Another week with many updates in Temporal; the automated testing
infrastructure is now running WebXR API tests; and WebKitGTK gets
a fix for the janky Inspector resize while it drops support for
libsoup 2. Last but not least, there are fresh releases of both the
WPE and GTK ports, including a security fix.
Cross-Port 🐱
Multimedia 🎥
GStreamer-based multimedia support for WebKit, including (but not limited to) playback, capture, WebAudio, WebCodecs, and WebRTC.
JavaScriptCore 🐟
The built-in JavaScript/ECMAScript engine for WebKit, also known as JSC or SquirrelFish.
JavaScriptCore's implementation of
Temporal
received a flurry of improvements:
Implemented the toString,
toJSON, and toLocaleString methods for the PlainMonthDay type.
Brought the implementation of the round method on TemporalDuration
objects up to spec. This is
the last in the series of patches that refactor TemporalDuration methods to
use the InternalDuration type, enabling mathematically precise computations
on time durations.
Implemented basic support for
the PlainMonthDay type, without most methods yet.
Brought the implementations of the since and until functions on Temporal
PlainDate objects up to
spec, improving the precision
of computations.
WebKitGTK 🖥️
WebKitGTK will no longer support
using libsoup 2 for networking starting with version 2.52.0, due in March 2026.
An article on the
website has
more details and migration tips for application developers.
Fixed the jittering
bug of the docked Web Inspector window width and
height while dragging the resizer.
Releases 📦️
WebKitGTK
2.50.1 and
WPE WebKit 2.50.1 have
been released. These include a number of small fixes, improved text rendering
performance, and a fix for audio playback on Instagram.
A security advisory, WSA-2025-0007
(GTK,
WPE), covers one security
issue fixed in these releases. As usual, we recommend users and distributors to
keep their WPE WebKit and WebKitGTK packages updated.
Infrastructure 🏗️
Updated the API test runner to
run monado-service without standard input using XRT_NO_STDIN=TRUE, which
allows the WPE and GTK bots to start validating the WebXR API.
Submitted a change that allows
relaxing the DMA-BUF requirement when creating an OpenGL display in the
OpenXRCoordinator, so that bots can run API tests in headless environments that
don't have that extension.
Update on what happened in WebKit in the week from September 29 to October 6.
Another exciting weekful of updates! This time we have a number of fixes in
MathML, Content Security Policy, and Trusted Types; public API for
WebKitWebExtension has finally been added; and enumeration of speaker
devices was fixed. In addition to that, there's ongoing work to improve
compatibility with broken AAC audio streams in MSE, a performance improvement
to text rendering with Skia was merged, and multi-plane DMA-BUF handling in
WPE was fixed. Last but not least, the 2026 edition of the Web Engines
Hackfest has been announced! It will take place from June 15th to the 17th.
In JavaScriptCore's implementation of Temporal, improved the precision of calculations with the total() function on Durations. This was joint work with Philip Chimento.
In JavaScriptCore's implementation of Temporal, continued refactoring addition for Durations to be closer to the spec.
Graphics 🖼️
Landed a patch to build a SkTextBlob when recording DrawGlyphs operations for the GlyphDisplayListCache, which shows a significant improvement in MotionMark “design” test when using GPU rendering.
WPE WebKit 📟
WPE Platform API 🧩
New, modern platform API that supersedes usage of libwpe and WPE backends.
Improved wpe_buffer_import_to_pixels() to work correctly on non-linear and multi-plane DMA-BUF buffers by taking into account their modifiers when mapping the buffers.
Update on what happened in WebKit in the week from September 22 to September 29.
Lots of news this week! We've got a performance improvement in the Vector
implementation, a fix that makes an SVG attribute work the same as its HTML
counterpart, and further advancements in WebExtension support. We also saw an
update to WPE Android, which gained WebXR support; the test infrastructure can
now run WebXR tests; and a rather comprehensive blog post about the
performance considerations of WPE WebKit with regards to the DOM tree.
Cross-Port 🐱
Vector copy performance was improved across the board, and especially for MSE use cases.
Fixed SVG <a> rel attribute to work the same as HTML <a>'s.
Work on WebExtension support continues with more Objective-C converted to C++, which allows all WebKit ports to reuse the same utility code.
WPE now supports importing pixels from non-linear DMABuf formats since commit 300687@main. This will help the work to make WPE take screenshots from the UIProcess (WIP) instead of from the WebProcess, so they match better what's actually shown on the screen.
Adaptation of WPE WebKit targeting the Android operating system.
WPE-Android is being updated to use WPE WebKit 2.50.0. As usual, the ready-to-use packages will arrive in a few days to the Maven Central repository.
Added support to run WebXR content on Android, by using AHardwareBuffer to share graphics buffers between the main process and the content rendering process. This required coordination to make the WPE-Android runtime glue expose the current JavaVM and Activity in a way that WebKit could then use to initialize the OpenXR platform bindings.
Community & Events 🤝
Paweł Lampe has published in his blog the first post in a series about different aspects of Web engines that affect performance, with a focus on WPE WebKit and interesting comparisons between desktop-class hardware and embedded devices. This first article analyzes how “idle” nodes in the DOM tree render measurable effects on performance (pun intended).
Infrastructure 🏗️
The test infrastructure can now run API
tests that need WebXR support, by
using a dummy OpenXR compositor provided by the Monado
runtime, along with the first tests and an additional one
that make use of this.
Designing performant web applications is not trivial in general. Nowadays, as many companies decide to use web platform on embedded devices, the problem of designing performant web applications becomes even more complicated.
Typical embedded devices are orders of magnitude slower than desktop-class ones. Moreover, the proportion between CPU and GPU power is commonly different as well. This usually results in unexpected performance bottlenecks
when the web applications designed with desktop-class devices in mind are being executed on embedded environments.
In order to help web developers approach the difficulties that the usage of web platform on embedded devices may bring, this blog post initiates a series of articles covering various performance-related aspects
in the context of WPE WebKit usage on embedded devices. The coverage in general will include:
introducing the demo web applications dedicated to showcasing use cases of a given aspect,
benchmarking and profiling the WPE WebKit performance using the above demos,
discussing the causes for the performance measured,
inferring some general pieces of advice and rules of thumb based on the results.
This article, in particular, discusses the overhead of nodes in the DOM tree when it comes to layout. It does that primarily by investigating the impact of idle nodes, which introduce the least overhead and hence
may serve as a lower bound for any general considerations. With the data presented in this article, it should be clear how the DOM tree size/depth scales in the case of embedded devices.
Historically, the DOM trees emerging from the usual web page designs were rather limited in size and fairly shallow. This was the case as there were
no reasons for them to be excessively large unless the web page itself had a very complex UI. Nowadays, not only are the DOM trees much bigger and deeper, but they also tend to contain idle nodes that artificially increase
the size/depth of the tree. The idle nodes are the nodes in the DOM that are active yet do not contribute to any visual effects. Such nodes are usually a side effect of using various frameworks and approaches that
conceptualize components or services as nodes, which then participate in various kinds of processing utilizing JavaScript. Other than idle nodes, the DOM trees are usually bigger and deeper nowadays, as there
are simply more possibilities that emerged with the introduction of modern APIs such as Shadow DOM,
Anchor positioning, Popover, and the like.
In the context of web platform usage on embedded devices, the natural consequence of the above is that web designers require more knowledge of how a particular browser's performance scales with the DOM tree size and shape.
Before considering embedded devices, however, it's worth taking a brief look at how various web engines scale on desktop as the DOM tree grows in depth.
In short, the above demo measures the average duration of a benchmark function run, where the run does the following:
changes the text of a single DOM element to a random number,
forces a full tree layout.
Moreover, the demo allows one to set 0 or more parent idle nodes for the node holding text, so that the layout must consider those idle nodes as well.
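The measurement loop at the heart of such a demo can be sketched in plain JavaScript. This is an illustrative reconstruction, not the demo's actual source; the `holder` element and the iteration count are assumptions. In a browser, reading a layout property such as `offsetHeight` right after a DOM mutation is what forces the synchronous full-tree layout:

```javascript
// Measure the average duration (in ms) of one benchmark run.
function measure(run, iterations) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) run(i);
  return (performance.now() - start) / iterations;
}

// In the browser, a single run changes the text of one DOM element
// to a random number and then forces a full tree layout:
//
//   const holder = document.getElementById('t'); // hypothetical holder node
//   const run = () => {
//     holder.textContent = String(Math.random());
//     document.body.offsetHeight; // reading layout data forces a re-layout
//   };
//   console.log(measure(run, 10000).toFixed(3) + ' ms per run');
```

Wrapping the idle nodes around `holder` (the `ns=N` parameter below) then makes each forced layout traverse those extra nodes as well.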
The parameters used in the URL above mean the following:
vr=0 — the results are reported to the console. Alternatively (vr=1), at the end of benchmarking (~23 seconds), the result appears on the web page itself.
ms=1 — the results are reported in “milliseconds per run”. Alternatively (ms=0), “runs per second” are reported instead.
dv=0 — the idle nodes use the <span> tag. Alternatively (dv=1), the <div> tag is used instead.
ns=N — N idle nodes are added.
The idea behind the experiment is to check how much overhead is added as the number of extra idle nodes (ns=N) in the DOM tree increases. Since the browsers used in the experiments cannot be compared fairly for various reasons,
instead of concrete numbers in milliseconds, the results are presented in relative terms for each browser separately. This means that the benchmarking result for ns=0 serves as a baseline, and other results show the relative duration
increase over that baseline, where, e.g., a 300% increase means 3 times the baseline duration.
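As a concrete illustration of this reporting scheme (the helper name is ours, not the demo's):

```javascript
// Relative duration increase over the ns=0 baseline, in percent.
const relativeIncrease = (duration, baseline) => (duration / baseline - 1) * 100;

// E.g. a run taking 2.0 ms against a 0.5 ms baseline:
console.log(relativeIncrease(2.0, 0.5)); // → 300, i.e. 3x the baseline duration
```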
The results for a few mainstream browsers/browser engines (WebKit GTK MiniBrowser [09.09.2025], Chromium 140.0.7339.127, and Firefox 142.0) and a few experimental ones (Servo [04.07.2024] and Ladybird [30.06.2024])
are presented in the image below:
As the results show, the trends among all the browsers are very close to linear. This means the overhead is very easy to assess, as N times more idle nodes will usually result in N
times the overhead.
Moreover, up until 100-200 extra idle nodes in the tree, the overhead trends are very similar in all the browsers except for experimental Ladybird. That in turn means that even for big web applications, it’s safe to
assume the overhead among the browsers will be very much the same. Finally, past the 200-extra-idle-nodes threshold, the overhead across browsers diverges, most likely because browsers are not
optimizing such cases, given the lack of real-world use cases.
All in all, the conclusion is that on desktop, only very large / specific web applications should be cautious about the overhead of nodes, as modern web browsers/engines are very well optimized for handling substantial amounts
of nodes in the DOM.
When it comes to embedded devices, the above conclusions are no longer applicable. To demonstrate that, a minimal browser utilizing WPE WebKit is used to run the demo from the previous section both on desktop and
on the NXP i.MX8M Plus platform. The latter is a popular choice for embedded applications, as it has quite an interesting set of features while still having strong specifications, which may be compared to those of a Raspberry Pi 5.
The results are presented in the image below:
This time, the Y axis presents the duration (in milliseconds) of a single benchmark run, and hence makes it very easy to reason about overhead. As the results show, in the case of the desktop, 100 extra idle nodes in the DOM
introduce barely noticeable overhead. On the other hand, on an embedded platform, even without any extra idle nodes, the time to change and layout the text is already taking around 0.6 ms. With 10 extra idle nodes, this
duration increases to 0.75 ms — thus yielding 0.15 ms overhead. With 100 extra idle nodes, such overhead grows to 1.3 ms.
One may argue whether 1.3 ms is a lot, but consider an application that e.g. does 60 FPS rendering: the
time at the application's disposal each frame is below 16.67 ms, and 1.3 ms is ~8% of that, which is very considerable. Similarly, for the application to be perceived as responsive, the input-to-output latency should usually
be under 20 ms. Again, 1.3 ms is a significant overhead for such a scenario.
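The frame-budget arithmetic above can be spelled out explicitly (the numbers are this article's i.MX8M Plus measurements):

```javascript
// At 60 FPS, each frame has a fixed time budget.
const frameBudgetMs = 1000 / 60;  // ~16.67 ms per frame
const overheadMs = 1.3;           // overhead of 100 extra idle nodes on i.MX8M Plus

const share = (overheadMs / frameBudgetMs) * 100;
console.log(share.toFixed(1) + '% of the frame budget'); // → "7.8% of the frame budget"
```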
Given the above, it's safe to state that ~20 extra idle nodes should be considered the safe maximum for embedded devices in general. In the case of low-end embedded devices, i.e. ones comparable to a Raspberry Pi 1 or 2,
the maximum should be even lower, but proper benchmarking is required to come up with concrete numbers.
While the previous subsection demonstrated that on embedded devices adding extra idle nodes as parents must usually be done responsibly, it's worth examining whether there are nuances that need to be considered as
well.
The first matter that one may wonder about is whether there’s any difference between the overhead of idle nodes being inlines (display: inline) or blocks (display: block). The intuition here may be that, as idle nodes
have no visual impact on anything, the overhead should be similar.
To verify the above, the demo from the Desktop considerations section can be used, with the dv parameter controlling whether the extra idle nodes are blocks (1, <div>) or inlines (0, <span>).
The results from such experiments — again, executed on NXP i.MX8M Plus — are presented in the image below:
While in the safe range of 0-20 extra idle nodes the results are very similar, it's evident that, in general, idle nodes of the block type introduce more overhead.
The reason for the above is that, for layout purposes, the handling of inline and block elements is very different. Inline elements sharing the same line can be thought of as being flattened within the so-called
line box tree. Block elements, on the other hand, have to be represented as a tree.
To show the above visually, it's interesting to compare Sysprof flamegraphs of the WPE WebProcess from the scenarios comprising 20 idle nodes, using either <span> or <div> for the idle nodes:
idle <span> nodes:
idle <div> nodes:
The first flamegraph shows that there's no clear dependency between the call stack and the number of idle nodes. The second one, on the other hand, shows exactly the opposite: each of the extra idle nodes is
visible as adding extra calls. Moreover, each extra idle block node adds some overhead, giving the flamegraph a pyramidal shape.
Another nuance worth exploring is the overhead of text nodes created because of whitespace.
When the DOM tree is created from the HTML, a lot of text nodes are usually created just because of whitespace. That's because the HTML usually looks like:
<span> <span> (...) </span> </span>
rather than:
<span><span>(...)</span></span>
which makes sense from the readability point of view. From the performance point of view, however, more text nodes naturally mean more overhead. When such redundant text nodes are combined with
idle nodes, the net outcome may be that each extra idle node adds some extra overhead.
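One way to avoid those redundant text nodes is to collapse the whitespace between tags before the markup reaches the browser. A naive sketch of such a build-time step (our own helper, not part of the demos; it is not safe for whitespace-sensitive content such as <pre> or significant inline spacing):

```javascript
// Remove whitespace-only gaps between tags so that no redundant
// text nodes are created when the browser parses the markup.
function stripInterTagWhitespace(html) {
  return html.replace(/>\s+</g, '><');
}

console.log(stripInterTagWhitespace('<span> <span>x</span> </span>'));
// → "<span><span>x</span></span>"
```

Note that text mixed with other characters (actual content) is left untouched; only whitespace-only runs between a closing `>` and an opening `<` are collapsed.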
To verify the above hypothesis, a demo similar to the one above can be used alongside it to perform a series of experiments comparing the approaches with and without redundant whitespace:
random-number-changing-in-the-tree-w-whitespaces.html?vr=0&ms=1&dv=0&ns=0.
The only difference between the demos is that the w-whitespaces one creates the DOM tree with artificial whitespace, simulating how it would look if written as a formatted document. The comparison results
from the experiments run on NXP i.MX8M Plus are presented in the image below:
As the numbers suggest, the overhead of redundant text nodes is rather small on a per-idle-node basis. However, as the number of idle nodes scales, so does the overhead; around 100 extra idle nodes, it is
already noticeable. Therefore, a natural conclusion is that redundant text nodes should be avoided — especially as the number of nodes in the tree becomes significant.
The last topic that deserves a closer look is whether adding idle nodes as siblings is better than adding them as parent nodes. In theory, having the extra nodes added as siblings should be better: the layout engine
still has to consider them, yet it won't mark them with a dirty flag and hence won't have to lay them out.
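The two shapes under comparison can be sketched as markup generators (the function names and the `id` are illustrative, not taken from the demos):

```javascript
// N idle nodes as parents: each one wraps the text holder, so a change
// to the holder dirties the whole chain up to the root.
function nestedMarkup(n) {
  return '<span>'.repeat(n) + '<span id="t"></span>' + '</span>'.repeat(n);
}

// N idle nodes as siblings: they sit next to the text holder under a
// single parent, and stay clean when only the holder's text changes.
function siblingMarkup(n) {
  return '<span>' + '<span></span>'.repeat(n) + '<span id="t"></span>' + '</span>';
}

console.log(nestedMarkup(2));
// → "<span><span><span id=\"t\"></span></span></span>"
```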
The experiment results corroborate the theoretical considerations made above: idle nodes added as siblings indeed introduce less layout overhead. The savings are not very large from a single idle node's perspective,
but once scaled up, they are significant enough to justify reorganizing the DOM tree (if possible).
The above experiments mostly emphasized idle nodes; however, the results can be extrapolated to regular nodes in the DOM tree. With that in mind, the overall conclusion from the experiments done in the former sections
is that DOM tree size and shape have a measurable impact on web application performance on embedded devices. Therefore, web developers should try to optimize them as early as possible and follow the general rules of thumb that
can be derived from this article:
Nodes are not free, so they should always be added with extra care.
Idle nodes should be limited to ~20 on mid-end and ~10 on low-end embedded devices.
Idle nodes should be inline elements, not block ones.
Redundant whitespaces should be avoided — especially with idle nodes.
Nodes (especially idle ones) should be added as siblings.
Although the above serves as great guidance, for better results it's recommended to do proper browser benchmarking on a given target embedded device, as long as it's feasible.
Also, following the above set of rules is not recommended on desktop-class devices, as in that case it can be considered a premature optimization. Unless the particular web application yields an exceptionally large
DOM tree, the gains won't be worth the time spent optimizing.
Update on what happened in WebKit in the week from September 15 to September 22.
The first release in a new stable series is now out! And despite that,
the work continues on WebXR, multimedia reliability, and WebExtensions
support.
Cross-Port 🐱
Fixed running WebXR tests in the WebKit build infrastructure, and made a few more of them run. This both increases the amount of WebXR code covered during test runs, and helps prevent regressions in the future.
As part of the ongoing work to get WebExtensions support in the GTK and WPE WebKit ports, a number of classes have been converted from Objective-C to C++, in order to share their functionality among all ports.
Multimedia 🎥
GStreamer-based multimedia support for WebKit, including (but not limited to) playback, capture, WebAudio, WebCodecs, and WebRTC.
WebKitGTK 2.50.0 and WPE WebKit 2.50.0 are now available. These are the first releases of a new stable series, and are the result of the last six months of work. This development cycle focused on rendering performance improvements, improved support for font features, and more. New public API has been added to obtain the theme color declared by Web pages.
For those who take longer to integrate newer releases, which we know can be a lengthy process when targeting embedded devices, we have also published WPE WebKit 2.48.7 with a few stability and security fixes.
Accompanying these releases there is security advisory WSA-2025-0006 (GTK, WPE), with information about solved security issues. As usual, we encourage everybody to use the most recent versions where such issues are known to be fixed.
Update on what happened in WebKit in the week from September 8 to September 15.
The JavaScriptCore implementation of Temporal continues to be polished,
as does SVGAElement, and WPE and WebKitGTK accessibility tests can now
run (but they are not passing yet).
Cross-Port 🐱
Added support for the hreflang attribute on SVGAElement, which helps align it with HTMLAnchorElement.
An improvement in the harnessing code for a11y tests made it possible to unblock many tests marked as Timeout/Skip in the WPE WebKit and WebKitGTK ports. These tests are not passing yet, but at least they are running now.
The built-in JavaScript/ECMAScript engine for WebKit, also known as JSC or SquirrelFish.
In the JavaScriptCore (JSC) implementation of Temporal, refactored the implementations of the difference operations (since and until) for the TemporalPlainTime type in order to match the spec. This enables further work on Temporal, which is being done incrementally.
Update on what happened in WebKit in the week from September 1 to September 8.
In this week's installment of the periodical, we have better spec compliance of
JavaScriptCore's implementation of Temporal, an improvement in how gamepad events
are handled, WPE WebKit now implements a helper class which allows test baselines
to be aligned with other ports, and finally, an update on recent work on Sysprof.
Cross-Port 🐱
Until now, unrecognized gamepads didn't emit button presses or axis move events if they didn't map to the standard mapping layout according to W3C (https://www.w3.org/TR/gamepad/#remapping). Now we ensure that unrecognized gamepads always map to the standard layout, so events are always emitted if a button is pressed or the axis is moved.
JavaScriptCore 🐟
The built-in JavaScript/ECMAScript engine for WebKit, also known as JSC or SquirrelFish.
In the JavaScriptCore (JSC) implementation of Temporal, the compare() method on Temporal durations was modified to follow the spec, which increases the precision with which comparisons are made. This is another step towards a full spec-compliant implementation of Temporal in JSC.
WPE WebKit 📟
Added a specific implementation of the helper class ImageAdapter for WPE. This class allows loading image resources that until now were only shipped in WebKitGTK and other ports. This change aligned many WPE-specific test baselines with those of the rest of the WebKit ports, so those baselines have now been removed.
Community & Events 🤝
Sysprof has received a variety of new features, improvements, and bugfixes as part of its integration with WebKit. We have continued pushing on this front in the past 6 months! A few highlights:
An important bug with counters was fixed, and further integration was added to WebKit
It is now possible to hide marks from the waterfall view
Further work on the remote inspector integration, wkrictl, was done
Last year the WebKit project started to integrate its tracing routines with Sysprof. Since then, the feedback I've received about it is that it was a pretty big improvement in the development of the engine! Yay.
People started using Sysprof to gain insights into the internal states of WebKit, gather data on how long different operations took, and more. Eventually we started hitting some limitations in Sysprof, mostly in the UI itself, such as a lack of correlation and visualization features.
Earlier this year a rather interesting enhancement in Sysprof was added: it is now possible to filter the callgraph based on marks. What it means in practice is, it’s now possible to get statistically relevant data about what’s being executed during specific operations of the app.
In parallel to WebKit, recently Mesa merged a patch that integrates Mesa’s tracing routines with Sysprof. This brought data from yet another layer of the stack, and it truly enriches the profiling we can do on apps. We now have marks from the DRM vblank event, the compositor, GTK rendering, WebKit, Mesa, back to GTK, back to the compositor, and finally the composited frame submitted to the kernel. A truly full stack view of everything.
So, what’s the catch here? Well, if you’re an attentive reader, you may have noticed that the marks counter went from this last year:
To this, in March 2025:
And now, we’re at this number:
I do not jest when I say that this is a significant number! I mean, just look at this screenshot of a full view of marks:
Naturally, this is pushing Sysprof to its limits! The app is starting to struggle to handle such massive amounts of data. Having so much data also starts introducing noise in the marks: sometimes, for example, you don't care about the Mesa marks, or the WebKit marks, or the GLib marks.
Hiding Marks
The most straightforward and impactful improvement that could be done, in light of what was explained above, was adding a way to hide certain marks and groups.
Sysprof heavily uses GListModels, as is trendy in GTK4 apps, so marks, catalogs, and groups are all represented as lists containing lists containing items. It therefore felt natural to wrap these items in a new object with a visible property and filter by this property. Pretty straightforward.
Except it was not.
Turns out, the filtering infrastructure in GTK4 did not support monitoring items for property changes. After talking to GTK developers, I learned that this was just a missing feature that nobody got to implementing. Sounded like a great opportunity to enhance the toolkit!
It took some wrestling, but it worked: the reviews were fantastic and now GtkFilterListModel has a new watch-items property. It only works when the filter supports monitoring, so unfortunately GtkCustomFilter doesn't work here. The implementation is not exactly perfect, so further enhancements are always appreciated.
So behold! Sysprof can now filter marks out of the waterfall view:
Counters
Another area where we have lots of potential is counters. Sysprof supports tracking variables over time. This is super useful when you want to monitor, for example, CPU usage, I/O, network, and more.
Naturally, WebKit has quite a number of internal counters that would be lovely to have in Sysprof to do proper integrated analysis. So between last year and this year, that’s what I’ve worked on as well! Have a look:
Unfortunately it took a long time to land some of these contributions, because Sysprof seemed to be behaving erratically with counters. After months of fighting with it, I eventually figured out what was going on with the counters, and wrote the patch with probably my biggest commit message this year (beaten only by a few others, including a literal poem).
Wkrictl
WebKit also has a remote inspector, which has stats on JavaScript objects and whatnot. It needs to be enabled at build time, but it’s super useful when testing on embedded devices.
I’ve started working on a way to extract this data from the remote inspector, and stuff it into Sysprof as marks and counters. It’s called wkrictl. Have a look:
This is far from finished, but I hope to be able to integrate this when it’s more concrete and well developed.
Future Improvements
Over the course of a year, the WebKit project went from nothing to deep integration with Sysprof, and more recently this evolved into actual tooling built around that integration. This is awesome, and has helped my colleagues and other contributors work on the project in ways that simply weren't possible before.
There’s still *a lot* of work to do though, and it’s often the kind of work that will benefit everyone using Sysprof, not only WebKit. Here are a few examples:
Integrate JITDump symbol resolution, which allows profiling the JavaScript running on webpages. There's ongoing work on this, but it needs to be finished.
Per-PID marks and counters. Turns out, WebKit uses a multi-process architecture, so it would be better to redesign the marks and counters views to organize things by PID first, then groups, then catalogs.
A new timeline view. This is, strictly speaking, a condensed waterfall view, but it makes the relationship between “inner” and “outer” marks more obvious.
Performance tuning in Sysprof and GTK. We’re dealing with orders of magnitude more data than we used to, and the app is starting to struggle to keep up with it.
Some of these tasks involve new user interfaces, so it would be absolutely lovely if Sysprof could get some design love from the design team. If anyone from the design team is reading this, we'd love to have your help!
Finally, after all this Sysprof work, Christian kindly invited me to help co-maintain the project, which I accepted. I don't know how much time and energy I'll be able to dedicate, but I'll try and help however I can!
I’d like to thank Christian Hergert, Benjamin Otte, and Matthias Clasen for all the code reviews, for all the discussions and patience during the influx of patches.
This article is a continuation of the series on damage propagation. While the previous article laid some foundation on the subject, this one
discusses the cost (increased CPU and memory utilization) that the feature incurs, as this is highly dependent on design decisions and the implementation of the data structure used for storing damage information.
From the perspective of this article, the two key things worth remembering from the previous one are:
The damage propagation is an optional WPE/GTK WebKit feature that — when enabled — reduces the browser’s GPU utilization at the expense of increased CPU and memory utilization.
On the implementation level, the damage is almost always a collection of rectangles that cover the changed region.
As mentioned in the section about damage in the previous article,
the damage information describes a region that changed and requires repainting. It was also pointed out that such a description is usually done via a collection of rectangles. Although it's sometimes
better to describe a region in a different way, rectangles are a natural choice due to the very nature of damage in web engines, which originates from the box model.
A more detailed description of the damage nature can be inferred from the Pipeline details section of the
previous article. The bottom line is, in the end, the visual changes to the render tree yield the damage information in the form of rectangles.
For the sake of clarity, such original rectangles may be referred to as raw damage.
In practice, the above means that it doesn't matter whether, e.g., a circle is drawn on a 2D canvas or the background color of some block element changes — ultimately, rectangles (raw damage) are always produced
in the process.
As the raw damage is a collection of rectangles describing a damaged region, the geometrical consequence is that there may be more than one set of rectangles describing the same region.
This means that raw damage could be stored as a different set of rectangles and still precisely describe the original damaged region — e.g. when the raw damage contains more rectangles than necessary.
An example of different approximations of simple raw damage is depicted in the image below:
Changing the set of rectangles that describes the damaged region may be very tempting — especially when the size of the set could be reduced. However, the following consequences must be taken into account:
The damaged region could shrink if some damage information were lost, e.g. if too many rectangles were removed.
The damaged region could expand if some damage information were added, e.g. if too many or too-big rectangles were added.
The first consequence may lead to visual glitches when repainting. The second one, however, causes no visual issues but degrades performance since a larger area
(i.e. more pixels) must be repainted — typically increasing GPU usage. This means the damage information can be approximated as long as the trade-off between the extra repainted area and the degree of simplification
in the underlying set of rectangles is acceptable.
The approximation mentioned above refers to the situation where the approximated damaged region covers the original damaged region entirely, i.e. not a single pixel of information is lost. In that sense, the
approximation can only add extra information. Naturally, the smaller the extra area added to the original damaged region, the better.
The approximation quality can be referred to as damage resolution, which is:
low — when the extra area added to the original damaged region is significant,
high — when the extra area added to the original damaged region is small.
The examples of low (left) and high (right) damage resolutions are presented in the image below:
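The loss of resolution can also be quantified: for non-overlapping input rectangles, compare the area of the approximation against the exact damaged area. A hand-rolled sketch (the {x, y, w, h} rect shape is our own convention, not WebKit's):

```javascript
const area = r => r.w * r.h;

// Minimum bounding rectangle of a non-empty list of rects.
function boundingBox(rects) {
  const x1 = Math.min(...rects.map(r => r.x));
  const y1 = Math.min(...rects.map(r => r.y));
  const x2 = Math.max(...rects.map(r => r.x + r.w));
  const y2 = Math.max(...rects.map(r => r.y + r.h));
  return { x: x1, y: y1, w: x2 - x1, h: y2 - y1 };
}

// Two small changes in opposite corners of a 100x100 area:
const damage = [{ x: 0, y: 0, w: 10, h: 10 }, { x: 90, y: 90, w: 10, h: 10 }];
const exact = damage.reduce((sum, r) => sum + area(r), 0); // 200 px (no overlaps)
const approx = area(boundingBox(damage));                  // 10000 px

console.log(approx / exact); // → 50: a very low-resolution approximation
```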
Given the description of the damage properties presented in the sections above, it’s evident there’s a certain degree of flexibility when it comes to processing damage information. Such a situation is very fortunate in the
context of storing the damage, as it gives some freedom in designing a proper data structure. However, before jumping into the actual solutions, it’s necessary to understand the problem end-to-end.
Within the damage propagation pipeline, two kinds of damage can be distinguished:
layer damage — the damage tracked separately for each layer,
frame damage — the damage that aggregates individual layer damages and consists of the final damage of a given frame.
Assuming there are L layers and there is some data structure called Damage that can store the damage information, it’s easy to notice that there may be L+1 instances
of Damage present at the same time in the pipeline as the browser engine requires:
L Damage objects for storing layer damage,
1 Damage object for storing frame damage.
As there may be a lot of layers in more complex web pages, the L+1 mentioned above may be a very big number.
The first consequence of the above is that the Damage data structure in general should store the damage information in a very compact way to avoid excessive memory usage when L+1 Damage objects
are present at the same time.
The second consequence of the above is that the Damage data structure in general should be very performant, as each of the L+1 Damage objects may be involved in a considerable amount of processing when there are
lots of updates across the web page (and hence huge numbers of damage rectangles).
To better understand the above consequences, it’s essential to examine the input and the output of such a hypothetical Damage data structure more thoroughly.
A Damage becomes an input of another Damage in some situations that happen in the middle of the damage propagation pipeline, when broader damage is being assembled from smaller chunks of damage. What it consists
of depends purely on the Damage implementation.
The raw damage, on the other hand, becomes an input of the Damage always at the very beginning of the damage propagation pipeline. In practice, it consists of a set of rectangles that are potentially overlapping, duplicated, or empty. Moreover,
such a set is always as big as the set of changes causing visual impact. Therefore, in the worst case scenario such as drawing on a 2D canvas, the number of rectangles may be enormous.
Given the above, it's clear that the hypothetical Damage data structure should support 2 distinct input operations in the most performant way possible:
add(Damage) — appending the contents of another Damage object,
add(Rectangle) — appending a single raw damage rectangle.
When it comes to the Damage data structure output, there are 2 possibilities:
other Damage,
the platform API.
The Damage becomes the output of other Damage on each Damage-to-Damage append that was described in the subsection above.
The platform API, on the other hand, becomes the output of Damage at the very end of the pipeline e.g. when the platform API consumes the frame damage (as described in the
pipeline details section of the previous article).
In this situation, what’s expected on the output technically depends on the particular platform API. However, in practice, all platforms supporting damage propagation require a set of rectangles that describe the damaged region.
Such a set of rectangles is fed into the platforms via APIs by simply iterating the rectangles describing the damaged region and transforming them to whatever data structure the particular API expects.
The natural consequence of the above is that the hypothetical Damage data structure should support the following output operation — also in the most performant way possible:
forEachRectangle(...) — iterating over the rectangles that describe the stored damaged region.
Given all the above perspectives, the problem of designing the Damage data structure can be summarized as storing the input damage information to be accessed (iterated) later in a way that:
the performance of operations for adding and iterating rectangles is maximal (performance),
the memory footprint of the data structure is minimal (memory footprint),
the stored region covers the original region and has the area as close to it as possible (damage resolution).
With the problem formulated this way, it's obvious that this is a multi-criteria optimization problem with 3 criteria: performance, memory footprint, and damage resolution.
Given the problem of storing damage defined as above, it’s possible to propose various ways of solving it by implementing a Damage data structure. Before diving into details, however, it’s important to emphasize
that the weights of criteria may be different depending on the situation. Therefore, before deciding how to design the Damage data structure, one should consider the following questions:
What is the proportion between the power of GPU and CPU in the devices I’m targeting?
What are the memory constraints of the devices I’m targeting?
What are the cache sizes on the devices I’m targeting?
What is the balance between GPU and CPU usage in the applications I’m going to optimize for?
Are they more rendering-oriented (e.g. using WebGL, Canvas 2D, animations etc.)?
Are they more computing-oriented (frequent layouts, a lot of JavaScript processing etc.)?
Although answering the above usually points in the direction of a specific implementation, the answers are usually unknown and hence the implementation should be as generic as possible. In practice,
this means the implementation should not optimize with a strong focus on just one criterion. However, as there's no silver-bullet solution, it's worth exploring the multiple quasi-generic solutions that have been researched as
part of Igalia's work on damage propagation, which are the following:
Damage storing all input rects,
Bounding box Damage,
Damage using WebKit’s Region,
R-Tree Damage,
Grid-based Damage.
All of the above implementations are being evaluated along the 3 criteria the following way:
Performance
by specifying the time complexity of the add(Rectangle) operation, as add(Damage) can be transformed into a series of add(Rectangle) operations,
by specifying the time complexity of forEachRectangle(...) operation.
Memory footprint
by specifying the space complexity of Damage data structure.
The most natural, yet very naive, Damage implementation is one that wraps a simple collection (such as a vector) of rectangles and hence stores the raw damage in its original form.
In that case, the evaluation is as simple as evaluating the underlying data structure.
Assuming a vector data structure and O(1) amortized time complexity of insertion, the evaluation of such a Damage implementation is:
Performance
insertion is O(1) ✅
iteration is O(N) ❌
Memory footprint
O(N) ❌
Damage resolution
perfect ✅
Despite being trivial to implement, this approach is heavily skewed towards the damage resolution criterion. Essentially, the damage quality is the best possible, yet the expense is very poor
performance and a substantial memory footprint. This is because the number of input rects N can be very large, thus making the linear complexities unacceptable.
The other problem with this solution is that it performs no filtering and hence may store a lot of redundant rectangles. While the empty rectangles can be filtered out in O(1),
filtering out duplicates and some of the overlaps (one rectangle completely containing the other) would make insertion O(N). Naturally, such filtering
would lead to a smaller memory footprint and faster iteration in practice; however, the asymptotic complexities would not change.
The second simplest Damage implementation one can possibly imagine is one that stores just a single rectangle: the minimum bounding rectangle (bounding box) of all the damage
rectangles added into the data structure. The minimum bounding rectangle, as the name suggests, is the smallest rectangle that can fit all the input rectangles inside. This is well demonstrated in the picture below:
As this implementation stores just a single rectangle, and as the operation of taking the bounding box of two rectangles is O(1), the evaluation is as follows:
Performance
insertion is O(1) ✅
iteration is O(1) ✅
Memory footprint
O(1) ✅
Damage resolution
usually low ⚠️
Contrary to the Damage storing all input rects, this solution yields perfect performance and memory footprint at the expense of low damage resolution. In practice, however,
the damage resolution of this solution is not always low. More specifically:
in the optimistic cases (raw damage clustered), the area of the bounding box is close to the area of the raw damage inside,
in the average cases, the approximation of the damaged region suffers from covering significant areas that were not damaged,
in the worst cases (small damage rectangles on the other ends of a viewport diagonal), the approximation is very poor, and it may be as bad as covering the whole viewport.
As this solution requires minimal overhead while still providing a relatively useful damage approximation, in practice it is the baseline solution used in:
Chromium,
Firefox,
WPE and GTK WebKit when UnifyDamagedRegions runtime preference is enabled, which means it’s used in GTK WebKit by default.
When it comes to more sophisticated Damage implementations, the simplest approach in the case of WebKit is to wrap a data structure already implemented in WebCore called
Region. Its purpose
is just as the name suggests: to store a region. More specifically, it’s meant to store the rectangles describing a region in a way that is efficient both for storage and for access, taking advantage
of scanline coherence during rasterization. The key characteristic of the data structure is that it stores rectangles without overlaps, which is achieved by storing y-sorted lists of x-sorted, non-overlapping
rectangles. Another important property is that, due to the specific internal representation, the number of integers stored per rectangle is usually smaller than 4. There are some further useful properties
as well, which are, however, not very helpful in the context of storing damage. More details on the data structure itself can be found in J. E. Steinhart’s 1991 paper titled
SCANLINE COHERENT SHAPE ALGEBRA
published as part of the Graphics Gems II book.
The Damage implementation wrapping Region was actually used by the GTK and WPE ports as a first, more sophisticated alternative to the bounding box Damage. Just as expected,
it provided better damage resolution in some cases; however, it suffered from effectively degrading into a more expensive variant of the bounding box Damage in the majority of situations.
This was inevitable, as the implementation fell back to the bounding box Damage when the Region’s internal representation was getting too complex. In essence, it was addressing the Region’s biggest problem,
which is that it can effectively store O(N²) rectangles in the worst case due to the way it splits rectangles for storage. More specifically, as the Region stores ledges
and spans, each insertion of a new rectangle may lead to splitting O(N) existing rectangles. Such a situation is depicted in the image below, where 3 rectangles are split
into 9:
Putting the above fallback mechanism aside, the evaluation of Damage being a simple wrapper on top of Region is the following:
Performance
insertion is O(log N) ✅
iteration is O(N²) ❌
Memory footprint
O(N²) ❌
Damage resolution
perfect ✅
With the fallback added, the evaluation is technically the same as for the bounding box Damage once N exceeds the fallback point, yet with extra overhead. At the same time, for smaller N, the above evaluation
didn’t really matter much, as in that case the performance, memory footprint, and damage resolution were all very good.
Although this solution (with a fallback) yielded very good results in some simple scenarios (when N was small enough), it was not sustainable in the long run, as it did not address the majority of use cases,
where it was actually a bit slower than the bounding box Damage while producing similar results.
In the pursuit of more sophisticated Damage implementations, one can think of wrapping or adapting data structures similar to quadtrees, KD-trees, etc. However, in most such cases, a lot of unnecessary overhead is added,
as those data structures partition the space so that, in the end, the input is stored without overlaps. As overlaps are not necessarily a problem for storing damage information, the list of candidate data structures
can be narrowed down to the most performant data structures allowing overlaps. One of the most interesting such options is the R-Tree.
In short, an R-Tree (rectangle tree) is a tree data structure that allows storing multiple entries (rectangles) in a single node. While the leaf nodes of such a tree store the original
rectangles inserted into the data structure, each intermediate node stores the bounding box (minimum bounding rectangle, MBR) of its children. As the tree is balanced, this means that with every next
tree level from the top, the list of rectangles (either bounding boxes or original ones) gets bigger and more detailed. An example R-Tree is depicted in Figure 5 from
the Object Trajectory Analysis in Video Indexing and Retrieval Applications paper:
The above perfectly shows the differences between the rectangles on various levels and can also visually suggest some ideas when it comes to adapting such a data structure into Damage:
The first possibility is to make Damage a simple wrapper of R-Tree that would just build the tree and allow the Damage consumer to pick the desired damage resolution on iteration attempt. Such an approach is possible
as having the full R-Tree allows the iteration code to limit iteration to a certain level of the tree or to various levels from separate branches. The latter allows Damage to offer a particularly interesting API where the
forEachRectangle(...) function could accept a parameter specifying how many rectangles (at most) are expected to be iterated.
The other possibility is to make Damage an adaptation of R-Tree that conditionally prunes the tree while constructing it not to let it grow too much, yet to maintain a certain height and hence certain damage quality.
Regardless of the approach, the R-Tree construction also allows one to implement a simple filtering mechanism that, on the fly, eliminates input rectangles that are duplicates of (or contained in) existing ones. However,
such filtering is not very effective, as it can only consider a limited set of rectangles, i.e. the ones encountered during the traversal required by insertion.
Damage as a simple R-Tree wrapper
Although this option may be considered very interesting, in practice, storing all the input rectangles in the R-Tree means storing N rectangles along with the overhead of a tree structure. In the worst-case scenario
(node size of 2), the number of nodes in the tree may be as big as O(N), thus adding a lot of overhead required to maintain the tree structure. This fact alone gives this solution an
unacceptable memory footprint. The other problem with this idea is that, in practice,
the damage resolution selection is usually done once, during browser startup. Therefore, the ability to select the damage resolution at runtime brings no benefits while introducing unnecessary overhead.
The evaluation of the above is the following:
Performance
insertion is O(log_M N) where M is the node size ✅
iteration is O(K) where K is a parameter and 0≤K≤N ✅
Memory footprint
O(N) ❌
Damage resolution
low to high ✅
Damage as an R-Tree adaptation with pruning
Considering the problems of the previous idea, the option with pruning seems to address all of them:
the memory footprint can be controlled by specifying at which level of the tree the pruning should happen,
the damage resolution (level of the tree where pruning happens) can be picked on the implementation level (compile time), thus allowing some extra implementation tricks if necessary.
While it’s true that those problems do not exist in this approach, the option with pruning unfortunately brings new problems that need to be considered. As a matter of fact, all the new problems it brings
originate from the fact that each pruning operation leads to a loss of information and hence to tree deterioration over time.
Before actually introducing those new problems, it’s worth understanding more about how insertions work in the R-Tree.
When a rectangle is inserted into the R-Tree, the first step is to find a proper position for the new record (see the ChooseLeaf algorithm from Guttman1984). When the target node is
found, there are two possibilities:
adding the new rectangle to the target node does not cause overflow,
adding the new rectangle to the target node causes overflow.
If no overflow happens, the new rectangle is just added to the target node. However, if overflow happens, i.e. the number of rectangles in the node exceeds the limit, the node splitting algorithm is invoked (see the SplitNode
algorithm from Guttman1984) and the changes are propagated up the tree (see the AdjustTree algorithm from Guttman1984).
Node splitting and tree adjustment are very important steps within insertion, as these algorithms are responsible for shaping and balancing the tree. For example, when all the nodes in the tree are
full and a new rectangle is added, node splitting will effectively be executed for some leaf node and all its ancestors, including the root. This means the tree will grow and, possibly, its structure will change significantly.
Due to the above mechanics of the R-Tree, it can reasonably be asserted that the tree structure becomes better as a function of node splits. With that, the first problem of tree pruning becomes obvious:
pruning on insertion limits the number of node splits (due to smaller node-split cascades) and hence limits the quality of the tree structure. The second problem, also related to node splits, is that
with the information lost due to pruning (as pruning is the same as removing a subtree and inserting its bounding box into the tree), each node split is less effective, as the leaf rectangles themselves
keep getting bigger due to them becoming bounding boxes of bounding boxes (and so on) of the original rectangles.
These problems become more visible in practice when the R-Tree input rectangles tend to be sorted. In general, one of the R-Tree’s problems is that its structure tends to be biased when the input rectangles are sorted.
Although further insertions usually fix the structure of a biased tree, they only do so to some degree, as some tree nodes may never get split again. When pruning happens and the input is sorted (or partially sorted),
fixing the biased tree is much harder and sometimes even impossible. This can be illustrated with an example where a lot of rectangles from the same area are inserted into the tree. With the number of such rectangles
being big enough, a lot of pruning will happen, and hence a lot of rectangles will be lost and replaced by larger bounding boxes. Then, if a series of new insertions starts inserting rectangles from a different area which is
partially close to the original one, the new rectangles may end up being siblings of those large bounding boxes instead of the original rectangles, which could have been clustered within nodes in a much more reasonable way.
Given the above problems, the evaluation of the whole idea of Damage being the adaptation of R-Tree with pruning is the following:
Performance
insertion is O(log_M K) where M is the node size, K is a parameter, and 0&lt;K≤N ✅
iteration is O(K) ✅
Memory footprint
O(K) ✅
Damage resolution
low to medium ⚠️
Although the above evaluation looks reasonable, in practice it’s very hard to pick a proper pruning strategy. When the tree is allowed to be taller, the damage resolution is usually better, but the increased memory footprint,
logarithmic insertions, and increased iteration time combined pose a significant problem. On the other hand, when the tree is shorter, the damage resolution tends to be too low to justify using an R-Tree.
The last, more sophisticated Damage implementation, uses some ideas from R-Tree and forms a very strict, flat structure. In short, the idea is to take some rectangular part of a plane and divide it into cells,
thus forming a grid with C columns and R rows. Given such a division, each cell of the grid is meant to store at most one rectangle that effectively is a bounding box of the rectangles matched to
that cell. The overview of the approach is presented in the image below:
As the above situation is very straightforward, one may wonder what happens if a rectangle spans multiple cells, i.e. how the matching algorithm works in that case.
Before diving into the matching, it’s important to note that, from the algorithmic perspective, matching is critical, as it accounts for the majority of operations when a new rectangle is inserted into the Damage data structure.
This is because once the matched cell is known, the remaining part of insertion is just about taking the bounding box of the existing rectangle stored in the cell and the new rectangle, and thus has
O(1) time complexity.
As for the matching itself, it can be done in various ways:
it can be done using strategies known from R-Tree, such as matching a new rectangle into the cell where the bounding box enlargement would be the smallest etc.,
it can be done by maximizing the overlap between the new rectangle and the given cell,
it can be done by matching the new rectangle’s center (or corner) into the proper cell,
etc.
The above matching strategies fall into 2 categories:
O(C·R) matching algorithms that compare a new rectangle against existing cells while looking for the best match,
O(1) matching algorithms that calculate the target cell using a single formula.
Due to the nature of matching, the O(C·R) strategies eventually lead to smaller bounding boxes stored in the Damage and hence to better damage resolution compared to the
O(1) algorithms. However, as practical experiments show, the difference in damage resolution is not big enough to justify O(C·R)
time complexity over O(1). More specifically, the difference in damage resolution is usually unnoticeable, while the difference between
O(C·R) and O(1) insertion complexity is major, as insertion is the most critical operation of the Damage data structure.
Due to the above, the matching method that has proven the most practical is matching the new rectangle’s center to the proper cell. It has O(1) time complexity,
as it requires just a few arithmetic operations to calculate the center of the incoming rectangle and to match it to the proper cell (see
the implementation in WebKit). An example of such matching is presented in the image below:
The overall evaluation of the grid-based Damage constructed the way described in the above paragraphs is as follows:
Performance
insertion is O(1) ✅
iteration is O(C·R) ✅
Memory footprint
O(C·R) ✅
Damage resolution
low to high (depending on C·R) ✅
Clearly, the fundamentals of the grid-based Damage are strong, but the data structure is heavily dependent on C·R. The good news is that, in practice, even a fairly small grid such as 8×4
(C·R = 32)
yields high damage resolution. This means this Damage implementation is a great alternative to the bounding box Damage, as even with a very small performance and memory footprint overhead,
it yields much better damage resolution.
Moreover, the grid-based Damage implementation gives an opportunity for very handy optimizations that improve memory footprint, performance (iteration), and damage resolution further.
As the grid dimensions are given a priori, one can imagine that, intrinsically, the data structure needs to allocate a fixed-size array of rectangles with C·R entries to store the cell bounding boxes.
One possibility for improvement in such a situation (assuming a small C·R) is to use a vector along with a bitset, so that only non-empty cells are stored in the vector.
The other possibility (again, assuming a small C·R) is to not use the grid-based approach at all as long as the number of rectangles inserted so far does not exceed C·R.
In other words, the data structure can allocate an empty vector of rectangles upon initialization and then just append new rectangles to the vector as long as the insertion does not extend the vector beyond
C·R entries. In such a case, when C·R is e.g. 32, up to 32 rectangles can be stored in their original form. If at some point the data structure detects that it would need to
store 33 rectangles, it switches internally to the grid-based approach, thus always storing at most 32 cell rectangles. Also, note that in such a case, the first improvement (with the bitset) can still be used.
Summing up, both improvements can be combined, and they allow the data structure to have a limited, small memory footprint, good performance, and perfect damage resolution as long as there
are not too many damage rectangles. And if the number of input rectangles exceeds the limit, the data structure can still fall back to the grid-based approach and maintain very good results. In practice, situations
where the input damage rectangles do not exceed C·R (e.g. 32) are very common, and hence the above improvements are very important.
Overall, the grid-based approach with the above improvements has proven to be the best solution for all the embedded devices tried so far, and therefore, such a Damage implementation is a baseline solution used in
WPE and GTK WebKit when UnifyDamagedRegions runtime preference is not enabled — which means it works by default in WPE WebKit.
The former sections demonstrated various approaches to implementing the Damage data structure meant to store damage information. The summary of the results is presented in the table below:
While all the solutions have various pros and cons, the Bounding box and Grid-based Damage implementations are the most lightweight and hence the most useful in generic use cases.
On typical embedded devices, where CPUs are quite powerful compared to GPUs, both of the above solutions are acceptable, so the final choice can be determined based on the actual use case. If the actual web application
often yields clustered damage information, the Bounding box Damage implementation should be preferred. Otherwise (the majority of use cases), the Grid-based Damage implementation will work better.
On desktop-class devices, on the other hand, where CPUs are far less powerful than GPUs, the only acceptable solution is the Bounding box Damage, as it has minimal overhead while still providing
decent damage resolution.
The above are the reasons for the default Damage implementations used by the desktop-oriented GTK WebKit port (Bounding box Damage) and the embedded-device-oriented WPE WebKit port (Grid-based Damage).
When it comes to non-generic situations such as unusual hardware, specific applications, etc., it’s always recommended to do a proper evaluation to determine which solution is the best fit. Also, Damage implementations
other than the two mentioned above should not be ruled out, as in some exotic cases they may give much better results.
Update on what happened in WebKit in the week from August 25 to September 1.
The rewrite of the WebXR support continues, as do improvements
when building for Android, along with smaller fixes in multimedia
and standards compliance.
Cross-Port 🐱
The WebXR implementation has gained
input through OpenXR, including
support for the hand interaction—useful for devices which only support
hand-tracking—and the generic simple profile. This was soon followed by the
addition of support for the Hand
Input module.
Aligned the SVGStyleElement
type and media attributes with HTMLStyleElement's.
Multimedia 🎥
GStreamer-based multimedia support for WebKit, including (but not limited to) playback, capture, WebAudio, WebCodecs, and WebRTC.
Support for FFmpeg GStreamer audio decoders was
re-introduced
because the alternative decoders making use of FDK-AAC might not be available
in some distributions and Flatpak runtimes.
Graphics 🖼️
Usage of fences has been introduced
to control frame submission of rendered WebXR content when using OpenXR. This
approach avoids blocking in the renderer process waiting for frames to be
completed, resulting in slightly increased performance.
New, modern platform API that supersedes usage of libwpe and WPE backends.
Changed WPEPlatform to be built as part of the libWPEWebKit
library. This avoids duplicating some
code in different libraries, brings in a small reduction in used space, and
simplifies installation for packagers. Note that the wpe-platform-2.0 module
is still provided, and applications that consume the WPEPlatform API must still
check and use it.
Adaptation of WPE WebKit targeting the Android operating system.
Support for sharing AHardwareBuffer handles across processes is now
available. This lays out the
foundation to use graphics memory directly across different WebKit subsystems
later on, making some code paths more efficient, and paves the way towards
enabling the WPEPlatform API on Android.