Planet Igalia Chromium

January 13, 2026

Miyoung Shin

Our Journey to Support Extensions for Embedders

A History of Extensions for Embedders — and Where We’re Heading

Chromium’s Extensions platform has long been a foundational part of the desktop browsing experience. Major Chromium-based browsers—such as Chrome and Microsoft Edge—ship with full support for the Chrome Extensions ecosystem, and user expectations around extension availability and compatibility continue to grow.

In contrast, some Chromium embedders—for instance, products built directly on the //content API without the full //chrome stack—do not naturally have access to Extensions. Similarly, the traditional Chrome for Android app does not support Extensions. While some embedders have attempted to enable limited Extensions functionality by pulling in selected pieces of the //chrome layer, this approach is heavyweight, difficult to maintain, and fundamentally incapable of delivering full feature parity.

At Igalia we have been working toward the long-term goal of making Extensions usable in lightweight, //content-based products without requiring embedders to depend on //chrome. This post outlines the background of that effort, the phases of work so far, the architectural challenges involved, and where the project is headed.

Note: Extensions support on ChromeOS (which has announced plans to incorporate more of the Android build stack) is not the same thing as Extensions support in the Chrome app for Android. The two codepaths and platform constraints differ significantly. While the traditional Chrome app on Android phones and tablets still does not officially support extensions, recent beta builds of desktop-class Chrome on Android have begun to close this gap by enabling native extension installation and execution.

Tracking bug: https://issues.chromium.org/issues/356905053

Extensions Architecture — Layered View

The diagrams below illustrate the architectural evolution of Extensions support for Chromium embedders.

Traditional Chromium Browser Stack

At the top of the stack, Chromium-based browsers such as Chrome and Edge rely on the full //chrome layer. Historically, the Extensions platform has lived deeply inside this layer, tightly coupled with Chrome-specific concepts such as Profile, browser windows, UI surfaces, and Chrome services.

+-----------------------+
|      //chrome         |
|  (UI, Browser, etc.)  |
+-----------------------+
|     //extensions      |
+-----------------------+
|      //content        |
+-----------------------+

This architecture works well for full browsers, but it is problematic for embedders. Products built directly on //content cannot reuse Extensions without pulling in a large portion of //chrome, leading to high integration and maintenance costs.


Phase 1 — Extensions on Android (Downstream Work)

In 2023, a downstream project at Igalia required extension support on a Chromium-based Android application. The scope was limited—we only needed to support a small number of specific extensions—so we implemented:

  • basic installation logic,
  • manifest handling,
  • extension launch/execution flows, and
  • a minimal subset of Extensions APIs that those extensions depended on.

This work demonstrated that Extensions can function in an Android environment. However, it also highlighted a major problem: modifying the Android //chrome codepath is expensive. Rebasing costs are high, upstream alignment is difficult, and the resulting solution is tightly coupled to Chrome-specific abstractions. The approach was viable only because the downstream requirements were narrow and controlled.

I shared this experience in a BlinkOn lightning talk: “Extensions on Android”.


Phase 2 — Extensions for Embedders
(//content + //extensions + //components/extensions)

Following Phase 1, we began asking a broader question:

Can we provide a reusable, upstream-friendly Extensions implementation that works for embedders without pulling in the //chrome layer?

Motivation

Many embedders aim to remain as lightweight as possible. Requiring //chrome introduces unnecessary complexity, long build times, and ongoing maintenance costs. Our hypothesis was that large portions of the Extensions stack could be decoupled from Chrome and reused directly by content-based products.

One early idea was to componentize the Extensions code by migrating substantial parts of //chrome/*/extensions into //components/extensions.

+-------------------------+
| //components/extensions |
+-------------------------+
|      //extensions       |
+-------------------------+
|       //content         |
+-------------------------+

Proof of concept: Wolvic

We tested this idea through Wolvic, a VR browser used in several commercial solutions. Wolvic has two implementations:

  • a Gecko-based version, and
  • a Chromium-based version built directly on the //content API.


Extensions were already supported in Wolvic-Gecko, but not in Wolvic-Chromium. To close that gap, we migrated core pieces of the Extensions machinery into //components/extensions and enabled extension loading and execution in a content-only environment.

By early 2025, this work successfully demonstrated that Extensions could run without the //chrome layer.

Demo video:
https://youtube.com/shorts/JmQnpC-lxR8?si=Xf0uB6q__j4pmlSj

Design document:
https://docs.google.com/document/d/1I5p4B0XpypR7inPqq1ZnGMP4k-IGeOpKGvCFS0EDWHk/edit?usp=sharing

However, this work lived entirely in the Wolvic repository, which is a fork of Chromium. While open source, this meant that other embedders could not easily benefit without additional rebasing and integration work.

This raised an important question:

Why not do this work directly in the Chromium upstream so that all embedders can benefit?


Phase 3 — Extensions for Embedders
(//content + //extensions)

Following discussions with the Extensions owner (rdevlin.cronin@chromium.org), we refined the approach further.

Rather than migrating functionality into //components, the preferred long-term direction is to move Extensions logic directly into the //extensions layer wherever possible.

+-----------------------+
|      Embedder UI      | (minimal interfaces)
+-----------------------+
|      //extensions     |
+-----------------------+
|       //content       |
+-----------------------+

This approach offers several advantages:

  • clearer layering and ownership,
  • fewer architectural violations,
  • reduced duplication between Chrome and embedders, and
  • a cleaner API surface for integration.

We aligned on this direction and began upstream work accordingly.

Tracking bug: 🔗 https://issues.chromium.org/issues/358567092

Our goals for Content Shell + //extensions are:

  1. Embedders should only implement a small set of interfaces, primarily for UI surfaces (install prompts, permission dialogs) and optional behaviors.
  2. Full WebExtensions API support
     (W3C specification: https://w3c.github.io/webextensions/specification/)
  3. Chrome Web Store compatibility
    Embedders should be able to install and run extensions directly from the Chrome Web Store.

Short-term Goal: Installation Support

Our immediate milestone is to make installation work entirely using //content + //extensions.

Current progress:

  • ✅ .zip installation support already lives in //extensions
  • 🚧 Migrating unpacked-directory installation from //chrome to //extensions
    (including replacing Profile with BrowserContext abstractions)
  • 🔜 Moving .crx installation code from //chrome to //extensions

As part of this effort, we are introducing clean, well-defined interfaces for install prompts and permission confirmations (see the sketch after this list):

  • Chrome will continue to provide its full-featured UI
  • Embedders can implement minimal, custom UI as needed
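
To make the division of responsibilities concrete, here is a minimal sketch of what such an embedder-side interface could look like. The names here (ExtensionInstallUiDelegate, ConfirmInstall) are hypothetical illustrations, not the actual interfaces being landed upstream; only base::OnceCallback is real Chromium machinery.

  // Hypothetical sketch of an embedder-facing install-prompt interface;
  // the class and method names are illustrative, not the real upstream API.

  #include <string>
  #include <vector>

  #include "base/functional/callback.h"

  namespace extensions {

  class ExtensionInstallUiDelegate {
   public:
    virtual ~ExtensionInstallUiDelegate() = default;

    // Asks the embedder to confirm an install. |permission_warnings| holds
    // the human-readable warnings derived from the manifest; |callback|
    // receives the user's decision. Chrome would show its full prompt here;
    // a minimal embedder might show a plain dialog instead.
    virtual void ConfirmInstall(
        const std::string& extension_name,
        const std::vector<std::string>& permission_warnings,
        base::OnceCallback<void(bool allowed)> callback) = 0;
  };

  }  // namespace extensions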

What Comes Next

Once installation is fully supported, we will move on to:

  • Chrome Web Store integration flows
  • Core WebExtensions APIs required by commonly used extensions

Main Engineering Challenge — Detaching from the Chrome Layer

The hardest part of this migration is not moving files—it is breaking long-standing dependencies on the //chrome layer.

The Extensions codebase is large and historically coupled to Chrome-only concepts such as:

  • Profile
  • Browser
  • Chrome-specific WebContents delegates
  • Chrome UI surfaces
  • Chrome services (sync, signin, prefs)

Each migration requires careful refactoring, layering reviews, and close collaboration with component owners. While the process is slow, it has already resulted in meaningful architectural improvements.
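
As a concrete example of the decoupling pattern, consider the Profile dependency. Profile lives in //chrome but derives from content::BrowserContext, so the recurring fix is to narrow signatures to the //content type. The function below is a hypothetical illustration of the pattern, not a specific upstream API:

  // Before: coupled to //chrome, unusable by content-only embedders.
  //   void LoadUnpackedExtension(Profile* profile, const base::FilePath& path);

  // After: depends only on //content. Chrome call sites keep working,
  // because a Profile is-a content::BrowserContext.
  #include "base/files/file_path.h"
  #include "content/public/browser/browser_context.h"

  void LoadUnpackedExtension(content::BrowserContext* context,
                             const base::FilePath& path);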


What’s Next?

In the next post, we’ll demonstrate:

A functioning version of Extensions running on top of //content + //extensions only — capable of installing and running extensions.

From the Igalia side, we continue working on ways to make integrating Chromium on other platforms easier. This will mark the first end-to-end, //chrome-free execution path for extensions in content-based browsers.

Stay tuned!

by mshin at January 13, 2026 02:00 AM

January 05, 2026

Andy Wingo

pre-tenuring in v8

Hey hey happy new year, friends! Today I was going over some V8 code that touched pre-tenuring: allocating objects directly in the old space instead of the nursery. I knew the theory here but I had never looked into the mechanism. Today’s post is a quick overview of how it’s done.

allocation sites

In a JavaScript program, there are a number of source code locations that allocate. Statistically speaking, any given allocation is likely to be short-lived, so generational garbage collection partitions freshly-allocated objects into their own space. In that way, when the system runs out of memory, it can preferentially reclaim memory from the nursery space instead of groveling over the whole heap.

But you know what they say: there are lies, damn lies, and statistics. Some programs are outliers, allocating objects in such a way that they don’t die young, or at least not young enough. In those cases, allocating into the nursery is just overhead, because minor collection won’t reclaim much memory (because too many objects survive), and because of useless copying as the object is scavenged within the nursery or promoted into the old generation. It would have been better to eagerly tenure such allocations into the old generation in the first place. (The more I think about it, the funnier pre-tenuring is as a term; what if some PhD programs could pre-allocate their graduates into named chairs? Is going straight to industry the equivalent of dying young? Does collaborating on a paper with a full professor imply a write barrier? But I digress.)

Among the set of allocation sites in a program, a subset should pre-tenure their objects. How can we know which ones? There is a literature of static techniques, but this is JavaScript, so the answer in general is dynamic: we should observe how many objects survive collection, organized by allocation site, then optimize to assume that the future will be like the past, falling back to a general path if the assumptions fail to hold.

my runtime doth object

The high-level overview of how V8 implements pre-tenuring is based on per-program-point AllocationSite objects, and per-allocation AllocationMemento objects that point back to their corresponding AllocationSite. Initially, V8 doesn’t know what program points would profit from pre-tenuring, and instead allocates everything in the nursery. Here’s a quick picture:

[Diagram: a linear allocation buffer containing objects interleaved with allocation mementos]

Here we show that there are two allocation sites, Site1 and Site2. V8 is currently allocating into a linear allocation buffer (LAB) in the nursery, and has allocated three objects. After each of these objects is an AllocationMemento; in this example, M1 and M3 are AllocationMemento objects that point to Site1 and M2 points to Site2. When V8 allocates an object, it increments the “created” counter on the corresponding AllocationSite (if available; it’s possible an allocation comes from C++ or something where we don’t have an AllocationSite).

When the free space in the LAB is too small for an allocation, V8 gets another LAB, or collects if there are no more LABs in the nursery. When V8 does a minor collection, as the scavenger visits objects, it will look to see if the object is followed by an AllocationMemento. If so, it dereferences the memento to find the AllocationSite, then increments its “found” counter, and adds the AllocationSite to a set. Once an AllocationSite has had 100 allocations, it is enqueued for a pre-tenuring decision; sites with 85% survival get marked for pre-tenuring.
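
Here is a simplified sketch of that bookkeeping in C++ (hypothetical names and structure; the 100-allocation and 85%-survival thresholds are the ones described above, and resetting the counters afterwards is my own simplification):

  #include <cstdint>

  struct AllocationSite {
    uint32_t created = 0;    // bumped when code allocates at this site
    uint32_t found = 0;      // bumped when the scavenger finds a survivor's memento
    bool pretenure = false;  // once set, the embedding code gets deoptimized
  };

  constexpr uint32_t kDecisionThreshold = 100;  // allocations before deciding
  constexpr double kSurvivalRatio = 0.85;       // found/created needed to pre-tenure

  // Run for each site the scavenger saw via mementos during a minor collection.
  void MaybeDecidePretenure(AllocationSite& site) {
    if (site.created < kDecisionThreshold) return;
    double survival = static_cast<double>(site.found) / site.created;
    if (survival >= kSurvivalRatio) {
      site.pretenure = true;  // reoptimized code will allocate into old space
    }
    site.created = 0;  // start a fresh sampling window (a simplification)
    site.found = 0;
  }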

If an allocation site is marked as needing pre-tenuring, the code in which it is embedded will get de-optimized, and then next time it is optimized, the code generator arranges to allocate into the old generation instead of the default nursery.

Finally, if a major collection collects more than 90% of the old generation, V8 resets all pre-tenured allocation sites, under the assumption that pre-tenuring was actually premature.

tenure for me but not for thee

What kinds of allocation sites are eligible for pre-tenuring? Sometimes it depends on object kind; wasm memories, for example, are almost always long-lived, so they are always pre-tenured. Sometimes it depends on who is doing the allocation; allocations from the bootstrapper, literals allocated by the parser, and many allocations from C++ go straight to the old generation. And sometimes the compiler has enough information to determine that pre-tenuring might be a good idea, as when it generates a store of a fresh object to a field in a known-old object.

But otherwise I thought that the whole AllocationSite mechanism would apply generally, to any object creation. It turns out, nope: it seems to only apply to object literals, array literals, and new Array. Weird, right? I guess it makes sense in that these are the ways to create objects that also create the field values at creation time, allowing the whole block to be allocated to the same space. If instead you make a pre-tenured object and then initialize it via a sequence of stores, this would likely create old-to-new edges, preventing the new objects from dying young while incurring the penalty of copying and write barriers. Still, I think there is probably some juice to squeeze here for pre-tenuring of class-style allocations, at least in the optimizing compiler or in short inline caches.

I suspect this state of affairs is somewhat historical, as the AllocationSite mechanism seems to have originated with typed array storage strategies and V8’s “boilerplate” object literal allocators; both of these predate per-AllocationSite pre-tenuring decisions.

fin

Well, that’s adaptive pre-tenuring in V8! I find the “just stick a memento after the object” approach pleasantly simple, and if you are only bumping creation counters from baseline compilation tiers, it likely amortizes out to a win. But does the restricted application to literals point to a fundamental constraint, or is it just an accident? If you have any insight, let me know :) Until then, happy hacking!

by Andy Wingo at January 05, 2026 03:38 PM

December 02, 2025

Manuel Rego

You can now easily customize find-in-page with the new ::search-text pseudo-element, which is shipping in Chromium 144.0.7547. 🚀

Example CSS for customizing find-in-page:

  :root::search-text { background: yellow; }
  :root::search-text:current { color: white; background: olive; text-decoration: underline; }
  aside::search-text { background: magenta; }
  aside::search-text:current { background: darkmagenta; text-decoration: underline; }

Find more details on the blog post by Stephen Chenney. Thanks to Bloomberg for sponsoring this work.

December 02, 2025 12:00 AM

November 13, 2025

Andy Wingo

the last couple years in v8's garbage collector

Let’s talk about memory management! Following up on my article about 5 years of developments in V8’s garbage collector, today I’d like to bring that up to date with what went down in V8’s GC over the last couple years.

methodololology

I selected all of the commits to src/heap since my previous roundup. There were 1600 of them, including reverts and relands. I read all of the commit logs, some of the changes, some of the linked bugs, and any design document I could get my hands on. From what I can tell, there have been about 4 FTE from Google over this period, and the commit rate is fairly constant. There are very occasional patches from Igalia, Cloudflare, Intel, and Red Hat, but it’s mostly a Google affair.

Then, by the very rigorous process of, um, just writing things down and thinking about it, I see three big stories for V8’s GC over this time, and I’m going to give them to you with some made-up numbers for how much of the effort was spent on them. Firstly, the effort to improve memory safety via the sandbox: this is around 20% of the time. Secondly, the Oilpan odyssey: maybe 40%. Third, preparation for multiple JavaScript and WebAssembly mutator threads: 20%. Then there are a number of lesser side quests: heuristics wrangling (10%!!!!), and a long list of miscellanea. Let’s take a deeper look at each of these in turn.

the sandbox

There was a nice blog post in June last year summarizing the sandbox effort: basically, the goal is to prevent user-controlled writes from corrupting memory outside the JavaScript heap. We start from the assumption that the user is somehow able to obtain a write-anywhere primitive, and we work to mitigate the effect of such writes. The most fundamental way is to reduce the range of addressable memory, notably by encoding pointers as 32-bit offsets and then ensuring that no host memory is within the addressable virtual memory that an attacker can write. The sandbox also uses some 40-bit offsets for references to larger objects, with similar guarantees. (Yes, a sandbox really does reserve a terabyte of virtual memory).
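
The core trick is easy to sketch in a few lines of generic C++ (an illustration of the idea, not V8’s actual code): if every on-heap reference is a 32-bit offset from a cage base rather than a raw pointer, then even a fully attacker-controlled value can only name memory inside the reserved region.

  #include <cstdint>

  // Generic sketch of a pointer cage; not V8's implementation.
  class Cage {
   public:
    explicit Cage(uintptr_t base) : base_(base) {}

    // Even a corrupted 32-bit offset lands inside [base_, base_ + 4 GiB),
    // so no host object outside the cage is addressable through it.
    uintptr_t Decompress(uint32_t offset) const { return base_ + offset; }

   private:
    const uintptr_t base_;
  };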

But there are many, many details. Access to external objects is intermediated via type-checked external pointer tables. Some objects that should never be directly referenced by user code go in a separate “trusted space”, which is outside the sandbox. Then you have read-only spaces, used to allocate data that might be shared between different isolates, you might want multiple cages, there are “shared” variants of the other spaces, for use in shared-memory multi-threading, executable code spaces with embedded object references, and so on and so on. Tweaking, elaborating, and maintaining all of these details has taken a lot of V8 GC developer time.

I think it has paid off, though, because the new development is that V8 has managed to turn on hardware memory protection for the sandbox: sandboxed code is prevented by the hardware from writing memory outside the sandbox.

Leaning into the “attacker can write anything in their address space” threat model has led to some funny patches. For example, sometimes code needs to check flags about the page that an object is on, as part of a write barrier. So some GC-managed metadata needs to be in the sandbox. However, the garbage collector itself, which is outside the sandbox, can’t trust that the metadata is valid. We end up having two copies of state in some cases: in the sandbox, for use by sandboxed code, and outside, for use by the collector.

The best and most amusing instance of this phenomenon is related to integers. Google’s style guide recommends signed integers by default, so you end up with on-heap data structures with int32_t len and such. But if an attacker overwrites a length with a negative number, there are a couple funny things that can happen. The first is a sign-extending conversion to size_t by run-time code, which can lead to sandbox escapes. The other is mistakenly concluding that an object is small, because its length is less than a limit, because it is unexpectedly negative. Good times!
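
Both failure modes are easy to demonstrate in a few lines of generic C++ (an illustration only, not V8 code):

  #include <cstddef>
  #include <cstdint>
  #include <cstring>

  void CopyContents(char* dst, const char* src, int32_t len) {
    // Failure mode 1: if an attacker set len to -1, the conversion to
    // size_t sign-extends to SIZE_MAX and memcpy writes far outside
    // the object.
    std::memcpy(dst, src, static_cast<size_t>(len));
  }

  bool TakesSmallObjectPath(int32_t len) {
    // Failure mode 2: -5 < 64 is true, so a corrupted, negative "length"
    // mistakenly takes the small-object path.
    return len < 64;
  }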

oilpan

It took 10 years for Odysseus to get back from Troy, which is about as long as it has taken for conservative stack scanning to make it from Oilpan into V8 proper. Basically, Oilpan is garbage collection for C++ as used in Blink and Chromium. Sometimes it runs when the stack is empty; then it can be precise. But sometimes it runs when there might be references to GC-managed objects on the stack; in that case it runs conservatively.

Last time I described how V8 would like to add support for generational garbage collection to Oilpan, but that for that, you’d need a way to promote objects to the old generation that is compatible with the ambiguous references visited by conservative stack scanning. I thought V8 had a chance at success with their new mark-sweep nursery, but that seems to have turned out to be a lose relative to the copying nursery. They even tried sticky mark-bit generational collection, but it didn’t work out. Oh well; one good thing about Google is that they seem willing to try projects that have uncertain payoff, though I hope that the hackers involved came through their OKR reviews with their mental health intact.

Instead, V8 added support for pinning to the Scavenger copying nursery implementation. If a page has incoming ambiguous edges, it will be placed in a kind of quarantine area for a while. I am not sure what the difference is between a quarantined page, which logically belongs to the nursery, and a pinned page from the mark-compact old-space; they seem to require similar treatment. In any case, we seem to have settled into a design that was mostly the same as before, but in which any given page can opt out of evacuation-based collection.

What do we get out of all of this? Well, not only can we get generational collection for Oilpan, but also we unlock cheaper, less bug-prone “direct handles” in V8 itself.

The funny thing is that I don’t think any of this is shipping yet; or, if it is, it’s only in a Finch trial to a minority of users or something. I am looking forward with interest to seeing a post from upstream V8 folks; whole doctoral theses have been written on this topic, and it would be a delight to see some actual numbers.

shared-memory multi-threading

JavaScript implementations have had the luxury of single-threadedness: with just one mutator, garbage collection is a lot simpler. But this is ending. I don’t know what the state of shared-memory multi-threading is in JS, but in WebAssembly it seems to be moving apace, and Wasm uses the JS GC. Maybe I am overstating the effort here—probably it doesn’t come to 20%—but wiring this up has been a whole thing.

I will mention just one patch here that I found to be funny. So with pointer compression, an object’s fields are mostly 32-bit words, with the exception of 64-bit doubles, so we can reduce the alignment on most objects to 4 bytes. V8 has had a bug open forever about alignment of double-holding objects that it mostly ignores via unaligned loads.

Thing is, if you have an object visible to multiple threads, and that object might have a 64-bit field, then the field should be 64-bit aligned to prevent tearing during atomic access, which usually means the object should be 64-bit aligned. That is now the case for Wasm structs and arrays in the shared space.
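
A generic sketch of the hazard (plain C++ for illustration; V8 lays out its objects itself rather than via C++ structs): with 4-byte object alignment, a 64-bit field can start on a non-8-byte boundary, and a 64-bit access that straddles such a boundary can tear.

  #include <cstdint>

  #pragma pack(push, 4)
  struct PackedObject {  // 4-byte-aligned layout, as under pointer compression
    uint32_t map;
    uint64_t payload;    // may land on a 4-byte boundary: tearing risk
  };
  #pragma pack(pop)

  struct alignas(8) SharedObject {  // shared-space layout
    uint32_t map;
    uint32_t padding;    // explicit, so payload sits on an 8-byte boundary
    uint64_t payload;    // 8-byte aligned: atomic access cannot tear
  };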

side quests

Right, we’ve covered what to me are the main stories of V8’s GC over the past couple years. But let me mention a few funny side quests that I saw.

the heuristics two-step

This one I find to be hilariousad. Tragicomical. Anyway I am amused. So any real GC has a bunch of heuristics: when to promote an object or a page, when to kick off incremental marking, how to use background threads, when to grow the heap, how to choose whether to make a minor or major collection, when to aggressively reduce memory, how much virtual address space can you reasonably reserve, what to do on hard out-of-memory situations, how to account for off-heap mallocated memory, how to compute whether concurrent marking is going to finish in time or if you need to pause... and V8 needs to do this all in all its many configurations, with pointer compression off or on, on desktop, high-end Android, low-end Android, iOS where everything is weird, something called Starboard which is apparently part of Cobalt which is apparently a whole new platform that Youtube uses to show videos on set-top boxes, on machines with different memory models and operating systems with different interfaces, and on and on and on. Simply tuning the system appears to involve a dose of science, a dose of flailing around and trying things, and a whole cauldron of witchcraft. There appears to be one person whose full-time job it is to implement and monitor metrics on V8 memory performance and implement appropriate tweaks. Good grief!

mutex mayhem

Toon Verwaest noticed that V8 was exhibiting many more context switches on MacOS than Safari, and identified V8’s use of platform mutexes as the problem. So he rewrote them to use os_unfair_lock on MacOS. Then implemented adaptive locking on all platforms. Then... removed it all and switched to abseil.

Personally, I am delighted to see this patch series, I wouldn’t have thought that there was juice to squeeze in V8’s use of locking. It gives me hope that I will find a place to do the same in one of my projects :)

ta-ta, third-party heap

It used to be that MMTk was trying to get a number of production language virtual machines to support abstract APIs so that MMTk could slot in a garbage collector implementation. Though this seems to work with OpenJDK, with V8 I think the churn rate and laser-like focus on the browser use-case make an interstitial API abstraction a lose. V8 removed it a little more than a year ago.

fin

So what’s next? I don’t know; it’s been a while since I have been to Munich to drink from the source. That said, shared-memory multithreading and wasm effect handlers will extend the memory management hacker’s full employment act indefinitely, not to mention actually landing and shipping conservative stack scanning. There is a lot to be done in non-browser V8 environments, whether in Node or on the edge, but it is admittedly harder to read the future than the past.

In any case, it was fun taking this look back, and perhaps I will have the opportunity to do this again in a few years. Until then, happy hacking!

by Andy Wingo at November 13, 2025 03:21 PM