Planet Igalia

June 14, 2019

Michael Catanzaro

An OpenJPEG Surprise

My previous blog post seems to have resolved most concerns about my requests for Ubuntu stable release updates, but I again received rather a lot of criticism for the choice to make WebKit depend on OpenJPEG, even though my previous post explained clearly why there are not any good alternatives.

I was surprised to receive a pointer to ffmpeg, which has its own JPEG 2000 decoder that I did not know about. However, we can immediately dismiss this option due to legal problems with depending on ffmpeg. I also received a pointer to a resurrected libjasper, which is interesting, but since libjasper was removed from Ubuntu, its status is currently no better than OpenJPEG's.

But there is some good news! I have looked through Ubuntu’s security review of the OpenJPEG code and found some surprising results. Half the reported issues affect the library’s companion tools, not the library itself. And the other half of the issues affect the libmj2 library, a component of OpenJPEG that is not built by Ubuntu and not used by WebKit. So while these are real security issues that raise concerns about the quality of the OpenJPEG codebase, none of them actually affect OpenJPEG as used by WebKit. Yay!

The remaining concern is that huge input sizes might cause problems within the library that we don’t yet know about. We don’t know because OpenJPEG’s fuzzer discards huge images instead of testing them. Ubuntu’s security team thinks there’s a good chance that fixing the fuzzer could uncover currently-unknown multiplication overflow issues, for instance, a class of vulnerability that OpenJPEG has clearly had trouble with in the past. It would be good to see improvement on this front. I don’t think this qualifies as a security vulnerability, but it is certainly a security problem that would facilitate discovering currently-unknown vulnerabilities if fixed.

Still, on the whole, the situation is not anywhere near as bad as I’d thought. Let’s hope OpenJPEG can be included in Ubuntu main sooner rather than later!

by Michael Catanzaro at June 14, 2019 02:43 PM

June 10, 2019

Javier Fernández

A new terminal-style line breaking with CSS Text

The CSS Text 3 specification defines a module for text manipulation and covers, among a few other features, the line breaking behavior of the browser, including white space handling. I've been working lately on some new features and bug fixing for this specification, and I'd like to introduce in this post the latest one we made available to Web Platform users. This is yet another contribution that came out of the collaboration between Igalia and Bloomberg, which has been held for several years now and has produced many important new features for the Web, like CSS Grid Layout.

The feature

I guess everybody knows the white-space CSS property, which allows web authors to control two main aspects of the rendering of a text line: collapsing and wrapping. A new value break-spaces has been added to the ones available for this property, which allows web authors to emulate a terminal-like line breaking behavior. This new value operates basically like pre-wrap, but with two key differences:

  • any sequence of preserved white space characters takes up space, even at the end of the line.
  • a preserved white space sequence can be wrapped at any character, moving the rest of the sequence, intact, to the line below.
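These two differences can be illustrated with a minimal side-by-side comparison. The markup and class names below are hypothetical (they are not taken from the demos in this post); they just contrast pre-wrap with break-spaces on the same content:

```html
<!-- Hypothetical comparison, not taken from the demos in this post.
     Both boxes contain the same text: "XX", three spaces, "XX". -->
<style>
  .sample       { font: 20px/1 monospace; width: 5ch; border: 1px solid; }
  .pre-wrap     { white-space: pre-wrap; }     /* trailing spaces hang at the line end */
  .break-spaces { white-space: break-spaces; } /* trailing spaces take up space and can wrap */
</style>
<div class="sample pre-wrap">XX   XX</div>
<div class="sample break-spaces">XX   XX</div>
```

With pre-wrap the spaces at the end of the first line hang outside the box; with break-spaces they occupy space and the sequence can wrap to the next line.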

What does this new behavior actually mean? I'll try to explain it with a few examples. Let's start with a simple but quite illustrative demo that tries to emulate a meteorology monitoring system showing relevant changes over time, where the gaps between subsequent changes must be preserved:

 #terminal {
  font: 20px/1 monospace;
  width: 340px;
  height: 5ch;
  background: black;
  color: green;
  overflow: hidden;
  white-space: break-spaces;
  word-break: break-all;
 }


Another interesting use case for this feature could be a logging system that should preserve the text formatting of the logged information across different window sizes. The following demo tries to describe such a scenario:

body { width: 1300px; }
#logging {
  font: 20px/1 monospace;
  background: black;
  color: green;

  animation: resize 7s infinite alternate;

  white-space: break-spaces;
  word-break: break-all;
}
@keyframes resize {
  0% { width: 25%; }
  100% { width: 100%; }
}

Hash: 5a2a3d23f88174970ed8
Version: webpack 3.12.0
Time: 22209ms
Asset                                        Size     Chunks  Chunk Names
pages/widgets/index.51838abe9967a9e0b5ff.js  1.17 kB  10      [emitted]        pages/widgets/index
img/icomoon.7f1da5a.svg                      5.38 kB          [emitted]
fonts/icomoon.2d429d6.ttf                    2.41 kB          [emitted]
img/fontawesome-webfont.912ec66.svg          444 kB           [emitted] [big]
fonts/fontawesome-webfont.b06871f.ttf        166 kB           [emitted]
img/mobile.8891a7c.png                       39.6 kB          [emitted]
img/play_button.6b15900.png                  14.8 kB          [emitted]
img/keyword-back.f95e10a.jpg                 43.4 kB          [emitted]
. . .

Use cases

In the demos shown above there are several cases that I think are worth analyzing in detail.

A breaking opportunity exists after any white space character

The main purpose of this feature is to preserve the length of white space sequences, even when they have to be wrapped onto multiple lines. The following example tries to describe this basic use case:

.container {
  font: 20px/1 monospace;
  width: 5ch;
  white-space: break-spaces;
  border: 1px solid;
}


The example above shows how a white space sequence with a length of 15 characters is preserved and wrapped across 3 different lines.
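The markup for this demo isn't included in the post; assuming the .container class above, a sketch that would reproduce it could look like this (the exact content is an assumption on my part):

```html
<!-- Hypothetical markup: "XX", a 15-space sequence, then "XX".
     With width: 5ch and white-space: break-spaces, the preserved
     sequence takes up space and wraps instead of hanging or collapsing. -->
<div class="container">XX               XX</div>
```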

Single leading white space

Before the addition of the break-spaces value, this scenario was only possible at the beginning of the line. In any other case, the trailing white spaces were either collapsed or hung, so the next line couldn't start with a sequence of white spaces. Let's consider the following example:

.container {
  font: 20px/1 monospace;
  width: 3ch;
  white-space: break-spaces;
  border: 1px solid;
}


Like when using pre-wrap, the single leading space is preserved. Since break-spaces allows breaking opportunities after any white space character, we break after the first leading white space (” |XX XX”). The second line can be broken after the first preserved white space, creating another leading white space in the next line (” |XX | XX”).
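The markup for this example isn't shown in the post; judging from the renderings quoted above, the content would be a leading space, "XX", two spaces, and "XX". A hypothetical sketch, reusing the .container class:

```html
<!-- Hypothetical markup: leading space, "XX", two spaces, "XX".
     With width: 3ch and white-space: break-spaces, the lines break
     as described above: " " / "XX " / " XX". -->
<div class="container"> XX  XX</div>
```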

However, let's now consider a case without such a single leading white space.

.container {
  font: 20px/1 monospace;
  width: 3ch;
  white-space: break-spaces;
  border: 1px solid;
}


Again, it's not allowed to break before the first space, but in this case there isn't any previous breaking opportunity, so the first space after the word XX should overflow (“XXX | XX”); the next white space character will be moved down to the next line as a preserved leading space.

Breaking before the first white space

I mentioned before that the spec states clearly that the break-spaces feature allows breaking opportunities only after white space characters. However, it's possible to break the line just before the first white space character after a word if the feature is used in combination with other line breaking CSS properties, like word-break or overflow-wrap.

.container {
  font: 20px/1 monospace;
  width: 4ch;
  white-space: break-spaces;
  overflow-wrap: break-word;
  border: 1px solid;
}


The two white spaces between the words are preserved thanks to break-spaces, but the first space after the XXXX word would overflow. Hence, overflow-wrap: break-word is applied to prevent the line from overflowing, introducing an additional breaking opportunity just before the first space after the word. This behavior causes the trailing spaces to be moved down as a leading white space sequence on the next line.
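The markup isn't included here either; based on the behavior described, the content would be "XXXX", two spaces, and "XX". A hypothetical sketch:

```html
<!-- Hypothetical markup: "XXXX", two spaces, "XX". With width: 4ch and
     overflow-wrap: break-word, the break happens just before the first
     space, so the two preserved spaces become leading spaces on the
     next line. -->
<div class="container">XXXX  XX</div>
```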

We would get the same rendering if word-break: break-all were used instead of overflow-wrap (or even in combination with it), but this is actually incorrect behavior, with corresponding bug reports in WebKit (197277) and Blink (952254), according to the discussion in the CSS WG (see issue #3701).

Consider previous breaking opportunities

In the previous example I described a combination of line breaking features that allows breaking before the first space after a word. However, this should be avoided if there are previous breaking opportunities. The following example shows one of the scenarios where this may happen:

.container {
  font: 20px/1 monospace;
  width: 4ch;
  white-space: break-spaces;
  overflow-wrap: break-word;
  border: 1px solid;
}


In this case, we could break after the second word (“XX X| X”), since overflow-wrap: break-word would allow us to do that in order to prevent the line from overflowing due to the following white space. However, white-space: break-spaces only allows breaking opportunities after a space character; hence, we shouldn't break before a space if there are valid previous opportunities, like, in this case, the space after the first word (“XX |X X”).

This preference for previous breaking opportunities over breaking the word, honoring the overflow-wrap property, is also part of the behavior defined for white-space: pre-wrap; although in that case there is no need to deal with breaking before the first space after a word, since trailing spaces just hang. The following example uses just pre-wrap to show how previous opportunities are selected to avoid overflowing or breaking a word (unless explicitly requested with the word-break property).

.container {
  font: 20px/1 monospace;
  width: 2ch;
  white-space: pre-wrap;
  border: 1px solid;
}


In this case, break-all enables breaking opportunities that are not available otherwise (we can break a word at any letter), which can be used to prevent the line from overflowing; hence, the overflow-wrap property doesn't have any effect. The existence of previous opportunities is not considered now, since break-all mandates producing the longest line possible.

This new white-space: break-spaces feature implies a different behavior when used in combination with break-all. Even though the preference for previous opportunities should be ignored when using word-break: break-all, this may not be the case for the scenario of breaking before the first space after a word. Let's consider the same example, but now using word-break: break-all:

.container {
  font: 20px/1 monospace;
  width: 4ch;
  white-space: break-spaces;
  overflow-wrap: break-word;
  word-break: break-all;
  border: 1px solid;
}


The example above shows that word-break: break-all doesn't produce any effect here. It's debatable whether break-all should force the selection of the breaking opportunity that produces the longest line, as happened in the pre-wrap case described before. However, the spec states clearly that break-spaces should only allow breaking opportunities after white space characters. Hence, I considered that breaking before the first space should only happen when there is no other choice.
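For reference, the content driving this example appears to be "XX X X", as in the previous section; the markup below is a hypothetical sketch:

```html
<!-- Hypothetical markup: "XX X X". With width: 4ch, break-spaces only
     allows breaks after spaces, so adding word-break: break-all changes
     nothing here: the line still breaks after the first space ("XX |X X"). -->
<div class="container">XX X X</div>
```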

As a matter of fact, when break-all is specified we shouldn't consider only previous white spaces in order to avoid breaking before the first white space after a word; the break-all feature creates additional breaking opportunities, since it allows breaking the word at any character. Since break-all is intended to produce the longest line possible, such a new breaking opportunity should be chosen over any previous white space. See the following test case to get a clearer idea of this scenario:

.container {
  font: 20px/1 monospace;
  width: 4ch;
  white-space: break-spaces;
  overflow-wrap: break-word;
  word-break: break-all;
  border: 1px solid;
}


Bear in mind that the expected rendering in the above example may not be obtained if your browser's version is still affected by bugs 197277 (Safari/WebKit) and 952254 (Chrome/Blink). In that case, the word is broken despite the opportunity in the previous white space, also avoiding a break after the ‘XX’ word, just before the white space.

There is an exception to the rule of avoiding breaking before the first white space after a word when there are previous opportunities, and it's precisely the behavior the line-break: anywhere feature provides. As I said, all these assumptions were not, in my opinion, clearly defined in the current spec, so I filed an issue for the CSS WG so that we can clarify when it's allowed to break before the first space.

Current status and support

The intent-to-ship request for Chrome has been approved recently, so I'm confident the feature will be enabled by default in Chrome 76. However, it's possible to try the feature in older versions by enabling the Experimental Web Platform Features flag. More details are available in the corresponding Chrome Status entry. I want to highlight that I also implemented the feature for LayoutNG, the new layout engine that Chrome will eventually ship; this is very important to ensure the stability of the feature in future versions of Chrome.

In the case of Safari, the patch implementing the feature landed in WebKit's trunk in r244036, but since Apple doesn't announce publicly when a new release of Safari will happen or which features it'll ship, it's hard to guess when the break-spaces feature will be available to web authors using that browser. Meanwhile, it's possible to try the feature in Safari Technology Preview 80.

Finally, while I haven't seen any sign of active development in Firefox, some of the Mozilla developers working on this area of the Gecko engine have shown public support for the feature.

The following table summarizes the support of the break-spaces feature in the 3 main browsers:

             Chrome  Safari   Firefox
Experimental M73     STP 80   Public support
Ship         M76     Unknown  Unknown

Web Platform Tests

At Igalia we believe that the Web Platform Tests project is a key piece to ensure the compatibility and interoperability of any development on the Web Platform. That’s why a substantial part of my work to implement this relatively small feature was the definition of enough tests to cover the new functionality and basic use cases of the feature.

  • white-space
  • overflow-wrap
  • word-break

Implementation in several web engines

During the implementation of a browser feature, even a small one like this, it's quite usual to find bugs and interoperability issues. Even though this may slow down the implementation of the feature, it's also a source of additional Web Platform Tests and it may contribute to the robustness of the feature itself and the related CSS properties and values. That's why I decided to implement the feature in parallel for the WebKit (Safari) and Blink (Chrome) engines, which I think helped to ensure interoperability and code maturity. This approach also helped me to get a deeper understanding of the line breaking logic and its design and implementation in different web engines.

I think it's worth mentioning some of these architectural differences in the code, to get a better understanding of the work and challenges this feature required before it reached web authors' browsers.

Chrome/Blink engine

Let's start with Chrome/Blink, which was especially challenging due to the fact that Blink is implementing a new layout engine (LayoutNG). The implementation for the legacy layout engine was the first step, since it ensures the feature will arrive earlier, even if behind an experimental runtime flag.

The legacy layout relies on the BreakingContext class to implement the line breaking logic for the inline layout operations. Its main characteristic is that it handles the white space breaking opportunities on its own, instead of using the TextBreakIterator (based on the ICU libraries) as it does for determining breaking opportunities between letters and/or symbols. This design implies too much complexity for doing even small changes like this one, especially because it's very sensitive in terms of performance impact. In the following diagram I try to show a simplified view of the classes involved and the interactions implemented by this line breaking logic.

The LayoutNG line breaking logic is based on a new concept of fragments, mainly handled by the NGLineBreaker class. This new design simplifies the line breaking logic considerably, and it's highly optimized and adapted to get the most out of the TextBreakIterator classes and the ICU features. I tried to show a simplified view of this new design in the following diagram:

In order to describe the work done to implement the feature for this web engine, I'll list the main bugs and patches landed during this time: CR#956465, CR#952254, CR#944063, CR#900727, CR#767634, CR#922437

Safari/WebKit engine

Although this becomes less likely as time passes, WebKit and Blink still share some of the layout logic from before the fork. Although Blink engineers have applied important changes to the inline layout logic, both code refactoring and optimizations, there are common design patterns that made it relatively easy to port to WebKit the patches that implemented the feature for Blink's legacy layout. In WebKit, the line breaking logic is also implemented by the BreakingContext class, and it has a similar architecture, as described, in a quite simplified way, in the class diagram above (it uses different class names for the render/layout objects, though).

However, for the Mac and iOS platforms Safari supports a different code path for the line breaking logic, implemented in the SimpleLineLayout class. This class provides a different design for the line breaking logic and, similar to what Blink implements in LayoutNG, is based on a concept of text fragments. It also relies as much as possible on the TextBreakIterator, instead of implementing complex rules to handle white spaces and breaking opportunities. The following diagrams show this alternative design for implementing the line breaking process.

This SimpleLineLayout code path is not supported by other WebKit ports (like WebKitGtk+ or WPE) and it's not available either when using certain CSS Text features or specific fonts. There are other limitations to using this SimpleLineLayout code path, which may lead to rendering the text using the BreakingContext class instead.

Again, this is the list of bugs that were solved to implement the feature for the WebKit engine: WK#197277, WK#196169, WK#196353, WK#195361, WK#177327, WK#197278


I hope that at this point these two facts are clear:

  • The white-space: break-spaces feature is a very simple but powerful feature that provides a new line breaking behavior, based on Unix terminals.
  • Although it's a simple feature on paper (in the spec), it implies a considerable amount of work before it reaches the browser and is available to web authors.

In this post I tried to explain, in a simple way, the main purpose of this new feature and also some interesting corner cases and combinations with other line breaking features. The demos I used showed two different use cases of this feature, but there are many more. I'm sure the creativity of web authors will push the feature to its limits; by then, I'll be happy to answer doubts about the spec or the implementation in the web engines, and of course fix the bugs that may appear once the feature is more widely used.

Igalia logo
Bloomberg logo

Igalia and Bloomberg working together to build a better web

Finally, I want to thank Bloomberg for supporting the work to implement this feature. It’s another example of how non-browser vendors can influence the Web Platform and contribute with actual features that will be eventually available for web authors. This is the kind of vision that we need if we want to keep a healthy, open and independent Web Platform.

by jfernandez at June 10, 2019 08:11 PM

June 06, 2019

Eleni Maria Stea

Depth-aware upsampling experiments (Part 3.2: Improving the upsampling using normals to classify the samples)

This post is again about improving the upsampling of the half-resolution SSAO render target used in the VKDF sponza demo that was written by Iago Toral. I am going to explain how I used information from the normals to understand if the samples of each 2×2 neighborhood we check during the upsampling belong to the … Continue reading Depth-aware upsampling experiments (Part 3.2: Improving the upsampling using normals to classify the samples)

by hikiko at June 06, 2019 08:15 PM

June 05, 2019

Eleni Maria Stea

Depth-aware upsampling experiments (Part 3.1: Improving the upsampling using depths to classify the samples)

In my previous posts of these series I analyzed the basic idea behind the depth-aware upsampling techniques. In the first post [1], I implemented the nearest depth sampling algorithm [3] from NVIDIA and in the second one [2], I compared some methods that are improving the quality of the z-buffer downsampled data that I use … Continue reading Depth-aware upsampling experiments (Part 3.1: Improving the upsampling using depths to classify the samples)

by hikiko at June 05, 2019 07:41 PM

Depth-aware upsampling experiments (Part 2: Improving the Z-buffer downsampling)

In the previous post of these series, I tried to explain the nearest depth algorithm [1] that I used to improve Iago Toral‘s SSAO upscaling in the sponza demo of VKDF. Although the nearest depth was improving the ambient occlusion in higher resolutions the results were not very good, so I decided to try more … Continue reading Depth-aware upsampling experiments (Part 2: Improving the Z-buffer downsampling)

by hikiko at June 05, 2019 01:37 PM

June 04, 2019

Eleni Maria Stea

Some additions to vkrunner

A new option has been added to Vkrunner (the Vulkan shader testing tool written by Neil Roberts) to allow selecting the Vulkan device for each shader test. When the device id is not set, the default GPU is used. IDs start from 1 to match the convention of the VK-GL-CTS … Continue reading Some additions to vkrunner

by hikiko at June 04, 2019 12:30 PM

June 03, 2019

Andy Wingo

pictie, my c++-to-webassembly workbench

Hello, interwebs! Today I'd like to share a little skunkworks project with y'all: Pictie, a workbench for WebAssembly C++ integration on the web.


wtf just happened????!?

So! If everything went well, above you have some colors and a prompt that accepts Javascript expressions to evaluate. If the result of evaluating a JS expression is a painter, we paint it onto a canvas.

But allow me to back up a bit. These days everyone is talking about WebAssembly, and I think with good reason: just as many of the world's programs run on JavaScript today, tomorrow many of them will also be written in languages compiled to WebAssembly. JavaScript isn't going anywhere, of course; it's around for the long term. It's the "also" aspect of WebAssembly that's interesting, that it appears to be a computing substrate that is compatible with JS and which can extend the range of the kinds of programs that can be written for the web.

And yet, it's early days. What are programs of the future going to look like? What elements of the web platform will be needed when we have systems composed of WebAssembly components combined with JavaScript components, combined with the browser? Is it all going to work? Are there missing pieces? What's the status of the toolchain? What's the developer experience? What's the user experience?

When you look at the current set of applications targeting WebAssembly in the browser, mostly it's games. While compelling, games don't provide a whole lot of insight into the shape of the future web platform, inasmuch as there doesn't have to be much JavaScript interaction when you have an already-working C++ game compiled to WebAssembly. (Indeed, many of the incidental interactions with JS that are currently necessary -- bouncing through JS in order to call WebGL -- people are actively working on removing all of that overhead, so that WebAssembly can call platform facilities (WebGL, etc) directly. But I digress!)

For WebAssembly to really succeed in the browser, there should also be incremental stories -- what does it look like when you start to add WebAssembly modules to a system that is currently written mostly in JavaScript?

To find out the answers to these questions and to evaluate potential platform modifications, I needed a small, standalone test case. So... I wrote one? It seemed like a good idea at the time.

pictie is a test bed

Pictie is a simple, standalone C++ graphics package implementing an algebra of painters. It was created not to be a great graphics package but rather to be a test-bed for compiling C++ libraries to WebAssembly. You can read more about it on its github page.

Structurally, pictie is a modern C++ library with a functional-style interface, smart pointers, reference types, lambdas, and all the rest. We use emscripten to compile it to WebAssembly; you can see more information on how that's done in the repository, or check the README.

Pictie is inspired by Peter Henderson's "Functional Geometry" (1982, 2002). "Functional Geometry" inspired the Picture language from the well-known Structure and Interpretation of Computer Programs computer science textbook.

prototype in action

So far it's been surprising how much stuff just works. There's still lots to do, but just getting a C++ library on the web is pretty easy! I advise you to take a look to see the details.

If you are thinking of dipping your toe into the WebAssembly water, maybe take a look also at Pictie when you're doing your back-of-the-envelope calculations. You can use it or a prototype like it to determine the effects of different compilation options on compile time, load time, throughput, and network traffic. You can check if the different binding strategies are appropriate for your C++ idioms; Pictie currently uses embind (source), but I would like to compare to WebIDL as well. You might also use it if you're considering what shape your C++ library should have to have a minimal overhead in a WebAssembly context.

I use Pictie as a test-bed when working on the web platform: for the weakref proposal, which adds finalization; for leak detection; and when working on the binding layers around Emscripten. Eventually I'll be able to use it in other contexts as well, with the WebIDL bindings proposal, typed objects, and GC.

prototype the web forward

As the browser and adjacent environments have come to dominate programming in practice, we lost a bit of the delightful variety from computing. JS is a great language, but it shouldn't be the only medium for programs. WebAssembly is part of this future world, waiting in potentia, where applications for the web can be written in any of a number of languages. But, this future world will only arrive if it "works" -- if all of the various pieces, from standards to browsers to toolchains to virtual machines, only if all of these pieces fit together in some kind of sensible way. Now is the early phase of annealing, when the platform as a whole is actively searching for its new low-entropy state. We're going to need a lot of prototypes to get from here to there. In that spirit, may your prototypes be numerous and soon replaced. Happy annealing!

by Andy Wingo at June 03, 2019 10:10 AM

May 28, 2019

Eleni Maria Stea

Depth-aware upsampling experiments (Part 1: Nearest depth)

This post is about different depth aware techniques I tried in order to improve the upsampling of the low resolution Screen Space Ambient Occlusion (SSAO) texture of a VKDF demo. VKDF is a library and collection of Vulkan demos, written by Iago Toral. In one of his demos (the sponza), Iago implemented SSAO among many … Continue reading Depth-aware upsampling experiments (Part 1: Nearest depth)

by hikiko at May 28, 2019 08:42 PM

May 24, 2019

Andy Wingo

lightening run-time code generation

The upcoming Guile 3 release will have just-in-time native code generation. Finally, amirite? There's lots that I'd like to share about that and I need to start somewhere, so this article is about one piece of it: Lightening, a library to generate machine code.

on lightning

Lightening is a fork of GNU Lightning, adapted to suit the needs of Guile. In fact at first we chose to use GNU Lightning directly, "vendored" into the Guile source repository via the git subtree mechanism. (I see that in the meantime, git gained a kind of a subtree command; one day I will have to figure out what it's for.)

GNU Lightning has lots of things going for it. It has support for many architectures, even things like Itanium that I don't really care about but which a couple Guile users use. It abstracts the differences between e.g. x86 and ARMv7 behind a common API, so that in Guile I don't need to duplicate the JIT for each back-end. Such an abstraction can have a slight performance penalty, because maybe it missed the opportunity to generate optimal code, but this is acceptable to me: I was more concerned about the maintenance burden, and GNU Lightning seemed to solve that nicely.

GNU Lightning also has fantastic documentation. It's written in C and not C++, which is the right thing for Guile at this time, and it's also released under the LGPL, which is Guile's license. As it's a GNU project there's a good chance that GNU Guile's needs might be taken into account if any changes need be made.

I mentally associated Paolo Bonzini with the project, who I knew was a good no-nonsense hacker, as he used Lightning for a smalltalk implementation; and I knew also that Matthew Flatt used Lightning in Racket. Then I looked in the source code to see architecture support and was pleasantly surprised to see MIPS, POWER, and so on, so I went with GNU Lightning for Guile in our 2.9.1 release last October.

on lightening the lightning

When I chose GNU Lightning, I had in mind that it was a very simple library to cheaply write machine code into buffers. (Incidentally, if you have never worked with this stuff, I remember a time when I was pleasantly surprised to realize that an assembler could be a library and not just a program that processes text. A CPU interprets machine code. Machine code is just bytes, and you can just write C (or Scheme, or whatever) functions that write bytes into buffers, and pass those buffers off to the CPU. Now you know!)

Anyway indeed GNU Lightning 1.4 or so was that very simple library that I had in my head. I needed simple because I would need to debug any problems that came up, and I didn't want to add more complexity to the C side of Guile -- eventually I should be migrating this code over to Scheme anyway. And, of course, simple can mean fast, and I needed fast code generation.

However, GNU Lightning has a new release series, the 2.x series. This series is a rewrite in a way of the old version. On the plus side, this new series adds all of the weird architectures that I was pleasantly surprised to see. The old 1.4 didn't even have much x86-64 support, much less AArch64.

This new GNU Lightning 2.x series fundamentally changes the way the library works: instead of having a jit_ldr_f function that directly emits code to load a float from memory into a floating-point register, the jit_ldr_f function now creates a node in a graph. Before code is emitted, that graph is optimized, some register allocation happens around call sites and for temporary values, dead code is elided, and so on, then the graph is traversed and code emitted.

Unfortunately this wasn't really what I was looking for. The optimizations were a bit opaque to me and I just wanted something simple. Building the graph took more time than just emitting bytes into a buffer, and it takes more memory as well. When I found bugs, I couldn't tell whether they were related to my usage or in the library itself.

In the end, the node structure wasn't paying its way for me. But I couldn't just go back to the 1.4 series that I remembered -- it didn't have the architecture support that I needed. Faced with the choice between changing GNU Lightning 2.x in ways that went counter to its upstream direction, switching libraries, or refactoring GNU Lightning to be something that I needed, I chose the latter.

in which our protagonist cannot help himself

Friends, I regret to admit: I named the new thing "Lightening". True, it is a lightened Lightning, yes, but I am aware that it's horribly confusing. Pronounced almost the same, visually almost identical -- I am a bad person. Oh well!!

I ported some of the existing GNU Lightning backends over to Lightening: ia32, x86-64, ARMv7, and AArch64. I deleted the backends for Itanium, HPPA, Alpha, and SPARC; they have no Debian ports and there is no situation in which I can afford to do QA on them. I would gladly accept contributions for PPC64, MIPS, RISC-V, and maybe S/390. At this point I reckon it takes around 20 hours to port an additional backend from GNU Lightning to Lightening.

Incidentally, if you need a code generation library, consider your choices wisely. It is likely that Lightening is not right for you. If you can afford platform-specific code and you need C, Lua's DynASM is probably the right thing for you. If you are in C++, copy the assemblers from a JavaScript engine -- C++ offers much more type safety, capabilities for optimization, and ergonomics.

But if you can only afford one emitter of JIT code for all architectures, you need simple C, you don't need register allocation, you want a simple library to just include in your source code, and you are good with the LGPL, then Lightening could be a thing for you. Check the gitlab page for info on how to test Lightening and how to include it into your project.

giving it a spin

Yesterday's Guile 2.9.2 release includes Lightening, so you can give it a spin. The switch to Lightening allowed us to lower our JIT optimization threshold by a factor of 50, letting us generate fast code sooner. If you try it out, let #guile on freenode know how it went. In any case, happy hacking!

by Andy Wingo at May 24, 2019 08:44 AM

May 23, 2019

Andy Wingo

bigint shipping in firefox!

I am delighted to share with folks the results of a project I have been helping out on for the last few months: implementation of "BigInt" in Firefox, which is finally shipping in Firefox 68 (beta).

what's a bigint?

BigInts are a new kind of JavaScript primitive value, like numbers or strings. A BigInt is a true integer: it can take on the value of any finite integer (subject to some arbitrarily large implementation-defined limits, such as the amount of memory in your machine). This contrasts with JavaScript number values, which have the well-known property of only being able to precisely represent integers between -2^53 and 2^53.

BigInts are written like "normal" integers, but with an n suffix:

var a = 1n;
var b = a + 42n;
b << 64n
// result: 793209995169510719488n

With the bigint proposal, the usual mathematical operations (+, -, *, /, %, <<, >>, **, and the comparison operators) are extended to operate on bigint values. As a new kind of primitive value, bigint values have their own typeof:

typeof 1n
// result: 'bigint'

Besides allowing for more kinds of math to be easily and efficiently expressed, BigInt also allows for better interoperability with systems that use 64-bit numbers, such as "inodes" in file systems, WebAssembly i64 values, high-precision timers, and so on.

You can read more about the BigInt feature over on MDN, as usual. You might also like this short article on BigInt basics that V8 engineer Mathias Bynens wrote when Chrome shipped support for BigInt last year. There is an accompanying language implementation article as well, for those of y'all that enjoy the nitties and the gritties.

can i ship it?

To try out BigInt in Firefox, simply download a copy of Firefox Beta. This version of Firefox will be fully released to the public in a few weeks, on July 9th. If you're reading this in the future, I'm talking about Firefox 68.

BigInt is also shipping already in V8 and Chrome, and my colleague Caio Lima has a project in progress to implement it in JavaScriptCore / WebKit / Safari. Depending on your target audience, BigInt might be deployable already!


I must mention that my role in the BigInt work was relatively small; my Igalia colleague Robin Templeton did the bulk of the BigInt implementation work in Firefox, so large ups to them. Hearty thanks also to Mozilla's Jan de Mooij and Jeff Walden for their patient and detailed code reviews.

Thanks as well to the V8 engineers for their open source implementation of BigInt fundamental algorithms, as we used many of them in Firefox.

Finally, I need to make one big thank-you, and I hope that you will join me in expressing it. The road to ship anything in a web browser is long; besides the "simple matter of programming" that it is to implement a feature, you need a specification with buy-in from implementors and web standards people, you need a good working relationship with a browser vendor, you need willing technical reviewers, you need to follow up on the inevitable security bugs that any browser change causes, and all of this takes time. It's all predicated on having the backing of an organization that's foresighted enough to invest in this kind of long-term, high-reward platform engineering.

In that regard I think all people that work on the web platform should send a big shout-out to Tech at Bloomberg for making BigInt possible by underwriting all of Igalia's work in this area. Thank you, Bloomberg, and happy hacking!

by Andy Wingo at May 23, 2019 12:13 PM

May 21, 2019

Adrián Pérez

WPE WebKit 2.24

While WPE WebKit 2.24 has now been out for a couple of months, it includes over a year of development effort since our first official release, which means there is plenty to talk about. Let's dive in!

API & ABI Stability

The public API for WPE WebKit has been essentially unchanged since the 2.22.x releases, and we consider it now stable and its version has been set to 1.0. The pkg-config modules for the main components have been updated accordingly and are now named wpe-1.0 (for libwpe), wpebackend-fdo-1.0 (the FDO backend), and wpe-webkit-1.0 (WPE WebKit itself).

Our plan for the foreseeable future is to keep the WPE WebKit API backwards-compatible in the upcoming feature releases. On the other hand, the ABI might change, but will be kept compatible if possible, on a best-effort basis.

Both API and ABI are always guaranteed to remain compatible inside the same stable release series, and we are trying to follow a strict “no regressions allowed” policy for patch releases. We have added a page on the Web site which summarizes the WPE WebKit release schedule and this API/ABI stability guarantee.

This should allow distributors to always ship the latest available point release in a stable series. This is something we always strongly recommend, because point releases almost always include fixes for security vulnerabilities.


Web engines are security-critical software components, on which users rely every day for visualizing and manipulating sensitive information like personal data, medical records, or banking information—to name a few. Having regular releases means that we are able to publish periodical security advisories detailing the vulnerabilities fixed by them.

As WPE WebKit and WebKitGTK share a number of components with each other, advisories and the releases containing the corresponding fixes are published in sync, typically on the same day.

The team takes security seriously, and we are always happy to receive notice of security bugs. We ask reporters to act responsibly and observe the WebKit security policy for guidance.

Content Filtering

This new feature provides access to the WebKit internal content filtering engine, which is also used by Safari content blockers. The implementation is quite interesting: filtering rule sets are written as JSON documents, which are parsed and compiled to a compact bytecode representation that a tiny virtual machine executes for every resource load. This way, deciding whether a resource load should be blocked adds very little overhead, at the cost of a (potentially) slow initial compilation. To give you an idea: converting the popular EasyList rules to JSON results in a ~15 MiB file that can take up to three seconds to compile on the ARM processors typically used in embedded devices.
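For illustration, a minimal rule set in the content-blocker JSON format looks like this (the host name in the url-filter regular expression is made up; a real rule set like EasyList contains thousands of such entries):

```json
[
    {
        "trigger": { "url-filter": "ads\\.example\\.com" },
        "action": { "type": "block" }
    }
]
```

Each entry pairs a trigger (when to act) with an action (what to do); the compiler turns the whole array into the bytecode the virtual machine runs per resource load.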

In order to penalize application startup as little as possible, the new APIs are fully asynchronous and compilation is offloaded to a worker thread. On top of that, compiled rule sets are cached on disk to be reused across different runs of the same application (see WebKitUserContentFilterStore for details). Last but not least, the compiled bytecode is mapped in memory and shared among all the processes which need it: a browser with many tabs open will use practically the same amount of memory for content filtering as one with a single Web page loaded. The implementation is shared by the GTK and WPE WebKit ports.

I had been interested in implementing support for content filtering even before the WPE WebKit port existed, with the goal of replacing the ad blocker in GNOME Web. Some of the code had been lying around in a branch since the 2016 edition of the Web Engines Hackfest; it moved from my old laptop to the current one, and I worked on it on-and-off while the different patches needed to make it work slowly landed in the WebKit repository—one of the patches went through as many as seventeen revisions! At the moment I am still working on replacing the ad blocker in Web—on my free time—which I expect will be ready for GNOME 3.34.

It's All Text!

No matter how much the Web has evolved over the years, almost every Web site out there still needs textual content. This is one department where 2.24.x shines: text rendering.

Carlos García has been our typography hero during the development cycle: he single-handedly implemented support for variable fonts (demo), fixed our support for composite emoji (like 🧟‍♀️, composed of the glyphs “woman” and “zombie”), and improved the typeface selection algorithm to prefer coloured fonts for emoji. Additionally, many other subtle issues have been fixed, and the latest two patch releases include important fixes for text rendering.

Tip: WPE WebKit uses locally installed fonts as fallback. You may want to install at least one coloured font like Twemoji, which will ensure emoji glyphs can always be displayed.

API Ergonomics

GLib 2.44 added a nifty feature back in 2015: automatic cleanup of variables when they go out of scope using g_auto, g_autofree, and g_autoptr.

We have added the needed bits in the headers to allow their usage with the types from the WPE WebKit API. This enables developers to write code less likely to introduce accidental memory leaks because they do not need to remember freeing resources manually:

WebKitWebView* create_view (void)
{
    g_autoptr(WebKitWebContext) ctx = webkit_web_context_new ();
    /*
     * Configure "ctx" to your liking here. At the end of the scope (this
     * function), a g_object_unref(ctx) call will be automatically done.
     */
    return webkit_web_view_new_with_context (ctx);
}

Note that this does not change the API (nor the ABI). You will need to build your applications with GCC or Clang to make use of this feature.

“Featurism” and “Embeddability”

Look at that, I just coined two new “technobabble” terms!

There are many other improvements which are shipping right now in WPE WebKit. The following list highlights the main user and developer visible features that can be found in the 2.24.x versions:

  • A new GObject based API for JavaScriptCore.
  • A fairly complete WebDriver implementation. There is a patch for supporting WPE WebKit in Selenium pending integration. Feel free to vote 👍 for it to be merged.
  • WPEQt, which provides an idiomatic API similar to that of QWebView and allows embedding WPE WebKit as a widget in Qt/QML applications.
  • Support for the JPEG2000 image format. Michael Catanzaro has written about the reasoning behind this in his write-up about WebKitGTK 2.24.
  • Allow configuring the background of the WebKitWebView widget. Translucent backgrounds work as expected, which allows for novel applications like overlaying Web content on top of video streams.
  • An opt-in 16bpp rendering mode, which can be faster in some cases—remember to measure and profile in your target hardware! For now this only works with the RGB565 pixel format, which is the most common one used in embedded devices where 24bpp and 32bpp modes are not available.
  • Support for hole-punching using external media players. Note that at the moment there is no public API for this and you will need to patch the WPE WebKit code to plug your playback engine.

Despite all the improvements and features, the main focus of WPE WebKit is still providing an embeddable Web engine. Fear not: new features are either opt-in (e.g. 16bpp rendering), disabled by default and adding no overhead unless enabled (WebDriver, background color), or have no measurable impact at all (g_autoptr). Not to mention that many features can even be disabled at build time, bringing smaller binaries and a smaller runtime footprint—but that would be a topic for another day.

by aperez at May 21, 2019 07:00 PM

Eleni Maria Stea

A simple pixel shader viewer

In a previous post, I wrote about Vkrunner and how I used it to play with fragment shaders. While I was writing the shaders for it, I had to save them, generate a PPM image, and display it to see the changes. This render-to-image-then-display repetition gave me the idea to write a minimal … Continue reading A simple pixel shader viewer

by hikiko at May 21, 2019 05:52 AM

May 14, 2019

Javier Muñoz

Cephalocon Barcelona 2019

Next week I will attend Cephalocon 2019. It will take place on 19 and 20 May in Barcelona.

I will deliver a talk, under the sponsorship of my company Igalia, about Ceph Object Storage and the RGW/S3 service layer.

In this talk, I will share my experience contributing new features and bugfixes upstream that were developed through open projects in the community.

I will also review some of the contributions from Jewel to Nautilus and its impact from the product/service perspective for users and companies investing in upstream development.

Cephalocon 2019 is our second international conference and it aims to bring together more than 800 technologists and adopters from across the globe to showcase the history and future of Ceph, demonstrate real-world applications and highlight vendor solutions.

Attendee registration is still open. You can find more information about the event and how to register on the official event page. The complete schedule is also available.

See you there!

Update 2019/05/25

by Javier at May 14, 2019 10:00 PM

Alicia Boya

validateflow: A new tool to test GStreamer pipelines

It has been a while since GstValidate has been available. GstValidate has made it easier to write integration tests that check that playback and transcoding work as expected while executing actions (like seeking, changing subtitle tracks, etc.); testing at a high level rather than checking the exact, fine-grained data flow.

As GStreamer is applied to an ever wider variety of cases, testing often becomes cumbersome for those cases that bear less resemblance to typical playback. On the one hand, there is the C testing framework intended for unit tests, which is admittedly low level: even when using something like GstHarness, checking that an element outputs the correct buffers and events requires a lot of manual coding. On the other hand, gst-validate has so far focused mostly on assets that can be played with a typical playbin, requiring extra effort and coding for the less straightforward cases.

This has historically left many specific test cases within that middle ground without an effective way to be tested. validateflow attempts to fill this gap by allowing gst-validate to test that custom pipelines, acted upon in a certain way, produce the expected result.

validateflow itself is a GstValidate plugin that monitors buffers and events flowing through a given pad and records them in a log file. The first time a test is run, this log becomes the expectation log. Further executions of the test still create a new log file, but this time it’s compared against the expectation log. Any difference is reported as an error. The user can rebaseline the tests by removing the expectation log file and running it again. This is very similar to how many web browser tests work (e.g. Web Platform Tests).

How to get it

validateflow has been landed recently on the development versions of GStreamer. Before 1.16 is released you’ll be able to use it by checking out the latest master branches of GStreamer subprojects, preferably with something like gst-build.

Make sure to update gst-devtools as well. Then update gst-integration-testsuites by running the following command, which will update the repo and fetch media files. Otherwise you will get errors.

gst-validate-launcher --sync -L

Writing tests

The usual way to use validateflow is through pipelines.json, a file parsed by the validate test suite (the one run by default by gst-validate-launcher) where all the necessary elements of a validateflow tests can be placed together.

For instance:

{
    "pipeline": "appsrc ! qtdemux ! fakesink async=false",
    "config": [
        "%(validateflow)s, pad=fakesink0:sink, record-buffers=false"
    ],
    "scenarios": [
        {
            "name": "default",
            "actions": [
                "description, seek=false, handles-states=false",
                "appsrc-push, target-element-name=appsrc0, file-name=\"%(medias)s/fragments/car-20120827-85.mp4/init.mp4\"",
                "appsrc-push, target-element-name=appsrc0, file-name=\"%(medias)s/fragments/car-20120827-85.mp4/media1.mp4\"",
                "checkpoint, text=\"A moov with a different edit list is now pushed\"",
                "appsrc-push, target-element-name=appsrc0, file-name=\"%(medias)s/fragments/car-20120827-86.mp4/init.mp4\"",
                "appsrc-push, target-element-name=appsrc0, file-name=\"%(medias)s/fragments/car-20120827-86.mp4/media2.mp4\""
            ]
        }
    ]
}

These are:

  • pipeline: A string with the same syntax as gst-launch describing the pipeline to use. Python string interpolation can be used to get the path to the medias directory, where audio and video assets are placed in the gst-integration-testsuites repo, by writing %(medias)s. It can also be used to get a video or audio sink that can be muted, with %(videosink)s or %(audiosink)s.
  • config: A validate configuration file. Among other things that can be set, here validateflow overrides are defined, one per line, with %(validateflow)s, which expands to validateflow, plus some options defining where the logs will be written (which depends on the test name). Each override monitors one pad. The settings here define which pad, and what will be recorded.
  • scenarios: Usually a single scenario is provided. A series of actions performed in order on the pipeline. These are normal GstValidate scenarios, but new actions have been added, e.g. for controlling appsrc elements (so that you can push chunks of data in several steps instead of relying on a filesrc pushing a whole file and be done with it).
    Running tests

    The tests defined in pipelines.json are automatically run by default when running gst-validate-launcher, since they are part of the default test suite.

    You can get the list of all the pipelines.json tests like this:

    gst-validate-launcher -L |grep launch_pipeline

    You can use these test names to run specific tests. The -v flag is useful to see the actions as they are executed. --gdb runs the test inside the GNU debugger.

    gst-validate-launcher -v validate.launch_pipeline.qtdemux_change_edit_list.default

    In the command line argument above, validate. defines the name of the test suite Python file, testsuites/ The rest, launch_pipeline.qtdemux_change_edit_list.default, is actually a regex: . just happens to match a period, but it would match any character (it would be more correct, albeit also more inconvenient, to use \. instead). You can use this feature to run several related tests, for instance:

    $ gst-validate-launcher -m 'validate.launch_pipeline\.appsrc_.*'
    Setting up GstValidate default tests
    [3 / 3]  validate.launch_pipeline.appsrc_preroll_test.single_push: Passed
               Total time spent: 0:00:00.369149 seconds
               Passed: 3
               Failed: 0
               Total: 3

    Expectation files are stored in a directory named flow-expectations, e.g.:


    The actual output log (which is compared to the expectations) is stored as a log file, e.g.:


    Here is how a validateflow log looks.

    event stream-start: GstEventStreamStart, flags=(GstStreamFlags)GST_STREAM_FLAG_NONE, group-id=(uint)1;
    event caps: video/x-h264, stream-format=(string)avc, alignment=(string)au, level=(string)2.1, profile=(string)main, codec_data=(buffer)014d4015ffe10016674d4015d901b1fe4e1000003e90000bb800f162e48001000468eb8f20, width=(int)426, height=(int)240, pixel-aspect-ratio=(fraction)1/1;
    event segment: format=TIME, start=0:00:00.000000000, offset=0:00:00.000000000, stop=none, time=0:00:00.000000000, base=0:00:00.000000000, position=0:00:00.000000000
    event tag: GstTagList-stream, taglist=(taglist)"taglist\,\ video-codec\=\(string\)\"H.264\\\ /\\\ AVC\"\;";
    event tag: GstTagList-global, taglist=(taglist)"taglist\,\ datetime\=\(datetime\)2012-08-27T01:00:50Z\,\ container-format\=\(string\)\"ISO\\\ fMP4\"\;";
    event tag: GstTagList-stream, taglist=(taglist)"taglist\,\ video-codec\=\(string\)\"H.264\\\ /\\\ AVC\"\;";
    event caps: video/x-h264, stream-format=(string)avc, alignment=(string)au, level=(string)2.1, profile=(string)main, codec_data=(buffer)014d4015ffe10016674d4015d901b1fe4e1000003e90000bb800f162e48001000468eb8f20, width=(int)426, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)24000/1001;
    CHECKPOINT: A moov with a different edit list is now pushed
    event caps: video/x-h264, stream-format=(string)avc, alignment=(string)au, level=(string)3, profile=(string)main, codec_data=(buffer)014d401effe10016674d401ee8805017fcb0800001f480005dc0078b168901000468ebaf20, width=(int)640, height=(int)360, pixel-aspect-ratio=(fraction)1/1;
    event segment: format=TIME, start=0:00:00.041711111, offset=0:00:00.000000000, stop=none, time=0:00:00.000000000, base=0:00:00.000000000, position=0:00:00.041711111
    event tag: GstTagList-stream, taglist=(taglist)"taglist\,\ video-codec\=\(string\)\"H.264\\\ /\\\ AVC\"\;";
    event tag: GstTagList-stream, taglist=(taglist)"taglist\,\ video-codec\=\(string\)\"H.264\\\ /\\\ AVC\"\;";
    event caps: video/x-h264, stream-format=(string)avc, alignment=(string)au, level=(string)3, profile=(string)main, codec_data=(buffer)014d401effe10016674d401ee8805017fcb0800001f480005dc0078b168901000468ebaf20, width=(int)640, height=(int)360, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)24000/1001;

    Prerolling and appsrc

    By default scenarios don’t start executing actions until the pipeline is playing. Also by default sinks require a preroll for that to occur (that is, a buffer must reach the sink before the state transition to playing is completed).

    This poses a problem for scenarios using appsrc, as no action will be executed until a buffer reaches the sink, but a buffer can only be pushed as the result of an appsrc-push action, creating a chicken and egg problem.

    For many cases that don’t require playback we can solve this simply by disabling prerolling altogether, setting async=false in the sinks.

    For cases where prerolling is desired (like playback), handles_states=true should be set in the scenario description. This makes the scenario actions run without having to wait for a state change. appsrc-push will notice the pipeline is in a state where buffers can’t flow and enqueue the buffer without waiting for it so that the next action can run immediately. Then the set-state can be used to set the state of the pipeline to playing, which will let the appsrc emit the buffer.

    description, seek=false, handles-states=true
    appsrc-push, target-element-name=appsrc0, file-name="raw_h264.0.mp4"
    set-state, state=playing
    appsrc-eos, target-element-name=appsrc0


    The documentation of validateflow, explaining its usage in more detail can be found here:

    by aboya at May 14, 2019 02:13 PM

    May 06, 2019

    Eleni Maria Stea

    Vkrunner allows specifying the required Vulkan version

    The required Vulkan implementation version for a Vkrunner shader test can now be specified in its [require] section. Tests that target Vulkan versions not supported by the device driver will be skipped. As a reminder, vkrunner is a Vulkan shader testing tool similar to piglit that was written by … Continue reading Vkrunner allows specifying the required Vulkan version

    by hikiko at May 06, 2019 07:11 PM

    Having fun with Vkrunner!

    Vkrunner is a Vulkan shader testing tool similar to Piglit, written by Neil Roberts. It is mostly used by graphics drivers developers, and was also part of the official Khronos conformance tests suite repository (VK-GL-CTS) for some time [1]. There are already posts [2] about its use but they are all written from a driver … Continue reading Having fun with Vkrunner!

    by hikiko at May 06, 2019 07:10 PM

    April 29, 2019

    Asumu Takikawa

    WebAssembly in Redex

    Recently I’ve been studying the semantics of WebAssembly (wasm). If you’ve never heard of WebAssembly, it’s a new web programming language, supported by multiple browsers, that’s planned to be a low-level analogue to JavaScript. Wasm could be used as a compilation target for a variety of new frontend languages for web programming.

    (see also Lin Clark’s illustrated blog series on WebAssembly for a nice introduction)

    As a cross-browser effort, the language comes with a detailed independent specification. A very nice aspect of the WebAssembly spec is that the language’s semantics are specified precisely with a small-step operational semantics (it’s even a reduction semantics in the style of Felleisen and Hieb).

    A condensed version of the wasm semantics was presented in a 2017 PLDI paper written by Andreas Haas and co-authors.

    (the current semantics of the full language is available at the WebAssembly spec website)

    In this blog post, I’m going to share some work I’ve done to make an executable version of the operational semantics from the 2017 paper. In the process, I’ll try to explain a few of the interesting design choices of wasm via the semantics.

    A good thing about operational semantics is that they are relatively easy to understand and can be easily converted into an executable form. You can think of them as basically an interpreter for your programming language that is specified precisely using mathematical relations defined over a formal grammar.

    Specifying a language in this way is helpful for proving various properties about your language. It’s also an implementation independent way to understand how programs will evaluate, which you can use to validate a production implementation.

    Because of the way that operational semantics resemble interpreters, you can construct executable operational semantics using domain specific modeling languages designed for that purpose (Redex, K, etc).

    (Note: wasm already comes with a reference interpreter written in OCaml, but it’s not using a modeling language in the sense described here)

    The modeling language I’ll use in this blog post is Redex, a DSL hosted in Racket that is designed for reduction semantics.

    Why Make an Executable Formal Semantics?

    If you’re not familiar with semantics modeling tools, you might be wondering what it even means to write an executable model. The basic process is to take the formal grammars and rules presented in an operational semantics and transcribe them as programs written in a specialized modeling language. The resulting interpreters can be used to run examples in the modeled language.

    The advantage to the executable model is that you can apply software engineering principles like randomized testing on your language semantics to see if there are bugs in your specification.

    Another benefit that you get is visualization, which can be helpful for understanding how specific programs execute.

    For example, here’s a screenshot showing a trace visualization for executing a factorial program in wasm:

    step-through of a wasm program
    Stepping through a function call in Redex

    This is a custom visualization that I made leveraging Redex’s built-in traces function, which lets you visualize every step of the reduction process. Here it shows a trace starting with a function call to an implementation of factorial. Each box shows a visual representation of the instructions for that machine state. The arrows show the order of reduction and which rule was used to produce the next state.

    In the next section I’m going to go over some background about semantics and modeling in Redex that’s needed to understand how the wasm model works (you can skip this if you are already familiar with operational semantics and Redex).

    Redex & Reduction Semantics Basics

    As mentioned above, the basic idea of reduction semantics is to define a kind of formal interpreter for your programming language. In order to define this interpreter, you need to define (1) your language as a grammar, and (2) what the states for the machine that runs your programs looks like.

    In a simple language, the machine states might just be the program expressions in your language. But when you start to deal with more complicated features like memory and side effects, your machine may need additional components (like a store for representing memory, a representation of stack frames, and so on).

    In Redex, language definitions are made using the define-language form. It defines a grammar for your language using a straightforward pattern-based language that looks like BNF.

    A simple grammar might look like the following:

    (define-language simple-language
      ;; values
      (v ::= number)
      ;; expressions
      (e ::= v
             (add e e)
             (if e e e))
      ;; evaluation contexts
      (E ::= hole
             (add E e)
             (add v E)
             (if E e e)))

    The define-language form takes sets of non-terminals (e.g., e or v) paired with productions for terms in the language (e.g., (add e e)) and creates a language description bound to the given name (i.e., simple-language).

    This is a small and simple language with only conditionals, numbers, and basic addition. It accepts terms like (term 5), (term (add (add 1 2) 3)), (term (if 0 (add 1 3) (if 1 1 2))), and so on.

    (term is a special Redex form for constructing language terms)

    In order to actually evaluate programs, we will need to define a reduction relation, which describes how machine states reduce to other states. In this simple language, states are just expressions.

    Here’s a very simple reduction relation:

    (define simple-red-v0
      (reduction-relation simple-language
        ;; machine states are just expressions "e"
        #:domain e
        (--> (add v_1 v_2)               ; pattern match
             ,(+ (term v_1) (term v_2))  ; result of reduction
             ;; this part is just the name of the rule
             add)
        ;; two rules for conditionals, true and false
        (--> (if 1 e_t e_f) ; the _ is used to name pattern occurrences, like e_t
             e_t
             if-true)
        (--> (if 0 e_t e_f)
             e_f
             if-false)))

    You can read this reduction-relation form as describing a relation on the states e of the language, where the --> clauses define specific rules.

    Each --> describes how to reduce terms that match a pattern, like (add v_1 v_2), into a result term. The , allows an escape from the Redex DSL into full Racket to do arbitrary computations, like arithmetic in this case.

    With this reduction relation in hand, we can evaluate examples in the language using the test-->> unit test form, which will recursively apply the reduction relation until it’s done:

    (test-->> simple-red-v0 (term (add 3 4)) (term 7))
    (test-->> simple-red-v0 (term (if 0 1 2)) (term 2))

    These tests will succeed, showing that the first term evaluates to the second in each case.

    Unfortunately, this isn’t quite enough to evaluate more complex terms:

    > (test-->> simple-red-v0 (term (add (if 0 (add 3 4) (add 18 2)) 5)) (term 25))
    FAILED ::2666
    expected: 25
      actual: '(add (if 0 (add 3 4) (add 18 2)) 5)

    This is because the reduction relation above is defined for every form in the language, but it doesn’t tell you how to reduce inside nested expressions. For example, there is no explicit rule for evaluating an if nested inside an add.

    Of course, writing rules for all these nested combinations doesn’t make sense. That’s where evaluation contexts come in. An evaluation context like E (see the grammar from earlier) defines where reduction can happen. Anywhere that a hole exists in an evaluation context is a reducible spot.

    So for example, (add 3 hole) and (add hole (if 0 1 2)) are valid evaluation contexts, following the productions in the grammar. On the other hand, (if (add 1 -1) hole 5) is not a valid evaluation context, because you can’t evaluate the “then” branch of a conditional before evaluating the condition.
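    To make decomposition concrete outside of Redex, here is a small Python sketch (my own illustration with invented helper names, not part of the Redex model) of how splitting a term into an evaluation context and a redex drives evaluation of this simple language. Terms are encoded as tuples like ('add', e1, e2):

```python
# Sketch: evaluation via context decomposition for the simple language.
# Terms: ('add', e1, e2), ('if', c, t, f), or plain numbers (values).

def is_value(t):
    return isinstance(t, int)

def is_redex(t):
    # reducible forms: (add v v) and (if 0/1 e e)
    if isinstance(t, tuple) and t[0] == 'add':
        return is_value(t[1]) and is_value(t[2])
    if isinstance(t, tuple) and t[0] == 'if':
        return is_value(t[1]) and t[1] in (0, 1)
    return False

def decompose(t):
    """Return (plug, redex): plug(r) rebuilds the term with r in the hole."""
    if is_redex(t):
        return (lambda r: r), t                      # E ::= hole
    if isinstance(t, tuple) and t[0] == 'add':
        if not is_value(t[1]):                       # E ::= (add E e)
            plug, r = decompose(t[1])
            return (lambda x: ('add', plug(x), t[2])), r
        plug, r = decompose(t[2])                    # E ::= (add v E)
        return (lambda x: ('add', t[1], plug(x))), r
    if isinstance(t, tuple) and t[0] == 'if':        # E ::= (if E e e)
        plug, r = decompose(t[1])
        return (lambda x: ('if', plug(x), t[2], t[3])), r
    raise ValueError("stuck term")

def step(redex):
    # contract the redex, following the two kinds of rules
    if redex[0] == 'add':
        return redex[1] + redex[2]
    return redex[2] if redex[1] == 1 else redex[3]

def evaluate(t):
    # repeatedly decompose, contract, and plug until a value remains
    while not is_value(t):
        plug, r = decompose(t)
        t = plug(step(r))
    return t

evaluate(('add', ('if', 0, ('add', 3, 4), ('add', 18, 2)), 5))   # → 25
```

    Note how the context grammar shows up directly in decompose: there is a clause for (add E e), one for (add v E), and one for (if E e e), but none that descends into the branches of a conditional.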

    You can combine evaluation contexts with an in-hole form to do context-based matching in reduction relations. Adding that changes the above reduction relation to this one:

    (define simple-red-v1
      (reduction-relation simple-language
        #:domain e
        ;; note the in-hole that's added
        (--> (in-hole E (add v_1 v_2))
             (in-hole E ,(+ (term v_1) (term v_2))))
        (--> (in-hole E (if 1 e_t e_f))
             (in-hole E e_t))
        (--> (in-hole E (if 0 e_t e_f))
             (in-hole E e_f))))

    With this new relation, the test from before will succeed:

    (test-->> simple-red-v1 (term (add (if 0 (add 3 4) (add 18 2)) 5)) (term 25))

    This is because the in-hole pattern will match an evaluation context whose hole is substituted with a term that will be reduced.

    To make that a bit more concrete, the test above would execute a pattern match like this:

    > (redex-match
       simple-language
       (in-hole E (if 0 e_1 e_2))
       (term (add (if 0 (add 3 4) (add 18 2)) 5)))
    (list
     (match
      (list
       (bind 'E '(add hole 5))
       (bind 'e_1 '(add 3 4))
       (bind 'e_2 '(add 18 2)))))

    This interaction shows that the evaluation context pattern E gets matched to an add term with a hole. Inside that hole is an if expression, which is where the reduction will happen.

    Evaluation contexts are powerful in that you can describe all of these nested computations with straightforward rules that only mention the inner term being reduced. They are also useful for describing more complicated language features, such as mutation and state.

    With that brief tour of Redex, the next section will give a high-level overview of a model of wasm in Redex.

    WebAssembly Design via Redex

    WebAssembly is an interesting language because its design goals are specifically oriented towards web programming. That means, for example, that programs should be compact so that web browsers don’t have to download and process large scripts.

    Security is of course also a major concern on the web, so the language is designed with isolation in mind to ensure that programs cannot access unnecessary state or interfere with the runtime system’s data.

    The desire for “a compact program representation” (Haas et al.) led to a stack-based design, in contrast to the simple nested expression language I gave as an example earlier.

    This means that operations take values from the stack, rather than nesting them in a tree-like structure. For example, instead of an expression like (add 3 4), wasm would use something like (i32.const 3) (i32.const 4) add. This sequence of instructions pushes two constants onto the stack, and then performs an addition with values popped from the stack (and then pushes the result).
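    To illustrate this stack discipline, here is a Python sketch (my own encoding, not from the paper or the Redex model) that executes that flat instruction sequence on an explicit operand stack:

```python
# Sketch: a tiny stack machine for const/add instruction sequences.

def run(instrs):
    stack = []
    for op in instrs:
        if op[0] == 'i32.const':
            stack.append(op[1])      # push the constant
        elif op[0] == 'add':
            b = stack.pop()          # operands are popped from the stack...
            a = stack.pop()
            stack.append(a + b)      # ...and the result is pushed back
    return stack

run([('i32.const', 3), ('i32.const', 4), ('add',)])   # → [7]
```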

    Wasm is also statically typed with a very simple type system. The validation provided by the type system ensures that you can statically know the stack layout at any point in the program, which means that you can’t have programs that access out-of-bounds positions in the stack (or the wrong registers, if you compile stack accesses to register accesses).
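    The core of that guarantee can be sketched in a few lines of Python (a toy illustration of mine, not wasm's actual validation algorithm): walk a straight-line instruction sequence tracking the stack height, and reject any program that would pop from an empty stack.

```python
# Sketch: statically checking stack effects of a straight-line sequence.

def validate(instrs):
    height = 0
    for op in instrs:
        if op[0] == 'i32.const':
            height += 1                          # const pushes one value
        elif op[0] in ('add', 'sub', 'mul'):
            if height < 2:                       # binops pop two, push one
                return False
            height -= 1
        elif op[0] == 'drop':
            if height < 1:                       # drop pops one value
                return False
            height -= 1
    return True

validate([('i32.const', 3), ('i32.const', 4), ('add',)])   # → True
validate([('i32.const', 3), ('add',)])                     # → False
```

    The real type system does more (types, blocks, branches), but the principle is the same: the stack layout at every program point is known before the program ever runs.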

    By understanding the semantics, either in math or in model code, you can see how these security concerns are addressed operationally. For example, later in this section I’ll explain how the operational semantics demonstrates wasm’s memory isolation.

    To provide a starting point for explaining the Redex model, here’s a screenshot of the grammar of wasm from Haas et al.’s paper:

    wasm grammar
    Grammar from Haas et al., CC-BY 4.0

    This formal grammar can be transcribed straightforwardly as a BNF-style language definition, as explained in the previous section. The grammar in Redex looks something like this:

    (define-language wasm-lang
      ;; basic types
      (t   ::= t-i t-f)
      (t-f ::= f32 f64)
      (t-i ::= i32 i64)
      ;; function types
      (tf  ::= (-> (t ...) (t ...)))
      ;; instructions (excerpted)
      (e-no-v ::= drop                      ; drop stack value
                  select                    ; select 1 of 2 values
                  (block tf e*)             ; control block
                  (loop tf e*)              ; looping
                  (if tf e* else e*)        ; conditional
                  (br i)                    ; branch
                  (br-if i)                 ; conditional branch
                  (call i)                  ; call (function by index)
                  (call cl)                 ; call (a closure)
                  return                    ; return from function
                  (get-local i)             ; get local variable
                  (set-local i)             ; set local variable
                  (label n {e*} e*)         ; branch target
                  (local n {i (v ...)} e*)  ; function body instruction
                  ...)                      ; and so on
      ;; instructions including constant values
      (e    ::= e-no-v
                (const t c))
      (c    ::= number)
      ;; sequences of instructions
      (e*   ::= ϵ
                (e e*))
      ;; various kinds of indices
      ((i j l k m n a o) integer)
      ;; modules and various other forms omitted
      )

    This shows just a subset of the grammar, but you can see there’s a close correspondence to the math. The main differences are just in the surface syntax, such as how expressions are ordered or nested.

    Using this syntax, the addition example from before inhabits the e* non-terminal for sequences of instructions. Following the grammar, it would be written as ((const i32 3) ((const i32 4) (add ϵ))). This instruction sequence is represented as a nested list (where ϵ is the empty list) rather than a flat sequence to make it easier to manipulate in Redex.
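    To see how the nested encoding relates to a flat sequence, here is a Python sketch (my own helper, using () to play the role of ϵ) that builds the nested form from a flat instruction list:

```python
# Sketch: flat instruction list → ϵ-terminated nested (e e*) pairs.

def to_nested(instrs):
    """Build the nested form by consing from the right, starting from ()."""
    seq = ()
    for op in reversed(instrs):
        seq = (op, seq)
    return seq

to_nested(['(const i32 3)', '(const i32 4)', 'add'])
# → ('(const i32 3)', ('(const i32 4)', ('add', ())))
```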

    We also need to define evaluation contexts for this language. This is thankfully pretty simple:

    (E ::= hole
           (v E)
           ((label n (e*) E) e*))

    What these contexts mean is that you either look for a matching sequence of instructions that comes after a nested (possibly empty) prefix of values v, or you look to reduce inside a label form (or some combination of these two patterns).

    Unlike the simple language from earlier, wasm has side effects, memory, modules, and other features. Because of this complexity, the machine states of wasm have to include a store (the non-terminal s), a call frame (F), a sequence of instructions (e*), and an index for the current module instance (i).

    As a result, the reduction relation becomes more complicated. The structure of the relation looks like this:

    (define wasm->
      (reduction-relation wasm-lang
       ;; the machine states
       #:domain (s F e* i)
       ;; a subset of reduction rules shown below
       ;; rule for binary operations, like add or sub
       (++> ;; this pattern matches two consts on the stack
            (in-hole E ((const t c_1) ((const t c_2) ((binop t) e*))))
            ;; the two consts are consumed, and replaced with a result const
            (in-hole E ((const t c) e*))
            ;; do-binop is a helper function for executing the operations
            (where c (do-binop binop t c_1 c_2)))
       ;; rule for just dropping a stack value
       (++> (in-hole E (v (drop e*))) ; consumes one v = (const t c)
            (in-hole E e*))           ; return remaining instructions e*
       ;; more rules would go here
       with
       ;; shorthand reduction definitions
       [(--> (s F x i)
             (s F y i))
        (++> x y)]))

    Again, we have the #:domain keyword indicating what the machine states are. Then there are two rules for binary operations and the drop instruction (other rules omitted for now).

    The rules use a shorthand form ++> (defined at the bottom) that matches only the instruction sequence and ignores the store, call frame, and so on. This is just used to simplify how the rules look, to match the paper.

    For comparison, here’s a screenshot of the full reduction semantics figure from the wasm paper:

    wasm reduction rules
    Reduction relation from Haas et al., CC-BY 4.0

    You can look at the 3rd and 12th rules in that figure and compare them to the two in the code excerpt above. You can see that there’s a close correspondence. In this fashion, you can transcribe all the rules from math to code, though you also have to write a significant amount of helper code to implement the side conditions in the rules.

    As I promised earlier, by inspecting the semantics, you can see how the language isolates the memory of wasm scripts that are executing. The store term s, whose grammar I omitted earlier, is defined like this:

      ;; stores: contains modules, tables, memories
      (s       ::= {(inst modinst ...)
                    (tab tabinst ...)
                    (mem meminst ...)})
      ;; modules with a list of closures cl, global values v,
      ;; table index and memory index
      (modinst ::= {(func cl ...) (glob v ...)}
                   ;; tab and mem are optional, hence all these cases
                   {(func cl ...) (glob v ...) (tab i)}
                   {(func cl ...) (glob v ...) (mem i)}
                   {(func cl ...) (glob v ...) (tab i) (mem i)})
      ;; tables are lists of closures
      (tabinst ::= (cl ...))
      ;; memories are lists of bytes
      (meminst ::= (b ...))

    The store is a structure containing some module, table, and memory instances. These are basically several kinds of global state that the instructions need to reference to accomplish various non-local operations, such as memory access, global variable access, function calls, and so on.

    Each module instance contains a list of functions (stored as closures cl), a list of global variables, and optionally indices specifying tables and memories associated with the module.

    The functions and global variables have an obvious purpose: they allow function calls to fetch the code for a function, and they let global variables be read and written.

    The table index inside a module allows dynamic dispatch to functions stored in a shared table of closures. This enables a function-pointer-like dispatch pattern without the dangers of pointers into memory.
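    The idea can be sketched in Python (hypothetical names, not the model's actual code): the table is just a list of closures, indices are bounds-checked, and there is no way to jump to an address in raw memory.

```python
# Sketch: dynamic dispatch through a table of closures, wasm-style.

def call_indirect(table, i, *args):
    if not (0 <= i < len(table)):
        raise RuntimeError("trap: undefined table element")   # bad index traps
    return table[i](*args)                                    # valid index calls a closure

table = [lambda x: x + 1, lambda x: x * 2]
call_indirect(table, 1, 10)   # → 20
```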

    Finally, each module may declare an associated memory via an index, which means different modules can share access to the same memory if desired.

    This index number indexes into the list of memory instances (mem meminst ...), each of which is just represented as a list of bytes (b ...). The memory can be used for loads and stores to represent structured data or for other uses of memory.

    The Redex code for memory load and store rules look like this:

       ;; reductions for operating on memory
       (--> ;; one stack argument for address in memory, k
            (s F (in-hole E ((const i32 k) ((load t a o) e*))) i)
            ;; reinterpret bytes as the appropriately typed data
            (s F (in-hole E ((const t (const-reinterpret t (b ...))) e*)) i)
            ;; helper function fetches bytes from appropriate memory in s
            (where (b ...) (store-mem s i ,(+ (term k) (term o)) (sizeof t))))
       (--> ;; stack arguments for address k and new value c
            (s F (in-hole E ((const i32 k) ((const t c) ((store t a o) e*)))) i)
            ;; installs a new store s_new with the memory changed
            (s_new F (in-hole E e*) i)
            ;; the size in bytes for the given type
            (where n (sizeof t))
            ;; helper function modifies the memory in s, creating s_new
            (where s_new (store-mem= s i ,(+ (term k) (term o)) n (bits n t c))))

    These rules are a little more complicated and rely on various helper functions. The helper functions are not too complicated, however, and basically just do the appropriate indexing into the store data structure.

    One key difference from the earlier rules is the use of --> without any shorthands. This allows the rules to reference the store s to access parts of the global machine state. In this case, it’s the memory part of the store that’s needed.

    From this, you can see that wasm code only ever touches the appropriate memory instance that’s associated with the module that the code is in. More specifically, that’s because the store lookup is done using the current module index i. No other memories can be accessed via this lookup since this index is fixed for a given function definition.

    All memory accesses are also bounds-checked to avoid access to arbitrary regions of memory that might interfere with the runtime system and result in security vulnerabilities. The bounds checking is done inside the store-mem and store-mem= helper functions, which will fail to match in the where clauses if the index k is out of bounds.
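    As a rough Python analogue of these helpers (the names and behavior here are my assumptions, not the model's exact code), a memory instance can be modeled as a byte array with bounds-checked loads and stores that reinterpret the raw bytes:

```python
import struct

# Sketch: a bounds-checked linear memory with i32 load/store.

def load_i32(mem, addr):
    if addr < 0 or addr + 4 > len(mem):
        raise RuntimeError("trap: out-of-bounds access")   # check, then read
    return struct.unpack('<i', bytes(mem[addr:addr + 4]))[0]

def store_i32(mem, addr, value):
    if addr < 0 or addr + 4 > len(mem):
        raise RuntimeError("trap: out-of-bounds access")   # check, then write
    mem[addr:addr + 4] = struct.pack('<i', value)

mem = bytearray(16)
store_i32(mem, 0, 1234)
load_i32(mem, 0)   # → 1234
```

    Out-of-bounds accesses fail with a trap rather than touching memory outside the instance, which mirrors how the Redex rules simply fail to match.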

    You can get an idea of how wasm is designed with web security in mind from this particular rule and how the store is designed. If you look at other rules, you can also see how global variables and local variables are kept separate from general memory access as well, which prevents memory access from interfering with variables and vice versa.

    In addition, code (e.g., function definitions, indirectly accessed functions in tables) is not stored in the memory either, which prevents arbitrary code execution or modification. You can see this from how the function closures are stored in separate function and table sections in modules, as explained above.

    Where to go from here

    The last section gave an overview of some interesting parts of the wasm semantics, with a focus on understanding some of the isolation guarantees that the design provides.

    This is just a small taste of the full semantics, which you can understand more comprehensively by reading the paper or the execution part of the language specification. The paper’s a very interesting read, with lots of attention paid to explaining design rationales.

    You can also try examples out with the basic Redex model that I’ve built, though with the caveat that it’s not quite complete. For example, I didn’t implement the type system and there are some rough edges around the numeric operations.

    There are also interesting ways in which the model could be extended.

    For one, it could cover the full semantics that’s described in the spec instead of the small version in the paper. If you combined that with a parser for wasm’s surface syntax, you could run real wasm programs that a web browser would understand and trace through the reductions.

    Wasm’s reference interpreter also comes with a lot of tests. With a proper parser, we could feed in all the tests to make sure the model is accurate (I’m sure there are bugs in my model).

    It would then be interesting to do random testing to discover if there are discrepancies between the specification-based semantics encoded in the executable model and implementations in various web browsers.

    If anyone’s interested in exploring any of that, you can check out the code I’ve written so far on GitHub.

    Appendix: More Redex Discussion

    This section is an appendix of more in-depth discussion about using Redex specifically, if you’re interested in some of the nitty-gritty details.

    There were several challenges involved in actually modeling wasm in Redex. The first challenge I bumped into is that the wasm model’s reduction rules operate on sequences of values, sometimes even producing an empty sequence as a result.

    This actually makes encoding into Redex a little tricky, as Redex assumes that evaluation contexts are plugged in with a single datum and not a splicing sequence of them. The solution to this, which you can see in the rules shown above, is to explicitly match over the “remaining” sequence of expressions in the instruction stack and handle it explicitly (either discarding it or appending to it).

    Then evaluation contexts can just be plugged with a sequence of instructions or values.

    This requires a few cosmetic modifications to the grammar and rules when compared to the paper. For example, the wasm paper’s evaluation contexts simply define a basic context as (v ... hole e ...) in which the hole could be plugged with a general e ....

    Instead, the Redex model uses a context like (v E), where E can be a hole or another nesting level. A hole is then plugged with an e* (defined as a list) representing both the term to plug into the hole in the original rule and the “remaining” e ... expressions.

    The rules also need to explicitly handle cons-ing and appending of expression sequences. In a sense, this is just making explicit what was implicit in the ... and juxtaposition in the paper’s rules.

    Another challenge was the indexed evaluation contexts of the paper. The paper’s contexts are indexed by a nesting depth, which constrains the kinds of contexts that a rule can apply to. In Redex it’s not possible to add such data-indexed constraints into grammars, so you end up having to apply extra side conditions in the reduction rules where such indexed contexts are used.

    For example, this shows in the rule for branching from inside of a control construct. The evaluation context E below is indexed by a nesting depth k in the paper’s rules:

    (==> ((label n {e*_0} (in-hole E ((br j) e*_1))) e*_2)
         (e*-append (in-hole E_v e*_0) e*_2)
         (where j (label-depth E))
         (where (E_outer E_v) (v-split E n)))

    Whereas in the code above, the nesting depth is checked with a metafunction label-depth.

    Finally, the reduction relation in wasm has a rule for operating on the local form (used for function invocations) that doesn’t follow the structure of typical evaluation-context-based reduction.

    Specifically, the rule states that when a model state reduces as:

    s; v*; e* →_i s'; v'*; e'*,

    then you can reduce under a function’s local expression as follows:

    s; v_0*; local_n {i; v*} e* →_j s'; v_0*; local_n {i; v'*} e'*.

    That is, you can reduce under a local if you swap out the call frame and are able to reduce under the swapped out call frame.

    I think this can’t be expressed using a normal evaluation context, but it’s still possible to express in Redex as a recursive call to the reduction relation being defined. Basically, you have to call apply-reduction-relation on the right-hand side of the rule if you encounter a local form.

    To make debugging easier, you also need to use apply-reduction-relation/tag-with-names and the computed-name form for the rule name to make sure the traces form shows the right rule names.

    Here’s what the reduction rule for the local case looks like:

    ;; specifies how to reduce inside a local/frame instruction via a
    ;; recursive use of the reduction relation
    (--> (s_0 F_0 (in-hole E ((local n {i F_1} e*_0) e*_2)) j)
         (s_1 F_0 (in-hole E ((local n {i F_2} e*_1) e*_2)) j)
         ;; apply --> recursively
         (where any_rec
                ,(apply-reduction-relation/tag-with-names
                  wasm-> (term (s_0 F_1 e*_0 i))))
         ;; only apply this rule if this reduction was valid
         (side-condition (not (null? (term any_rec))))
         ;; the relation should be deterministic, so just take the first
         (where (string_tag (s_1 F_2 e*_1 i)) ,(first (term any_rec)))
         (computed-name (term string_tag)))

    by Asumu Takikawa at April 29, 2019 08:12 PM

    April 26, 2019

    Samuel Iglesias

    Igalia coding experience open positions

    The Igalia Coding Experience is an internship program which gives students their first exposure to the professional world, working hand in hand with Igalia programmers and learning from them. The internship is aimed at students with a background in Computer Science, Information Technology, or Free Software development.

    This program is a great opportunity for students willing to improve their technical skills by working in the field, learn how to contribute to open-source projects, and work together with the engineers of Igalia, a worker-owned company that has been rocking the Free Software world for more than 18 years!


    We are looking for candidates that are passionate about Free Software philosophy and willing to work on Free Software projects. If you have already contributed to any Free Software project related to our areas of specialization… that’s great! But don’t worry if you have not yet, we encourage you to apply as well!

    The conditions of the program are the following:

    • You will be mentored by an Igalian who is an expert in the respective field, so you are not going to be alone.
    • You will need to spend 450 hours working on the tasks agreed with your mentor, but you are free to distribute them throughout the year as best fits you. Students usually prefer a timetable of 3 months working full-time, 6 months part-time, or even 1 year working 10 hours per week!
    • You are not going to do it for free. We will compensate you with 6500€ for all your work :)

    This year we are offering Coding experience positions on 6 different areas:

    • Implementation of web standards. The intern will become familiar with, and contribute to, the implementation of W3C standards in open source web engines.

    • WebKit, one of the most important open source web rendering engines. The intern will have the opportunity to help maintain and/or contribute to the development of new features.

    • Chromium, a well-known browser rendering engine. The intern will work on specific features development and/or bug-fixing. Additional tasks may include maintenance of our internal buildbots, and creation of Chromium/Wayland packages for distribution.

    • Compilers, with focus on WebAssembly and JavaScript implementations. The intern will contribute to JS engines like V8 or JSC, working on new language features, optimizations, or ports.

    • Multimedia and GStreamer, the leading open source multimedia framework. The intern will help develop the Video Editing stack in GStreamer (namely GES and NLE). This work will include adding new features in any part of GStreamer, GStreamer Editing Services or in the Pitivi video editor, as well as fixing bugs in any of those components.

    • Open-source graphics stack. The student will work on the development of specific features in Mesa or on improving any of the open-source testing suites (VkRunner, piglit) used in the Mesa community. Candidates who would like to propose topics of interest to work on should include them in their cover letter.

    The last area is the one I have been working on for more than 5 years inside the Graphics team at Igalia, and I am thrilled we can offer this kind of position this year :-)

    You can find more information about the Igalia Coding Experience program on the website… don’t forget to apply for it! Happy hacking!

    April 26, 2019 06:30 AM

    April 22, 2019

    Thibault Saunier

    GStreamer Editing Services OpenTimelineIO support

    GStreamer Editing Services OpenTimelineIO support

    OpenTimelineIO is an Open Source API and interchange format for editorial timeline information; it basically allows some form of interoperability between the different post-production video editing tools. It is being developed by Pixar, and several other studios are contributing to the project, allowing it to evolve quickly.

    We, at Igalia, recently landed support for the GStreamer Editing Services (GES) serialization format in OpenTimelineIO, making it possible to convert GES timelines to any format supported by the library. This is extremely useful for integrating GES into existing post-production workflows, as it allows projects in any format supported by OpenTimelineIO to be used in the GStreamer Editing Services and vice versa.

    On top of that, we are building a GESFormatter that allows us to transparently handle any file format supported by OpenTimelineIO. In practice, it will be possible to use cuts produced by other video editing tools in any project using GES, for instance in Pitivi.

    At Igalia we are aiming to make GStreamer ready to be used in existing video post-production pipelines, and this work is one step in that direction. We are working on additional features in GES to fill the gaps toward that goal; for instance, we are now implementing nested timeline support and framerate-based timestamps in GES. Once implemented, those features will enhance compatibility with video editing projects created in other NLE software through OpenTimelineIO. Stay tuned for more information!

    by thiblahute at April 22, 2019 03:21 PM

    April 14, 2019

    Javier Muñoz

    Ceph Days Galicia 2019

    The second Ceph Days Galicia took place last Wednesday in Santiago de Compostela. It was organized by AMTEGA in collaboration with Red Hat, Supermicro, Colabora Ingenieros, Mellanox, Dinahosting, Aitire and Igalia.

    I presented in detail the new archive zone functionality available in Ceph Nautilus. The slides I used in the talk are available here.

    If you could not attend and are interested in the topics we talked about, you can read more about the event here. Félix and Camilo have also published a blog post in Spanish about the event.

    Thanks to all the people who participated in the organization and actively collaborated to make the event possible. See you at the next one!


    by Javier at April 14, 2019 10:00 PM

    April 08, 2019

    Philippe Normand

    Introducing WPEQt, a WPE API for Qt5

    WPEQt provides a QML plugin implementing an API very similar to the QWebView API. This blog post explains the rationale behind this new project aimed for QtWebKit users.

    Qt5 already provides multiple WebView APIs, one based on QtWebKit (deprecated) and one based on QWebEngine (aka Chromium). WPEQt aims to provide a viable alternative to the former. QtWebKit is being retired and has by now lagged a lot behind upstream WebKit in terms of features and security fixes. WPEQt can also be considered as an alternative to QWebEngine but bear in mind the underlying Chromium web-engine doesn’t support the same HTML5 features as WebKit.

    WPEQt is included in WPEWebKit, starting from the 2.24 series. Bugs should be reported in WebKit’s Bugzilla. WPEQt’s code is published under the same licenses as WPEWebKit, the LGPL2 and BSD.

    At Igalia we have compared WPEQt and QtWebKit using the BrowserBench tests. The JetStream1.1 results show that WPEQt completes all the tests twice as fast as QtWebKit. The Speedometer benchmark doesn’t even finish due to a crash in the QtWebKit DFG JIT. Although the memory consumption looks similar in both engines, the upstream WPEQt engine is well maintained and includes security bug-fixes. Another advantage of WPEQt compared to QtWebKit is that its multimedia support is much stronger, with specs such as MSE, EME and media-capabilities being covered. WebRTC support is coming along as well!

    So to everybody still stuck with QtWebKit in their apps and not yet ready (or reluctant) to migrate to QtWebEngine, please have a look at WPEQt! The remainder of this post explains how to build and test it.

    Building WPEQt

    For the time being, WPEQt only targets Linux platforms using graphics drivers compatible with wayland-egl. Therefore, the end-user Qt application has to use the wayland-egl Qt QPA plugin. Under certain circumstances the EGLFS QPA might also work, YMMV.

    Using a SVN/git WebKit snapshot

    If you have a SVN/git development checkout of upstream WebKit, then you can build WPEQt with the following commands on a Linux desktop platform:

    $ Tools/wpe/install-dependencies
    $ Tools/Scripts/webkit-flatpak --wpe --wpe-extension=qt update
    $ Tools/Scripts/build-webkit --wpe --cmakeargs="-DENABLE_WPE_QT=ON"

    The first command will install the main WPE host build dependencies. The second command will setup the remaining build dependencies (including Qt5) using Flatpak. The third command will build WPEWebKit along with WPEQt.

    Using the WPEWebKit 2.24 source tarball

    This procedure is already documented in the WPE Wiki page. The only change required is the new CMake option for WPEQt, which needs to be explicitly enabled as follows:

    $ cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DENABLE_WPE_QT=ON -GNinja

    Then, invoke ninja, as documented in the Wiki.

    Using Yocto

    At Igalia we’re maintaining a Yocto overlay for WPE (and WebKitGTK). It was tested for the rocko, sumo and thud Yocto releases. The target platform we tested so far is the Zodiac RDU2 board, which is based on the Freescale i.MX6 QuadPlus SoC. The backend we used is WPEBackend-fdo which fits very naturally in the Mesa open-source graphics environment, inside Weston 5. The underlying graphics driver is etnaviv. In addition to this platform, WPEQt should also run on Raspberry Pi (with the WPEBackend-rdk or -fdo). Please let us know how it goes!

    To enable WPEQt in meta-webkit, the qtwpe option needs to be enabled in the wpewebkit recipe:

    PACKAGECONFIG_append_pn-wpewebkit = " qtwpe"

    The resulting OS image can also include WPEQt’s sample browser application:

    IMAGE_INSTALL_append = " wpewebkit-qtwpe-qml-plugin qt-wpe-simple-browser"

    Then, on device, the sample application can be executed either in Weston:

    $ qt-wpe-simple-browser -platform wayland-egl

    Or with the EGLFS QPA:

    $ # stop weston
    $ qt-wpe-simple-browser -platform eglfs

    Using WPEQt in your application

    A sample MiniBrowser application is included in WebKit, in the Tools/MiniBrowser/wpe/qt directory. If you have a desktop build of WPEQt you can launch it with the following command:

    $ Tools/Scripts/run-qt-wpe-minibrowser -platform wayland <url>

    Here’s the QML code used for the WPEQt MiniBrowser. As you can see it’s fairly straightforward!

    import QtQuick 2.11
    import QtQuick.Window 2.11
    import org.wpewebkit.qtwpe 1.0

    Window {
        id: main_window
        visible: true
        width: 1280
        height: 720
        title: qsTr("Hello WPE!")

        WPEView {
            url: initialUrl
            focus: true
            anchors.fill: parent
            onTitleChanged: {
                main_window.title = title;
            }
        }
    }
    As explained in this blog post, WPEQt is a simple alternative to QtWebKit. Migrating existing applications should be straightforward because the API provided by WPEQt is very similar to the QWebView API. We look forward to hearing your feedback or inquiries on the webkit-wpe mailing list and you are welcome to file bugs in Bugzilla.

    I wouldn’t want to close this post without acknowledging the support of my company Igalia and of Zodiac; many thanks to them!

    by Philippe Normand at April 08, 2019 10:20 AM

    March 28, 2019

    Jacobo Aragunde

    The Chromium startup process

    I’ve been investigating the process of Chromium startup, the classes involved and the calls exchanged between them. This is a summary of my findings!

    There are several implementations of a browser living inside Chromium source code, known as “shells”. Chrome is the main one, of course, but there are other implementations like the content_shell, a minimal browser designed to exercise the content API; the app_shell, a minimal container for Chrome Apps, and several others.

    To investigate the differences between the shells, we can start by checking the binary entry point and finding out how execution evolves. This is a sequence diagram that starts from the content_shell main() function:

    content_shell and app_shell sequence diagram

    It creates two objects, ShellMainDelegate and ContentMainParams, then hands control to ContentMain() as implemented in the content module.

    Chrome’s main() is very similar: it also creates a couple of objects and then hands control to ContentMain(), following exactly the same code path from that point onward:

    Chrome init sequence diagram

    If we took a look at the app_shell, it would be very similar, and it’s probably the same for other shells, so where’s the magic? What’s the difference between the many shells in Chromium? The key is the implementation of that first object created in the main() function:

    ContentMainDelegate class diagram

    Those *MainDelegate objects created in main() are implementations of ContentMainDelegate. This delegate will get the control in key moments of the initialization process, so the shells can customize what happens. Two important events are the calls to CreateContentBrowserClient and CreateContentRendererClient, which will enable the shells to customize the behavior of the Browser and Render processes.
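    To make the hand-off concrete, here is a toy sketch of the delegate pattern involved. It is written in Python for brevity; the real ContentMainDelegate, ContentBrowserClient and ContentMain() are C++ classes with far richer APIs, so treat the names here as purely illustrative:

```python
class ContentMainDelegate:
    """Hook interface each shell implements (heavily simplified)."""
    def create_content_browser_client(self):
        raise NotImplementedError

class ShellContentBrowserClient:
    """Stand-in for content_shell's browser-process customization."""
    name = "content_shell browser client"

class ShellMainDelegate(ContentMainDelegate):
    # The shell plugs its own client in at this key moment of startup.
    def create_content_browser_client(self):
        return ShellContentBrowserClient()

def content_main(delegate):
    # Shared startup path: the content module drives initialization and
    # calls back into the delegate so each shell can customize behavior.
    client = delegate.create_content_browser_client()
    return client

client = content_main(ShellMainDelegate())
print(client.name)  # content_shell browser client
```

    Swapping in a different delegate (Chrome’s, app_shell’s, …) changes the clients that get created, while the startup path itself stays shared.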

    ContentBrowserClient class diagram

    The diagram above shows how the ContentMainDelegate implementations provided by the different shells each instantiate their own implementation of ContentBrowserClient. This class runs in the UI thread and is able to customize the browser logic: its API can enable or disable certain parameters (e.g. AllowGpuLaunchRetryOnIOThread), provide delegates on certain objects (e.g. GetWebContentsViewDelegate), etc. A remarkable responsibility of ContentBrowserClient is providing an implementation of BrowserMainParts, which runs code in certain stages of the initialization.

    There is a parallel hierarchy of ContentRendererClient classes, which works analogously to what we’ve just seen for ContentBrowserClient:

    ContentRendererClient class diagram

    The specific case of extensions::ShellContentRendererClient is interesting because it contains the details to setup the extension API:

    ShellContentRendererClient class diagram

    The purpose of both ExtensionsClient and ExtensionsRendererClient is to set up the extensions system. The difference lies in ExtensionsRendererClient’s knowledge of the renderer process and its concepts: only methods that make use of this knowledge should be there; otherwise they should be part of ExtensionsClient, which already has a much bigger API.
    The specific implementation of ShellExtensionsRendererClient is very simple, but it owns an instance of extensions::Dispatcher; this is an important class that sets up extension features on demand whenever necessary.

    The investigation may continue in different directions, and I’ll try to share more reports like this one. Finally, these are the source files for the diagrams and a shared document containing the same information as this report, where any comments, corrections and updates are welcome!

    by Jacobo Aragunde Pérez at March 28, 2019 05:03 PM

    March 27, 2019

    Michael Catanzaro

    Epiphany 3.32 and WebKitGTK 2.24

    I’m very pleased to (belatedly) announce the release of Epiphany 3.32 and WebKitGTK 2.24. This Epiphany release contains far more changes than usual, while WebKitGTK continues to improve steadily as well. There are a lot of new features to discuss, so let’s dive in.

    Dazzling New Address Bar

    The most noticeable change is the new address bar, based on libdazzle’s DzlSuggestionEntry. Christian put a lot of effort into designing this search bar to work for both Builder and Epiphany, and Jan-Michael helped integrate it into Epiphany. The result is much nicer than we had before:

    The address bar is a central component of the user interface, and this clean design is important to provide a quality user experience. It should also leave a much better first impression than we had before.

    Redesigned Tabs Menu

    Epiphany 3.24 first added a tab menu at the end of the tab bar. This isn’t very useful if you have only a few tabs open, but if you have a huge number of tabs then it’s useful to help navigate through them. Previously, this menu only showed the page titles of the tabs. For 3.32, Adrien has converted this menu to a nice popover, including favicons, volume indicators, and close buttons. These enhancements were primarily aimed at making the browser easier to use on mobile devices, where there is no tab bar, but they’re a nice improvement for desktop users, too.

    (On mobile, the tab rows are much larger, to make touch selection easier.)

    Touchpad Gestures

    Epiphany now supports touchpad gestures. Jan-Michael first added a three-finger swipe to Epiphany, for navigating back and forward. Then Alexander (Exalm) decided to go and rewrite it, pushing the implementation down into WebKit to share as much code as possible with Safari. The end result is a two-finger swipe. This was much more involved than I expected as it required converting a bunch of Apple-specific Objective C++ code into cross-platform C++, but the end result was worth the effort:

    Applications that depend on WebKitGTK 2.24 may opt-in to these gestures using webkit_settings_set_enable_back_forward_navigation_gestures().

    Alexander also added pinch zoom.

    Variable Fonts

    Carlos Garcia decided to devote some attention to WebKit’s FreeType font backend, and the result speaks for itself:

    Emoji 🦇

    WebKit’s FreeType backend has supported emoji for some time, but there were a couple problems:

    • Most emoji combinations were not supported, so while characters like 🧟 (zombie) would work just fine, characters like 🧟‍♂️ (man zombie) and 🧟‍♀️ (woman zombie) were broken. Carlos fixed this. (Technically, only emoji combinations using a certain character code were broken, but that was most of them.)
    • There was no code to prefer emoji fonts for rendering emoji, meaning emoji would almost always be displayed in non-ideal fonts, usually DejaVu, resulting in a black and white glyph rather than color. Carlos fixed this, too. This seems to work properly in Firefox on some websites but not others, and it’s currently WONTFIXed in Chrome. It’s good to see WebKit ahead of the game, for once. Note that you’ll see color on this page regardless of your browser, because WordPress replaces the emoji characters with images, but I believe only WebKit can handle the characters themselves. You can test your browser here.
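    For the curious, the combinations in question are Unicode ZWJ sequences: the gendered variants join the base character to a gender sign with a zero-width joiner (presumably the “certain character code” alluded to above). A quick Python illustration, not WebKit code:

```python
ZWJ = "\u200D"          # ZERO WIDTH JOINER
zombie = "\U0001F9DF"   # 🧟

# 🧟‍♂️ = zombie + ZWJ + MALE SIGN + variation selector 16 (emoji style)
man_zombie = zombie + ZWJ + "\u2642\uFE0F"
# 🧟‍♀️ = zombie + ZWJ + FEMALE SIGN + VS16
woman_zombie = zombie + ZWJ + "\u2640\uFE0F"

# One visible glyph, but four code points under the hood:
print(len(man_zombie))  # 4
```

    A font backend has to recognize the whole sequence and render it as a single glyph rather than falling back to drawing each code point separately.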

    Improved Adaptive Mode

    Adrien has continued to improve adaptive mode, first introduced in 3.30, to ensure Epiphany works well on mobile devices. 3.32 is the first release to depend on libhandy. Adrien has converted various portions of the UI to use libhandy widgets.

    Reader Mode

    Jan-Michael’s reader mode has been available since 3.30, but new to 3.32 are many style improvements and new preferences to choose between dark and light theme, and between sans and serif font, thanks to Adrian (who is, confusingly, not Adrien). The default, sans on light background, still looks the best to me, but if you like serif fonts or dark backgrounds, now you can have them.

    JPEG 2000

    Wait, JPEG 2000? The obscure image standard not supported by Chrome or Firefox? Why would we add support for this? Simple: websites are using it. A certain piece of popular server-side software is serving JPEG 2000 images in place of normal JPEGs and even in place of PNG images to browsers with Safari-style user agents. (The software in question doesn’t even bother to change the file extensions. We’ve found far too many images in the wild ending in .png that are actually JPEG 2000.) Since this software is used on a fairly large number of websites, and our user agent is too fragile to change, we decided to support JPEG 2000 in order to make these websites work properly. So Carlos has implemented JPEG 2000 support, using the OpenJPEG library.

    This isn’t a happy event for the web, because WebKit is only as secure as its least-secure dependency, and adding new obscure image formats is not a step in the right direction. But in this case, it is necessary.

    Mouse Gestures

    Experimental mouse gesture support is now available, thanks to Jan-Michael, if you’re willing to use the command line to enable it:

    $ gsettings set org.gnome.Epiphany.web:/org/gnome/epiphany/web/ enable-mouse-gestures true

    With this, I find myself closing tabs by dragging the mouse down and then to the right. Down and back up will reload the tab. Straight to the left is Back, straight to the right is Forward. Straight down will open a new tab. I had originally hoped to use the right mouse button for this, as in Opera, but turns out there is a difference in context menu behavior: whereas Windows apps normally pop up the context menu on button release, GTK apps open the menu on button press. That means the context menu would appear at the start of every mouse gesture. And that is certainly no good, so we’ve opted to use the middle mouse button instead. We aren’t sure whether this is a good state of things, and need your feedback to decide where to go with this feature.

    Improved Fullscreen Mode

    A cool side benefit of using libdazzle is that the header bar is now available in fullscreen mode by pressing the mouse towards the top of the screen. There’s even a nice animation to show the header bar sliding up to the top of the screen, so you know it’s there (animation disabled for fullscreen video).

    The New Tab Button

    Some users were disconcerted that the new tab button would jump from the end of the tab bar (when multiple tabs are open) back up to the end of the header bar (when there is only one tab open). Now this button will remain in one place: the header bar. Since it will no longer appear in the tab bar, Jan-Michael has moved it back to the start of the header bar, where it was from 3.12 through 3.22, rather than the end. This is mostly arbitrary, but makes for a somewhat more balanced layout.

    The history of the new tab button is rather fun: when the new tab button was first added in 3.8, it was added at the end of the header bar, but moved to the start in 3.12 to be more consistent with gedit, then moved back to the end in 3.24 to reduce the distance it would need to move to reach the tab bar. So we’ve come full circle here, twice. Only time will tell if this nomadic button will finally be able to stay put.

    New Icon

    Yes, most GNOME applications have a new icon in 3.32, so Epiphany is not special here. But I just can’t resist the urge to show it off. Thanks, Jakub!

    And More…

    It’s impossible to mention all the improvements in 3.32 in a single blog post, but I want to squeeze a few more in.

    Alexander (Exalm) landed several improvements to Epiphany’s theme, especially the incognito mode theme, which needed work to look good with the new Adwaita in 3.32.

    Jan-Michael added an animation for completed downloads, so we don’t need to annoyingly pop open the download popover anymore to let you know that your download has completed.

    Carlos Garcia added support for automation mode. This means Epiphany can now be used for running automated tests with WebDriver (e.g. with Selenium). Using the new automation mode, I’ve upstreamed support for running tests with Epiphany to the Web Platform Tests (WPT) project, the test suite used by web engine developers to test standards conformance.

    Carlos also reworked the implementation of script dialogs so that they are now modal only to their associated web view, not modal to the entire application. This means you can just close the browser tab if a particular website is abusing script dialogs in a problematic way, e.g. by continuously opening new dialogs.

    Patrick has improved the directory layout Epiphany uses to store data on disk to avoid storing non-configuration data under ~/.config, and reworked the internals of the password manager to mitigate Spectre-related concerns. He also implemented Happy Eyeballs support in GLib, so Epiphany will now fall back to an IPv4 connection if IPv6 is available but broken.

    Now Contains 100% Less Punctuation!

    Did you notice any + signs missing in this blog? Following GTK+’s rename to GTK, WebKitGTK+ has been renamed to WebKitGTK. You’re welcome.

    Whither Pop!_OS?

    Extra Credit

    Although Epiphany 3.32 has been the work of many developers, as you’ve seen, I want to give special credit to Epiphany’s newest maintainer, Jan-Michael. He has closed a considerable number of bugs, landed too many improvements to mention here, and has been a tremendous help. Thank you!

    Now, onward to 3.34!

    by Michael Catanzaro at March 27, 2019 12:41 PM

    March 19, 2019

    Michael Catanzaro

    Epiphany Technology Preview Upgrade Requires Manual Intervention

    Jan-Michael has recently changed Epiphany Technology Preview to use a separate app ID. Instead of org.gnome.Epiphany, it will now be org.gnome.Epiphany.Devel, to avoid clashing with your system version of Epiphany. You can now have separate desktop icons for both system Epiphany and Epiphany Technology Preview at the same time.

    Because flatpak doesn’t provide any way to rename an app ID, this means it’s the end of the road for previous installations of Epiphany Technology Preview. Manual intervention is required to upgrade. Fortunately, this is a one-time hurdle, and it is not hard:

    $ flatpak uninstall org.gnome.Epiphany

    Uninstall the old Epiphany…

    $ flatpak install gnome-apps-nightly org.gnome.Epiphany.Devel org.gnome.Epiphany.Devel.Debug

    …install the new one, assuming that your remote is named gnome-apps-nightly (the name used locally may differ), and that you also want to install debuginfo to make it possible to debug it…

    $ mv ~/.var/app/org.gnome.Epiphany ~/.var/app/org.gnome.Epiphany.Devel

    …and move your personal data from the old app to the new one.

    Then don’t forget to make it your default web browser under System Settings -> Details -> Default Applications. Thanks for testing Epiphany Technology Preview!

    by Michael Catanzaro at March 19, 2019 06:39 PM

    March 07, 2019

    Brian Kardell

    Interesting Custom Element Data Begins

    Interesting Custom Element Data Begins

    A while back I wrote a piece asking how we begin to think about using data to move forward with standardization, and called for ways to help get data. One thing I did was request a new query from the HTTPArchive including data on “dasherized elements”. Keep in mind that while the top 1.2 million sites or so in this dataset are a lot of data, it is still a small sampling and has its own biases. It reports mostly on a particular ‘kind’ of site which is not representative of the giant bottom of the iceberg that lives beneath the surface, inside of corporate intranets, behind logins and paywalls and so on. Ultimately, we need more - but you have to start somewhere.

    Yesterday, Simon Pieters answered with this tweet linking to an HTTPArchive post and yielding this dataset which is amazing.

    It’s still a little hard to track because we can’t tell whether that is one page that includes an element a bunch of times, or many pages that include them, but this is an awesome start!

    It’s a little hard to view that dataset: while the attributes are awesome in helping us know more about what each element is, they also mean some noise, and the counts are slightly confused. So I took that, ran it through some processing and created a few other views (linked where appropriate below)…

    Here’s some preliminary, interesting observations:

    Even from this small sample: the HTTPArchive query that reports on use of HTML elements searches for only 140 known specific elements that are in a standard, but this report shows over 24k different “dasherized” tags that appear in the top 1.2 million pages. Wow! What this tells me is that there are a lot of dasherized tags in use.

    It is important to note that this doesn’t mean these are “custom elements” proper, but it also doesn’t really matter: what we care about, really, is what you were trying to say there, semantically.

    Of these, there are 3,227 different unique prefixes. These may or may not indicate common authors, but they might at least be a helpful way to look for popular ‘sets’ of elements. For example, it’s unsurprising to see the amp- prefix in there given all of the boosts that it gets, and it’s nice to see them all linked in and counted there. I’ve organized a json output that looks like this
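    As a sketch of how such a prefix grouping can be derived from the raw tag counts (the sample data below is made up, loosely echoing numbers from this post; the real dataset’s shape may differ):

```python
from collections import Counter

# Hypothetical rows in the shape of the HTTPArchive result: tag -> count.
rows = {
    "amp-auto-ads": 3718,
    "amp-ad": 395,
    "my-widget": 12,
    "my-button": 3,
}

# Group counts by the first dash-delimited token of each dasherized tag.
prefix_counts = Counter()
for tag, count in rows.items():
    prefix_counts[tag.split("-", 1)[0]] += count

print(prefix_counts["amp"])  # 3718 + 395 = 4113
```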

    To break them down into some further semi-arbitrary groups for summary:

    • ~7.8k of these occur between 1 and 100 times.
    • 31 of these occur between 101-200 times
    • 14 occur between 200-500 times
    • 4 occur between 500-1000 times
    • 4 occur more than 1000 times

    One personal note: I’m kind of sad to see that the most popular one is amp-auto-ads, occurring a whopping 3718 times, and it’s not remotely the only thing that would appear to be about ads. In fact, amp-ad also occurs 395 times and there are many other non-amp elements that appear to be ad related. But... I guess the web has a lot of ads. Who knew.

    More importantly, it’s interesting to look at this file from the bottom up (or the grouped one) and think about whether we can identify the possible sources of these, or ‘tag’ them according to common purposes somehow. If you feel like you’re potentially interested in digging in and helping think about this (identifying where some of those come from, what their purpose is, etc.), or in getting that data into a place where we can do that kind of stuff better, feel free to leave comments on any of these gists or cc me (@briankardell) on twitter.

    March 07, 2019 05:00 AM

    March 01, 2019

    Brian Kardell

    Intuition Bikeshed and Standards Challenges

    Intuition Bikeshed and Standards Challenges

    One thing that's been on my mind quite a lot for the last few years is how we can better communicate both in and out of standards bodies. This past week some things happened which I think make for an interesting review and some thought about whether we did the right things, and how we can do better.

    This week, the CSS Working Group tweeted an informal Twitter poll that looked like this (below) - if you haven't voted or replied to it yet, I would ask that you read this before you do:

    It includes two examples attempting to show a couple of hypothetical uses of a new function in CSS that coerces/concatenates strings and asks authors what they expect it would be called, offering the following choices in the poll:

    • text()
    • concat()
    • to-string()
    • Other (respond in thread)

    A not-insignificant number of folks, some of them friends and standards people themselves, were kind of appalled that string() wasn’t in the list. I mean... Of course it should be string(), right? Hurriedly, some of us tried to provide some context, but I feel like it really deserves some more words, and it has me thinking more about how we can do better (and what that would even mean).


    CSS doesn’t (currently) really "do" string concatenation "generally". However, it now has constructs like CSS Custom Properties which seem to make this sort of thing very desirable. Based on some feedback and discussions an issue was opened in late 2016 by Lea Verou to consider how to make that work.

    Note: Yes, it is now 2019 and while this seems to many of us like a really long time for such a small, basic thing, in standards time, that is nothing. The CSS Working Group has hundreds of issues of all shapes and sizes and doesn’t get to actually prioritize anyone’s actual time (some priorities are set by the company that employs you, and some of us do this in our "spare" time), and... sometimes there are non-immediately obvious complexities. More on all that later.

    In it, she points out that content: currently does a fairly simple form of concatenation. If you are unfamiliar with this, allow me to use Lea’s Prism to illustrate it with an example from MDN below:

    a::after {
      content: " (" attr(id) ")";
    }

    Great. She suggests that perhaps for a few other “stringy” properties, we should just do that: just make them able to smash those things together into one string. That would be “best” for authors: less “Lispy” and easier to read (the Lisp family of languages is somewhat famous for its use of lots of parentheses and how that reads).

    But... authors want this ability in a lot of contexts where it isn't so simple or obvious. Having a general thing which gets authors more power, in more places, sooner, also seems good for authors. So, let's start there - powers first, maybe sugar later.

    Without going any further, I’d like to point out that this is a decision: a weighing of the values of outcomes. It means that if/when these powers ship, they will be kinda “Lispy”. Some authors, lacking the context as to why that decision was made, might just think that this was a bad decision. Even with that context, some authors would disagree with the choice.

    Anywho.. Discussions progressed about which use cases and constraints there were around a way to say “smash these two things together” as strings and one of the early names proposed for this was “string()”.

    Sure, makes sense to me. Don't all programmers know string? Seems pretty universal. What else would it be even? Obviously it should be string.

    Well, here too, it seems that context matters a lot, in more ways than one.

    It turns out that another spec CSS Generated Content for Paged Media Module (GCPM) has already defined a function called string() and does... not this.

    Still, a lot of us (yes, me too) are sort of like “Yo wait... what? What’s that spec? No browser seems to implement it; maybe we could just change that? Surely this is a better use of string()? It seems last updated in 2014, and if no one has implemented it, it’s probably dead. This is like... clearly a string; it feels bad that we can’t use string for dumb reasons.”

    Yes, all of that is mostly true, and... these things are also true:

    The ideas and work of GCPM, like a lot in CSS, have a long history. Bert Bos (one of the creators of CSS) made the announcement on the mailing list of its first working draft with that title in 2006. In it, he said:

    It describes features typically only used when printing: running headers and footers, footnotes, references to page numbers, floats to the top and bottom of a page, etc. Indeed, we may define explicitly that these features *don't* work in interactive media. (Although I already heard people ask for footnotes to work interactively as well, possibly as a light-weight form of hyperlink, opening a pop-up.)

    In fact, in many ways the ideas of both markup documents and stylesheets originated with print, not screens and a lot of people continued to see that potential of stylesheets to be important way beyond Web browsers. The editor of that spec was Håkon Wium Lie, the other creator of CSS.

    So, part of that is to point out that in the context of framing this, they wanted this concept of, well, kind of named strings and then a way to refer to and use those strings. Strings, strings, strings, and so, well, string() seemed like a perfectly rational name. I mean... Obviously it is a string.

    Then, as you might expect, people from those industries began creating non-browser implementations. Several supported string(), some “forever”, by the time the 2014 spec was published.

    But, Brian... This is all stuff that doesn’t work in browsers. It’s for print, and it seems almost like Bert was suggesting it might be kinda good to split them? Let’s just split them, because this is obviously a string!

    Maybe... but there is also this:

    In between the birth of CSS and now, we've also increasingly popularized this "not print really" but also "not currently web browsers" uncanny valley that are things like PDFs or EBooks.

    These have to work on various... er... screens. With different sizes and orientations and now can even be interactive. You know, kind of like a web browser.

    At the same time, Web browsers have evolved too: Ideas like Web Packaging address, in a more general way, ideas in publishing around, for example, distributing an ebook. Ideas like ServiceWorkers make it possible to now take that offline. You know, kind of like an ebook.

    And so on.

    And so, as much as possible, to common ends, we all try to work together within the W3C, and especially in CSS.

    CSS has never (to the best of my knowledge) overloaded an existing function with a radically different signature/meaning depending on the media. To do so would certainly mean just agreeing that we would never adopt this, which, meh, I don't know? In any case, for at least some authors who have to deal with both it might be really confusing... None of that seems "great" if it is all easily avoided.

    Except, dammit, I am going to be honest: this really feels like a string() to me. How can it not be? That is definitely the best one.

    Resolving and Bikesheds

    And so, all that back story set: there were discussions on this in the face-to-face meeting. In this context, it seemed we had pretty general agreement on the use cases and how it would work, but we didn’t have a name.

    However, there were a few ‘finalist’ suggestions and cogent arguments about why some other names were actually better than string(). Hmm... maybe?

    In the end, weighing all of the things, I think that we just decided it was probably best to just take string() off the table if there was something else that could work -- and people were even making compelling arguments that not only would they work - they were actually better. Meh. I dunno.

    It seemed that the really serious contenders were “concat()” and “text()” and at some point I think people were just trying to move on and it seemed to be settling around text(). The chair asked “Any objections to text()?”

    I squirmed.

    I really don’t want to be the “keep the dumb bikeshed going” guy (if you are not familiar with the term, here’s an explanation of bikeshedding). I was at this point entirely willing to accept “not string()”, but I just found text() actually confusing and thought that most of the same problems of “looks good in this light... as well as many others” might apply here... But, you know... maybe that’s just me? Should I say something?

    So I finally chimed in and offered simply “I don’t have a better suggestion, but text() is not very clear to me… There are so many ways that I could interpret “text” in the context of CSS” and "sorry".

    It wasn’t a formal objection; I just wanted to see if maybe others felt similarly. And, well... some did. And... some didn’t.

    Everyone, including me, seemed pretty sure that other people would surely find a thing either more intuitive, or actively unintuitive and we didn't agree on what those were.

    This is a hard problem, because I think we all sometimes think that other people will surely see it 'like us'. It's very hard to do otherwise. But, people are diverse. CSS is used by people from many backgrounds, with many perspectives, different primary languages, cultures and so on and, really, it's just very hard to know.

    There was some more discussion. There were a few other suggestions made (some minuted, some not, some serious, some not). One idea that seemed to be perhaps popular was something vaguely “like string, but not exactly string”. Any of these felt kind of more intuitive to me - but again, that's just me. I could imagine even that perhaps "more different" was actually potentially less confusing to potential future developers? I don’t know, honestly.

    But, deep down, I just kind of secretly wish it could be string() because that's the one I just know everyone would get. Dammit.

    All this said, I would like to put this in some actual perspective: no one was remotely cross about any of this. It was not a heated argument. Everyone was perfectly amiable and all of the rationales were, I believe, entirely ‘teachable’. I believe that all parties agreed that we would celebrate getting this power interoperably implemented regardless of which one it wound up being.

    Unfortunately this meant that all we could agree to regarding the name was that we couldn't decide today. Yeah, idk, maybe that's on me. I'm honestly not sure how I feel about it.

    Regardless, because there were good points and different perspectives all around, and because developers have even broader perspectives than the folks in that room - we thought that perhaps asking developers for input would help us clarify our thoughts, so Tab put together a poll.

    Optimization Problem Problems

    The thing is, the most interesting thing in this poll to me is just how many actual users of CSS thought concat() was actually the best one, even without all of this context, and how few chose all of the others.

    Realistically, text() is the only 100% clear, readable, requires-no-context alternative in that poll, and yet, as it currently stands, a mere 1 in 5 people chose it.

    Unfortunately, to-string() was just one of several possible ‘almost strings’, and several people who chose “Other” also suggested something string or string-like... but even still, it’s not that many actually. My manual efforts to tally them up as well as I can still seem to indicate that concat() has way more support than all of the others combined.

    It honestly wouldn't have been my guess going into all this. But... really, the more I look at it... maybe that is better, even without all the context. That is exactly what this function does.

    Anyways... I think that it is impossible to optimize something like “intuitiveness” without actual input from developers themselves, and the really tough thing here is that, at best, it is still merely optimization. There is no clearly defined ‘perfect’ that will make everyone happy, especially without context or being heard.

    As I have argued before, I also believe that we really haven’t figured out how to do that well yet.

    In order to give good input, any of us require at least a good framing of context. That's hard because, often there is a lot and it's hard to know what matters and what doesn't. The higher the "tax" of participating, the less likely it is that we will even get a good sampling of average users. Our time is just pulled in too many directions. It takes 2 seconds to read the poll and vote, many minutes to read this post.

    Further, to give really quality feedback and input, probably you need the ability to ask some questions or something. But, even then: even if you sink in time and ask questions, the truth is, it is impossibly difficult to really weigh vapor.

    That is, until we really sit down to use things, get some actual experience with it, try to apply it, stretch it, pull it, live with it a while and so on… it’s just… really hard to say. This is part of why I have advocated for the Extensible Web and Houdini work, and, why I argue that it is important to:

    I feel like we’re making progress on all of these fronts, but I think we still have a very long way to go. Things like Custom Functions, for example, might go a long way toward answering future questions of similar ilk. It might be very plausible to get value quickly and have it compete and fail and adapt and grow and, maybe settle in in some ground we haven't quite defined yet that is "not part of CSS proper" but "widely used in publishing". I don't know.

    I am very happy to say (and will probably write about) the fact that Houdini had some really good discussion in these meetings and, actually, I'm very pleasantly surprised by who I heard arguing what.

    So... that's where we are, how we got there and some of the problems... How can we manage all of this better? Is there more we could do?

    Thanks to my friends Amelia Bellamy-Royds and Jon Neal for their talks and helpful comments in reviewing some iteration of this piece.

    March 01, 2019 05:00 AM

    February 27, 2019

    Maksim Sisov

    Review of Igalia’s Chromium team’s activities (2018/H2).

    This is our first semiyearly report, which overviews our Chromium team’s activities and accomplishments, focusing on the second half of 2018.

    Contributions to the Chromium mainline repository:

    • Ozone/Wayland support in Chromium browser.

    Igalia has been working on the Ozone/Wayland implementation for the Chromium browser, sponsored by Renesas, since the end of 2016. In the beginning, the plan was to extend the so-called mus service (mojo ui service, which had been intended to be used only by ChromeOS) to support external window mode, where each top level window, including menus and popups, is backed by its own native accelerated widget. The results of that work can be found in our previous blog posts: Chromium, ozone, wayland and beyond, Chromium Mus/Ozone update (H1/2017): wayland, x11 and Chromium with Ozone/Wayland: BlinkOn9, dmabuf and more refactorings….

    The project was first run in the downstream GitHub repository, and its design was based on the mus service.

    In the end, after lots of discussions with our colleagues from Google, we moved away from mus and made a platform integration directly into the aura layer. The patches in the downstream repository were refactored and merged into the Chromium mainline repository.

    Currently, our Igalians Maksim Sisov and Antonio Gomes have ownership of Ozone/Wayland in the Chromium mainline repository and continue to maintain it. The downstream repository is still rebased on a weekly basis and contains only a few patches that are being tested.

    A meta bug for Ozone/Wayland support exists and it is constantly updated.

    • Maintenance of the upstream meta-browser recipe.

    Igalia has also been contributing to the upstream Yocto layer called meta-browser. We constantly update the recipe, which allows Chromium with native Wayland support to be built for embedded devices. Currently, the recipe is based on the latest Chromium Linux stable channel and uses Chromium version 72.0.3626.109. To provide a good user experience, we backport Ozone/Wayland patches that are not included in the source code of the stable channel, and test them on Raspberry Pi 3 and Renesas R-Car M3.

    • Web Application Manager for Automotive Grade Linux (AGL).

    Automotive Grade Linux is an operating system for embedded devices targeted at the automotive industry. It is more than an operating system, though: it brings together automakers, suppliers and technology companies to accelerate the development and adoption of a fully open software stack for the connected car.

    At some point, the AGL community decided that they needed a Web Application Manager capable of running web applications with the same user experience as native applications, which could attract web developers to design and create applications for the automotive industry.

    Igalia has been happy to help, and developed a Web Runtime based on the recently released Web Application Manager, initially targeted at WebOS OSE, with some guidance and support from LGE engineers.

    The recent work was demoed at CES 2019 in Las Vegas, where Chromium M68, integrated with the Web Runtime, was showcased running HTML5 applications with the same degree of integration and security as native apps.

    By the time of writing, the Web Application Manager had been integrated into the Grumpy Guppy branch and become available to web application developers.

    • Servicification effort in Chromium browser.

    The Chromium code base is moving towards a service-oriented model to produce reusable components and reduce code duplication.

    Our Chromium team at Igalia has been taking part in that effort, helping Google engineers to achieve that goal. Our contributions are spread around the Chromium codebase and include patches to

    • the network stack (including //services/network and //net), and
    • the identity service (//services/identity and //component/signin/core/browser).

    The total number of patches is about 650, landed between 2018-04-08 and 2019-02-21.

    By the time of writing this blog post, Igalia had contributed to the Chromium mainline repository by servicifying the network and identity services, which are included in the canary, dev, beta and stable channels for desktop (Windows, macOS and Linux) and ChromeOS platforms.

    • General contributions to the Chromium browser.

    Igalia has also been making general contributions to the Chromium mainline repository and the Blink engine.

    To name a few, we contributed memory pressure support to //cc (the Chromium compositor) and support for resuming/suspending active Blink tasks in the content layer. We have also been contributing fixes and changes to follow web platform specs, like Implement Origin-Signed HTTP Exchanges (for WebPackage Loading) and CSS Grid support – [css-grid] Issue with abspos element whose containing block is the grid container and [css-grid] The grid is by itself causing its grid container to overflow.

    Also, we implemented new API operations for the webview tag to enable or disable spatial navigation inside the webview contents independently from the global settings, and to check its state. They are available in Chromium since version 71.

    More changes and fixes can be found on chromium-review.

    Our contributions count about 640 patches for the past year, which makes us the 3rd largest contributor, after organizations with 71927 + 13735 patches, 777 patches and 652 patches respectively.

    • Contributions to downstream forks of Chromium, such as the ones in EndlessOS, WebOS OSE, or the Brave browser:

    Igalia has also been helping downstream forks of Chromium to develop their products. For example,
    we have been helping Endless Mobile with the maintenance of the Chromium browser for the different versions of Endless OS on Intel and ARM. We have been taking care of the periodic rebases of the adaptations made to Chromium, following the updates of the stable channel by Google.

    Also, we take part in the development of the Brave browser. Our contributions include online/offline installer and update features integrated into the Omaha (Windows) and Sparkle (macOS) frameworks. We have also made the Brave browser support multi-channel releases, which include stable, beta, dev and nightly channels for Windows/macOS/Linux. In addition to that, we worked on the customized search engine provider feature, native/web UI, theming, branding, Widevine, brave scheme support, and more.

    Our contributions can also be found in LGE’s WebOS OSE. For example, we have been participating in the periodic rebases and adaptations made to Chromium, among other activities.

    • Committers and ownership of components in the Chromium browser:

    We appreciate that the contributions of our Chromium team are valued in the Chromium community. During the past half a year, Igalia gained ownership of three components:

    • third_party/blink/renderer/modules/navigatorcontentutils/, owned by Gyuyoung Kim
    • ui/ozone/common/linux/, owned by Maksim Sisov
    • ui/ozone/platform/wayland/, owned by Maksim Sisov and Antonio Gomes

    We also aim to have all our team members become committers of the Chromium project.

    During the past half a year, two of our members, Jose Dapena Paz and Mario Sanchez Prada, gained committership.

    • Events attended and talks given:

    Our Chromium team has always aimed to have as much visibility in the open-source community as possible.

    For the past half a year, we attended the following conferences:

    • the Web Engines Hackfest 2018, where Antonio Gomes and Julie Jeongeun Kim spoke about “The pathway to Chromium on Wayland”;
    • the W3C HTML5 Conference 2018, where Julie Jeongeun Kim gave a talk about “The pathway to Chromium on Wayland”.

    Besides the events mentioned above, it is also worth mentioning the following events for the sake of completeness, as there has not been an H1/2018 report about our team’s activities:

    We also attended the AGL AMM and AGL F2F meetings in Dresden and Yokohama, and other events where we presented our projects.

    • Other contributions:

    We have also been writing various blog posts about icecc and ccache usage with Chromium.

    Recently, we posted a new blog post about enabling cross-compilation for Windows from a Linux/Mac host. The support was already in the Chromium repository but only worked for Google employees; we added the remaining bits to make it available for everyone.

    by msisov at February 27, 2019 01:22 PM

    February 26, 2019

    Frédéric Wang

    Review of Igalia's Web Platform activities (H2 2018)

    This blog post reviews Igalia’s activity around the Web Platform, focusing on the second semester of 2018.



    During 2018 we have continued discussions to implement MathML in Chromium with Google and people interested in math layout. The project was finally launched early this year and we have encouraging progress. Stay tuned for more details!


    As mentioned in the previous report, Igalia has proposed and developed the specification for BigInt, enabling math on arbitrary-sized integers in JavaScript. We’ve continued to land patches for BigInt support in SpiderMonkey and JSC. For the latter, you can watch this video demonstrating the current support. Currently, both implementations are behind a preference flag, but we hope to enable them by default once we are done polishing them. We also added support for BigInt to several Node.js APIs (e.g. fs.Stat or process.hrtime.bigint).
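For readers unfamiliar with the feature, here is a small sketch of what BigInt enables (runnable in recent Node.js versions):

```javascript
// BigInt literals use the `n` suffix and keep exact precision
// beyond Number.MAX_SAFE_INTEGER.
const big = 2n ** 64n;
console.log(big + 1n); // 18446744073709551617n

// Regular Numbers silently lose precision at this magnitude:
console.log(2 ** 64 === 2 ** 64 + 1); // true

// Node.js APIs gained BigInt variants too, e.g. process.hrtime.bigint()
// returns a nanosecond timestamp as a BigInt.
console.log(typeof process.hrtime.bigint()); // "bigint"
```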

    Regarding “object-oriented” features, we submitted patches for private and public instance field support to JSC, and they are pending review. At the same time, we are working on private methods for V8.

    We contributed other nice features to V8, such as a spec change for template strings and the iterator protocol, support for Object.fromEntries and Symbol.prototype.description, and miscellaneous optimizations.
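As a quick illustration of two of those features (a sketch, runnable in recent engines):

```javascript
// Object.fromEntries is the inverse of Object.entries:
const obj = Object.fromEntries([['a', 1], ['b', 2]]);
console.log(obj); // { a: 1, b: 2 }

// Symbol.prototype.description exposes a symbol's description directly,
// without the "Symbol(...)" wrapper that toString() adds:
const sym = Symbol('my symbol');
console.log(sym.toString());  // "Symbol(my symbol)"
console.log(sym.description); // "my symbol"
```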

    At TC39, we maintained or developed many proposals (BigInt, class fields, private methods, decorators, …) and led the ECMAScript Internationalization effort. Additionally, at the WebAssembly Working Group we edited the WebAssembly JS and Web API specifications and an early version of the WebAssembly/ES Module integration specification.

    Last but not least, we contributed various conformance tests to test262 and Web Platform Tests to ensure interoperability between the various features mentioned above (BigInt, Class fields, Private methods…). In Node.js, we worked on the new Web Platform Tests driver with update automation and continued porting and fixing more Web Platform Tests in Node.js core.

    Outside of Node.js core, we implemented the initial JavaScript API for llnode, a Node.js/V8 plugin for the LLDB debugger.


    Igalia has continued its involvement at the W3C. We have achieved the following:

    We are also collaborating with Google to implement ATK support in Chromium. This work will make it possible for users of the Orca screen reader to use Chrome/Chromium as their browser. During H2 we began implementing the foundational accessibility support. During H1 2019 we will continue this work. It is our hope that sufficient progress will be made during H2 2019 for users to begin using Chrome with Orca.

    Web Platform Predictability

    On Web Platform Predictability, we’ve continued our collaboration with AMP to do bug fixes and implement new features in WebKit. You can read a review of the work done in 2018 on the AMP blog post.

    We have worked on a lot of interoperability issues related to editing and selection, thanks to financial support from Bloomberg. For example, when deleting the last cell of a table, some browsers keep an empty table while others delete the whole table. The latter can be problematic: if users press backspace continuously to delete a long line, they can accidentally end up deleting the whole table. This was fixed in Chromium and WebKit.

    Another issue is that style is lost when transforming some text into list items. When running execCommand() with insertOrderedList/insertUnorderedList on some styled paragraph, the new list item loses the original text’s style. This behavior is not interoperable and we have proposed a fix so that Firefox, Edge, Safari and Chrome behave the same for this operation. We landed a patch for Chromium. After discussion with Apple, it was decided not to implement this change in Safari as it would break some iOS rich text editor apps, mismatching the required platform behavior.

    We have also been working on CSS Grid interoperability. We imported Web Platform Tests into WebKit (cf. bugs 191515 and 191369), while at the same time completing the missing features and fixing bugs so that browsers using WebKit are interoperable, passing 100% of the Grid test suite. For details, see 191358, 189582, 189698, 191881, 191938, 170175, 191473 and 191963. Last but not least, we are exporting more than 100 internal browser tests to the Web Platform test suite.


    Bloomberg is supporting our work to develop new CSS features. One of the new exciting features we’ve been working on is CSS Containment. The goal is to improve the rendering performance of web pages by isolating a subtree from the rest of the document. You can read details on Manuel Rego’s blog post.
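For readers unfamiliar with the feature, a minimal sketch of what CSS Containment looks like (the class name is hypothetical):

```css
/* Promise the engine that this widget's layout and paint effects
   stay within its own box, so changes inside it do not force the
   rest of the page to be laid out or repainted. */
.widget {
  contain: layout paint;
}
```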

    Regarding CSS Grid Layout, we’ve continued our maintenance duties, triaging bugs in the Chromium and WebKit bug trackers and fixing the most severe ones. One change with impact on end users was related to how percentage row tracks and gaps work in grid containers with indefinite size; the latest spec resolution was implemented in both Chromium and WebKit. We are finishing level 1 of the specification, which has some missing/incomplete features. First, we’ve been working on the new baseline alignment algorithm (cf. CSS WG issues 1039, 1365 and 1409) and fixed related issues in Chromium and WebKit. Similarly, we’ve worked on the content alignment logic (see CSS WG issue 2557) and resolved a bug in Chromium. The new algorithm for baseline alignment caused an important performance regression for certain resizing use cases, so we fixed it with some performance optimizations that landed in Chromium.

    We have also worked on various topics related to CSS Text 3. We’ve fixed several bugs to increase the pass rate for the Web Platform test suite in Chromium, such as bugs 854624, 900727 and 768363. We are also working on a new CSS value ‘break-spaces’ for the ‘white-space’ property. For details, see the CSS WG discussions: issue 2465 and pull request. We implemented this new value in Chromium under a CSSText3BreakSpaces flag. Additionally, we are currently porting this implementation to Chromium’s new layout engine, LayoutNG. We plan to implement this feature in WebKit during the second semester.
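A minimal sketch of the new value (the selector is hypothetical; at the time of writing the value is behind the CSSText3BreakSpaces flag in Chromium):

```css
/* Unlike pre-wrap, break-spaces preserves sequences of spaces,
   including trailing ones, and allows wrapping after them. */
.log-output {
  white-space: break-spaces;
}
```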


    • WebRTC: The libwebrtc branch is now upstreamed in WebKit and has been tested with popular servers.
    • Media Source Extensions: WebM MSE support is upstreamed in WebKit.
    • We implemented basic support for <video> and <audio> elements in Servo.

    Other activities

    Web Engines Hackfest 2018

    Last October, we organized the Web Engines Hackfest at our A Coruña office. It was a great event with about 70 attendees from all the web engines; thank you to all the participants! As usual, you can find more information on the event wiki, including links to slides and videos of the talks.

    TPAC 2018

    Again in October, but this time in Lyon (France), 12 people from Igalia attended TPAC and participated in discussions in several of the meetings. Igalia had a booth there showcasing several demos of our latest developments running on top of WPE (a WebKit port for embedded devices). Finally, Manuel Rego gave a talk at the W3C Developers Meetup about how to contribute to CSS.

    This.Javascript: State of Browsers

    In December, we also participated, together with other browser developers, in the online This.Javascript: State of Browsers event organized by ThisDot. We talked more specifically about the current work in WebKit.

    New Igalians

    We are excited to announce that new Igalians are joining us to continue our Web platform effort:

    • Cathie Chen, a Chinese engineer with about 10 years of experience working on browsers. Among other contributions to Chromium, she worked on the new LayoutNG code and added support for list markers.

    • Caio Lima, a Brazilian developer who recently graduated from the Federal University of Bahia. He participated in our coding experience program and notably worked on BigInt support in JSC.

    • Oriol Brufau, a recent graduate in math from Barcelona who is also involved in the CSSWG and the development of various browser engines. He participated in our coding experience program and implemented CSS Logical Properties and Values in WebKit and Chromium.

    Coding Experience Programs

    Last fall, Sven Sauleau joined our coding experience program and started to work on various BigInt/WebAssembly improvements in V8.


    We are thrilled with the web platform achievements we made last semester and we look forward to more work on the web platform in 2019!

    February 26, 2019 11:00 PM

    Miguel A. Gómez

    Hole punching in WPE

    As you may (or may not) know, WPE (and WebKitGTK+, if the proper flags are enabled) uses OpenGL textures to render the video frames during playback.

    In order to do this, WPE creates a playbin and uses a custom bin as videosink. This bin is composed of some GStreamer-GL components together with an appsink. The GL components ensure that the video frames are uploaded to OpenGL textures, while the appsink allows the player to get a signal when a new frame arrives. When this signal is emitted, the player gets the frame as a texture from the appsink and sends it to the accelerated compositor to be composed with the rest of the layers of the page.

    This process is quite fast thanks to the hardware accelerated drawing, and as the video frames are just another layer that is composited, it allows them to be transformed and animated: the video can be scaled, rotated, moved around the page, etc.

    But there are some platforms where this approach is not viable, maybe because there’s no OpenGL support, or it’s too slow, or maybe because the platform has some kind of fast path to take the decoded frames to the display. For these cases, the typical solution is to draw a transparent rectangle on the browser, in the position where the video should be, and then use some platform dependent way to put the video frames in a display plane below the browser, so they are visible through the transparent rectangle. This approach is called hole punching, as it refers to punching a hole in the browser to be able to see the video.

    At Igalia we think that supporting this feature is interesting, and following our philosophy of collaborating upstream as much as possible, we have added two hole punching approaches to the WPE upstream trunk: GStreamer hole punch and external hole punch.

    GStreamer hole punch

    The idea behind this implementation is to use the existing GStreamer based MediaPlayer to perform the media playback, but replace the appsink (and maybe other GStreamer elements) with a platform dependent video sink that is in charge of putting the video frames on the display. This can be enabled with the -DUSE_GSTREAMER_HOLEPUNCH flag.

    Of course, the current implementation is not complete, because the platform dependent bits need to be added to get the full functionality. What it currently does is use a fakevideosink, so the video frames are not shown, and draw the transparent rectangle in the position where the video should be. If you enable the feature and play a video, you’ll see the transparent rectangle and you’ll be able to hear the video sound (and even use the video controls, as they work), but nothing else will happen.

    In order to get the full functionality, there are a couple of places in the code that need to be modified to create the appropriate platform dependent elements. These two places are inside MediaPlayerPrivateGStreamerBase.cpp: the createHolePunchVideoSink() and setRectangleToVideoSink() functions.

    GstElement* MediaPlayerPrivateGStreamerBase::createHolePunchVideoSink()
    {
        // Here goes the platform-dependent code to create the videoSink. As a
        // default we use a fakevideosink so nothing is drawn to the page.
        GstElement* videoSink = gst_element_factory_make("fakevideosink", nullptr);
        return videoSink;
    }

    static void setRectangleToVideoSink(GstElement* videoSink, const IntRect& rect)
    {
        // Here goes the platform-dependent code to set the size and position of
        // the video rendering window on the videoSink. Mark the parameters as
        // unused by default.
        UNUSED_PARAM(videoSink);
        UNUSED_PARAM(rect);
    }

    The first one, createHolePunchVideoSink() needs to be modified to create the appropriate video sink to use for the platform. This video sink needs to have some method that allows setting the position where the video frames are to be displayed, and the size they should have. And this is where setRectangleToVideoSink() comes into play. Whenever the transparent rectangle is painted by the browser, it will tell the video sink to render the frames to the appropriate position, and it does so using that function. So you need to modify that function to use the appropriate way to set the size and position to the video sink.
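As an illustration only (this is not part of the upstream code), a platform whose video sink implements the GstVideoOverlay interface might fill those two functions in roughly like this; the "waylandsink" element name is just an example, and real platforms will differ:

```cpp
// Hypothetical sketch for a platform whose sink supports GstVideoOverlay.
GstElement* MediaPlayerPrivateGStreamerBase::createHolePunchVideoSink()
{
    // "waylandsink" is only an example; use whatever sink the platform provides.
    return gst_element_factory_make("waylandsink", nullptr);
}

static void setRectangleToVideoSink(GstElement* videoSink, const IntRect& rect)
{
    // GstVideoOverlay lets us place the video output exactly under the
    // transparent rectangle painted by the browser.
    if (GST_IS_VIDEO_OVERLAY(videoSink))
        gst_video_overlay_set_render_rectangle(GST_VIDEO_OVERLAY(videoSink),
            rect.x(), rect.y(), rect.width(), rect.height());
}
```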

    And that’s all. Once those changes are made the feature is complete, and the video should be placed exactly where the transparent rectangle is.

    Something to take into account is that the size and position of the video rectangle are defined by the CSS values of the video element. The rectangle won’t be adjusted to fit the aspect ratio of the video, as that must be done by the platform video sink.

    Also, the video element allows some animations to be performed: it can be translated and scaled, and it will properly notify the video sink about the animated changes. But, of course, it doesn’t support rotation or 3D transformations (as normal video playback does). Take into account that there might be a small desynchronization between the transparent rectangle and the video frames’ size and position, due to the asynchronicity of some function calls.

    Playing a video with GStreamer hole punch enabled.

    External hole punch

    Unlike the previous feature, this one doesn’t rely on GStreamer at all to perform the media playback. Instead, it just paints the transparent rectangle and lets the playback be handled entirely by an external player.

    Of course, there’s still the matter of how to synchronize the transparent rectangle position and the external player. There are two ways to do this:

    • Implement a new WebKit MediaPlayerPrivate class that would communicate with the external player (through sockets, the injected bundle or any other way). WPE would use that to tell the platform media player what to play and where to render the result. This is completely dependent on the platform, and the most complex solution, but it would allow using the browser to play content from any page without any change. But precisely because it’s completely platform dependent, this is not a valid approach for upstream.
    • Use JavaScript to communicate with the native player, telling it what to play and where, while WPE just paints the transparent rectangle. The problem with this is that we need to have control of the page to add the JavaScript code that controls the native player but, on the other hand, we can implement a generic approach in WPE to paint the transparent rectangle. This is the option that was implemented upstream.

    So, how can this feature be used? It’s enabled with the -DUSE_EXTERNAL_HOLEPUNCH flag, and what it does is add a new dummy MediaPlayer to WPE that’s selected to play content of type video/holepunch. This dummy MediaPlayer will draw the transparent rectangle on the page, according to the CSS values defined, and won’t do anything else. It’s up to the page owners to add the JavaScript code required to initiate the playback with the native player and position the output in the appropriate place under the transparent rectangle. To be a bit more specific, the dummy player will draw the transparent rectangle once the type has been set to video/holepunch and load() is called on the player. If you have any doubt about how to make this work, you can take a look at the video-player-holepunch-external.html test inside the ManualTests/wpe directory.
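As a rough sketch of the page-side setup (based only on the description above; the native-player bridge in the final comment is purely hypothetical and platform specific):

```javascript
// Create a video element whose source type selects WPE's dummy hole
// punch player; the transparent rectangle is painted once load() is
// called on the element.
const video = document.createElement('video');
const source = document.createElement('source');
source.type = 'video/holepunch';
video.appendChild(source);

// The CSS box of the element defines where the hole is punched.
Object.assign(video.style, {
    position: 'absolute', left: '50px', top: '50px',
    width: '640px', height: '360px'
});
document.body.appendChild(video);
video.load();

// Now tell the platform's native player what to play and where, through
// whatever channel the page has (hypothetical example):
// nativePlayer.play('movie.mp4', video.getBoundingClientRect());
```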

    This implementation doesn’t support animating the size and position of the video… well, it really does, as the transparent rectangle will be properly animated, but you would need to animate the native player’s output as well, and syncing the rectangle area and the video output is going to be a challenging task.

    As a last detail, controls can be enabled using this hole punch implementation, but they are useless. As WPE doesn’t know anything about the media playback that’s happening, the video element controls can’t be used to handle it, so it’s just better to keep them disabled.

    Using both implementations together

    You may be wondering: is it possible to use both implementations at the same time? Indeed it is!! You may be using the GStreamer hole punch to perform media playback with some custom GStreamer elements. If at some point you find a video that is not supported by GStreamer, you can just set the type of the video element to video/holepunch and start the playback with the native player. And once that video is finished, start using the GStreamer MediaPlayer again.


    Both hole punch features will be available on the upcoming stable 2.24 release (and, of course, on 2.25 development releases). I hope they are useful for you!

    by magomez at February 26, 2019 10:25 AM

    February 25, 2019

    Andrés Gómez

    Review of Igalia’s Graphics activities (2018)

    This is the first report about Igalia’s activities around Computer Graphics, specifically 3D graphics and, in particular, the Mesa3D Graphics Library (Mesa), focusing on the year 2018.

    GL_ARB_gl_spirv and GL_ARB_spirv_extensions

    GL_ARB_gl_spirv is an OpenGL extension whose purpose is to enable an OpenGL program to consume SPIR-V shaders. In the case of GL_ARB_spirv_extensions, it provides a mechanism by which an OpenGL implementation would be able to announce which particular SPIR-V extensions it supports, which is a nice complement to GL_ARB_gl_spirv.

    As both extensions, GL_ARB_gl_spirv and GL_ARB_spirv_extensions, are core functionality in OpenGL 4.6, the drivers need to provide them in order to be compliant with that version.

    Although Igalia picked up the already-started implementation of these extensions in Mesa back in 2017, 2018 was a year in which we put in a great deal of work to provide the needed push to get all the remaining bits in place. Much of this effort provides general support to all the drivers under the Mesa umbrella but, in particular, Igalia implemented the backend code for Intel‘s i965 driver (gen7+). Assuming that the review process for the remaining patches goes without important bumps, it is expected that the whole implementation will land in Mesa during the beginning of 2019.

    Throughout the year, Alejandro Piñeiro gave status updates of the ongoing work through his talks at FOSDEM and XDC 2018. This is a video of the latter:


    The ETC and EAC formats are lossy compressed texture formats used mostly in embedded devices. OpenGL implementations of the versions 4.3 and upwards, and OpenGL/ES implementations of the versions 3.0 and upwards must support them in order to be conformant with the standard.

    Most modern GPUs are able to work directly with the ETC2/EAC formats. Implementations for older GPUs that don’t have that support but want to be conformant with the latest versions of the specs need to provide that functionality through the software parts of the driver.

    During 2018, Igalia implemented the missing bits to support GL_OES_copy_image in Intel’s i965 for gen7+, while gen8+ was already complying through its HW support. As we were writing this entry, the work has finally landed.


    Igalia finished the work to provide support for the Vulkan extension VK_KHR_16bit_storage into Intel’s Anvil driver.

    This extension allows the use of 16-bit types (half floats, 16-bit ints, and 16-bit uints) in push constant blocks and buffers (shader storage buffer objects). This feature can help to reduce the memory bandwidth for Uniform and Storage Buffer data accessed from the shaders and/or optimize Push Constant space, of which there are only a few bytes available, making it a precious shader resource.


    Igalia added Vulkan’s optional feature shaderInt16 to Intel’s Anvil driver. This new functionality provides the means to operate with 16-bit integers inside a shader which, ideally, would lead to better performance when you don’t need a full 32-bit range. However, not all HW platforms may have native support, still needing to run in 32-bit and, hence, not benefiting from this feature. Such is the case for operations associated with integer division in the case of Intel platforms.

    shaderInt16 complements the functionality provided by the VK_KHR_16bit_storage extension.

    SPV_KHR_8bit_storage and VK_KHR_8bit_storage

    SPV_KHR_8bit_storage is a SPIR-V extension that complements the VK_KHR_8bit_storage Vulkan extension to allow the use of 8-bit types in uniform and storage buffers, and push constant blocks. Similarly to the VK_KHR_16bit_storage extension, this feature can help to reduce the needed memory bandwidth.

    Igalia implemented its support into Intel’s Anvil driver.


    Igalia implemented the support for VK_KHR_shader_float16_int8 into Intel’s Anvil driver. This is an extension that enables Vulkan to consume SPIR-V shaders that use Float16 and Int8 types in arithmetic operations. It extends the functionality included with VK_KHR_16bit_storage and VK_KHR_8bit_storage.

    In theory, applications that do not need the range and precision of regular 32-bit floating point and integers, can use these new types to improve performance. Additionally, its implementation is mostly API agnostic, so most of the work we did should also help to have a proper mediump implementation for GLSL ES shaders in the future.

    The review process for the implementation is still ongoing and is on its way to land in Mesa.


    VK_KHR_shader_float_controls is a Vulkan extension which allows applications to query and override the implementation’s default floating point behavior for rounding modes, denormals, signed zero and infinity.

    Igalia implemented support for it in Intel’s Anvil driver, and the code is currently under review before being merged into Mesa.


    VkRunner is a Vulkan shader tester based on shader_runner in Piglit. Its goal is to make it possible to write test scripts as similar as possible to Piglit’s shader_test format.

    Igalia initially created VkRunner as a tool to get more test coverage during the implementation of GL_ARB_gl_spirv. It soon became clear that it was useful well beyond that specific extension, as a generic way of testing SPIR-V shaders.

    Since then, VkRunner has been enabled as an external dependency to run new tests added to the Piglit and VK-GL-CTS suites.

    Neil Roberts introduced VkRunner at XDC 2018. This is his talk:


    During 2018, Igalia also started contributing to the freedreno Mesa driver for Qualcomm GPUs. Among the work done, we have tackled multiple bugs identified through the usual test suites used in graphics driver development: Piglit and VK-GL-CTS.

    Khronos Conformance

    The Khronos conformance program is intended to ensure that products that implement Khronos standards (such as OpenGL or Vulkan drivers) do what they are supposed to do, and do it consistently across implementations from the same or different vendors.

    This is achieved by producing an extensive test suite, the Conformance Test Suite (VK-GL-CTS or CTS for short), which aims to verify that the semantics of the standard are properly implemented by as many vendors as possible.

    In 2018, Igalia continued its work ensuring that the Intel Mesa drivers for both Vulkan and OpenGL are conformant. This work included reviewing and testing patches submitted for inclusion in VK-GL-CTS and continuously checking that the drivers passed the tests. When failures were encountered, we provided patches to correct the problem either in the tests or in the drivers, depending on the outcome of our analysis, or even started a discussion when the source of the problem was incomplete, ambiguous or incorrect spec language.

    The most important result of this significant dedication has been successfully passing the conformance submissions.

    OpenGL 4.6

    Igalia helped make Intel’s i965 driver conformant with OpenGL 4.6 from day zero. This was a significant achievement since, besides Intel’s Mesa drivers, only nVIDIA managed to do this.

    Igalia specifically contributed to the OpenGL 4.6 milestone by providing the GL_ARB_gl_spirv implementation.

    Vulkan 1.1

    Igalia also helped make Intel’s Anvil driver conformant with Vulkan 1.1 from day zero.

    Igalia specifically contributed to the Vulkan 1.1 milestone by providing the VK_KHR_16bit_storage implementation.

    Mesa Releases

    Igalia continued the work it was already carrying out in Mesa’s Release Team throughout 2018. This effort involved continuous dedication to tracking the general status of Mesa against the usual test suites and benchmarks, but also to reacting quickly to detected regressions, especially by coordinating with the Mesa developers and the distribution packagers.

    This work was most visible in the multiple bugfix releases we published, as well as in branching and creating a feature release.


    Continuous Integration is a must in any serious software project. In the case of API implementations it is even critical, since there are many important variables that need to be controlled to avoid regressions and to track progress when including new features: agnostic tests that can be used by different implementations, different OS platforms, CPU architectures and, of course, different GPU architectures and generations.

    Igalia has kept up a sustained effort to keep the Mesa (and Piglit) CI integrations in good health, with an eye on reported regressions so we could act on them immediately. This has been a key tool for our work on Mesa releases, and the experience allowed us to push the initial proposal for a new CI integration when the FreeDesktop projects decided to start their migration to GitLab.

    This work, along with the work done on the Mesa releases, led to a shared presentation given by Juan Antonio Suárez during XDC 2018. This is the video of the talk:

    XDC 2018

    2018 was the year A Coruña hosted the X.Org Developer’s Conference (XDC), with Igalia as a Platinum Sponsor.

    The conference was organized by GPUL (Galician Linux User and Developer Group) together with University of A Coruña, Igalia and, of course, the X.Org Foundation.

    Since A Coruña is the town in which the company originated and where we have our headquarters, Igalia had a key role in the organization, which benefited greatly from our vast experience running events. Moreover, several Igalians joined the conference crew and, as mentioned above, we delivered talks around GL_ARB_gl_spirv, VkRunner, and Mesa releases and CI testing.

    The feedback from the attendees was very rewarding and we believe the conference was a great event. Here you can see the Closing Session speech given by Samuel Iglesias:

    Other activities


    As usual, Igalia was present at many graphics-related conferences during the year:

    New Igalians in the team

    Igalia’s graphics team kept growing. Two new developers joined us in 2018:

    • Hyunjun Ko is an experienced Igalian with a strong background in multimedia, specifically GStreamer and Intel’s VAAPI. He is now contributing his impressive expertise to our Graphics team.
    • Arcady Goldmints-Orlov is the latest addition to the team. His previous experience as a graphics developer working on nVIDIA GPUs fits perfectly with the kind of work we are currently pushing at Igalia.


    Thank you for reading this blog post and we look forward to more work on graphics in 2019!


    by tanty at February 25, 2019 02:50 PM

    February 18, 2019

    Neil Roberts

    VkRunner at FOSDEM

    I attended FOSDEM again this year thanks to funding from Igalia. This time I gave a talk about VkRunner in the graphics dev room. It’s now available on Igalia’s YouTube channel below:

    I thought this might be a good opportunity to give a small status update of what has happened since my last blog post nearly a year ago.

    Test suite integration

    The biggest news is that VkRunner is now integrated into Khronos’ Vulkan CTS test suite and Mesa’s Piglit test suite. This means that if you work on a feature or a bugfix in your Vulkan driver and you want to make sure it doesn’t get regressed, it’s now really easy to add a VkRunner test for it and have it collected in one of these test suites. For Piglit all that is needed is to give the test script a .vk_shader_test extension and drop it anywhere under the tests/vulkan folder and it will automatically be picked up by the Piglit framework. As an added bonus, these tests are also run automatically on Intel’s CI system, so if your test is related to i965 in Mesa you can be sure it will not be regressed.

    On the Khronos CTS side the integration is currently a little less simple. Along with help from Samuel Iglesias, we have merged a branch into master that lays the groundwork for adding VkRunner tests. Currently there are only proof-of-concept tests to show how the tests could work. Adding more tests still requires tweaking the C++ code so it’s not quite as simple as we might hope.


    When VkRunner is built, it now also builds a static library containing a public API. This can be used to integrate VkRunner into a larger test suite. Indeed, the Khronos CTS integration takes advantage of this to execute the tests using the VkDevice created by the test suite itself. This also means it can execute multiple tests quickly without having to fork an external process.

    The API is intended to be very high-level and is as close as possible to just having a simple function that asks VkRunner to execute a test script and returns an enum reporting whether the test succeeded or not. There is an example of its usage in the README.

    Precompiled shader scripts

    One of the concerns raised when integrating VkRunner into CTS is that it’s not ideal to have to run glslang as an external process in order to compile the shaders in the scripts to SPIR-V. To work around this, I added the ability to have scripts with binary shaders. In this case the 32-bit integer numbers of the compiled SPIR-V are just listed in ASCII in the shader test instead of the GLSL source. Of course writing this by hand would be a pain, so the VkRunner repo includes a Python script to precompile a bunch of shaders in a batch. This can be really useful to run the tests on an embedded device where installing glslang isn’t practical.
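    As a rough illustration (the exact section syntax is described in VkRunner’s README; treat the details here as an assumption rather than verified syntax), a precompiled shader section simply replaces the GLSL source with the SPIR-V words written out as ASCII numbers:

```
[compute shader binary]
119734787 65536 524296 13 0
```

(The first word, 119734787, is just the SPIR-V magic number 0x07230203 in decimal.) In practice such sections would be generated by the batch-precompilation Python script mentioned above rather than written by hand.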

    However, in the end for the CTS integration we took a different approach. The CTS suite already has a mechanism to precompile all of the shaders for all tests. We wanted to take advantage of this also when compiling the shaders from VkRunner tests. To make this work, Samuel added some functions to the VkRunner API to query the GLSL in a VkRunner shader script and then replace them with binary equivalents. That way the CTS suite can use these functions to replace the shaders with its cached compiled versions.

    UBOs, SSBOs and compute shaders

    One of the biggest missing features mentioned in my last post was UBO and SSBO support. This has now been fixed with full support for setting values in UBOs and SSBOs and also probing the results of writing to SSBOs. Probing SSBOs is particularly useful alongside another added feature: compute shaders. Thanks to this we can run our shaders as compute shaders to calculate some results into an SSBO and probe the buffer to see whether it worked correctly. Here is an example script to show how that might look:

    [compute shader]
    #version 450

    /* UBO input containing an array of vec3s */
    layout(binding = 0) uniform inputs {
            vec3 input_values[4];
    };

    /* A matrix to apply to these values. This is stored in a push
     * constant. */
    layout(push_constant) uniform transforms {
            mat3 transform;
    };

    /* An SSBO to store the results */
    layout(binding = 1) buffer outputs {
            vec3 output_values[];
    };

    void
    main()
    {
            uint i = gl_WorkGroupID.x;

            /* Transform one of the inputs */
            output_values[i] = transform * input_values[i];
    }

    [test]
    # Set some input values in the UBO
    ubo 0 subdata vec3 0 \
      3 4 5 \
      1 2 3 \
      1.2 3.4 5.6 \
      42 11 9

    # Create the SSBO
    ssbo 1 1024

    # Store a matrix uniform to swap the x and y
    # components of the inputs
    push mat3 0 \
      0 1 0 \
      1 0 0 \
      0 0 1

    # Run the compute shader with one instance
    # for each input
    compute 4 1 1

    # Check that we got the expected results in the SSBO
    probe ssbo vec3 1 0 ~= \
      4 3 5 \
      2 1 3 \
      3.4 1.2 5.6 \
      11 42 9

    Extensions in the requirements section

    The requirements section can now contain the name of any extension. If an extension is listed, VkRunner will check for its availability when creating the device and enable it; otherwise it will report that the test was skipped. A lot of Vulkan extensions also add an extended features struct to be used when creating the device. These features can also be queried and enabled, for extensions that VkRunner knows about, simply by listing the name of the feature in that struct. For example, if shaderFloat16 is listed in the requirements section, VkRunner will check for the VK_KHR_shader_float16_int8 extension and the shaderFloat16 feature within its extended features struct. This makes it really easy to test optional features.
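    Based on the description above, a test that depends on this optional feature could start with a requirements section like the following (a minimal sketch, not taken from the original post):

```
[require]
# Skip the test unless the extension and its
# shaderFloat16 feature bit are both available
VK_KHR_shader_float16_int8
shaderFloat16
```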

    Cross-platform support

    I spent a fair bit of time making sure VkRunner works on Windows including compiling with Visual Studio. The build files have been converted to CMake which makes building on Windows even easier. It also compiles for Android thanks to patches from Jaebaek Seo. The repo contains Android build files to build the library and the vkrunner executable. This can be run directly on a device using adb.

    User interface

    There is a branch containing the beginnings of a user interface for editing VkRunner scripts. It presents an editor widget via GTK and continuously runs the test script in the background as you are editing it. It then displays the results in an image and reports any errors in a text field. The test is run in a separate process so that if it crashes it doesn’t bring down the user interface. I’m not sure whether it makes sense to merge this branch into master, but in the meantime it can be a convenient way to fiddle with a test when it fails and it’s not obvious why.

    And more…

    Lots of other work has been going on in the background. The best way to get to more details on what VkRunner can do is to take a look at the README. This has been kept up-to-date as the source of documentation for writing scripts.

    by nroberts at February 18, 2019 05:23 PM

    February 17, 2019

    Eleni Maria Stea

    i965: Improved support for the ETC/EAC formats on Intel Gen 7 and previous GPUs

    This post is about a recent contribution I’ve done to the i965 mesa driver to improve the emulation of the ETC/EAC texture formats on the Intel Gen 7 and older GPUs, as part of my work for Igalia’s graphics team. Demo: The video mostly shows the behavior of some GL calls and operations with …

    by hikiko at February 17, 2019 04:45 PM