Planet Igalia

January 17, 2020

Iago Toral

Raspberry Pi 4 V3D driver gets OpenGL ES 3.1 conformance

So continuing with the news, here is a fairly recent one: as the title states, I am happy to announce that the Raspberry Pi 4 is now an OpenGL ES 3.1 conformant product! This means that the Mesa V3D driver has successfully passed a whole lot of tests designed to validate the OpenGL ES 3.1 feature set, which should be a good sign of driver quality and correctness.

It should be noted that the Raspberry Pi 4 shipped with a V3D driver exposing OpenGL ES 3.0, so this also means that on top of all the bugfixes that we implemented for conformance, the driver has also gained new functionality! Particularly, we merged Eric’s previous work to enable Compute Shaders.

All this work has been in Mesa master since December (I believe there is only one fix missing waiting for us to address review feedback), and will hopefully make it to Raspberry Pi 4 users soon.

by Iago Toral at January 17, 2020 10:02 AM

Raspberry Pi 4 V3D driver gets Geometry Shaders

I actually landed this in Mesa back in December but never got to announce it anywhere. The implementation passes all the tests available in the Khronos Conformance Tests Suite (CTS). If you give this a try and find any bugs, please report them here with the V3D tag.

This is also the first large feature I have landed in V3D! Hopefully there will be more coming in the future.
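If you want to give the new geometry shader support a quick try, a minimal pass-through geometry shader in GLSL ES looks like the following. This is my own illustrative snippet, not something taken from the driver or the CTS, and it assumes the EXT_geometry_shader extension path on an OpenGL ES 3.1 context:

```glsl
#version 310 es
#extension GL_EXT_geometry_shader : require

// Minimal pass-through geometry shader: consumes one triangle and
// re-emits its three vertices unchanged.
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

void main() {
    for (int i = 0; i < 3; i++) {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```

Attach it between your vertex and fragment shaders; if the program links and renders the same as before, geometry shaders are working.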

by Iago Toral at January 17, 2020 09:45 AM

I am working on the Raspberry Pi 4 Mesa V3D driver

Yeah… this blog post is well overdue, but better late than never! So yes, I am currently working on progressing the Raspberry Pi 4 Mesa driver stack, together with my Igalian colleagues Piñeiro and Chema, continuing the fantastic work started by Eric Anholt on the Mesa V3D driver.

The Raspberry Pi 4 sports a VideoCore VI GPU that is capable of OpenGL ES 3.2, so it is a big update from the Raspberry Pi 3, which could only do OpenGL ES 2.0. Another big change with the Raspberry Pi 4 is that the Mesa V3D driver is the driver used by default with Raspbian. Because both GPUs are quite different, Eric had to write an all-new driver for the Raspberry Pi 4, and that is why there are two drivers in Mesa: the VC4 driver is for the Raspberry Pi 3, while the V3D driver targets the Raspberry Pi 4.

As for what we have been working on exactly, I wrote a long post on the Raspberry Pi blog some months ago with a lot of the details, but for those looking for the quick summary:

  • Shader compiler optimizations.
  • Significant Transform Feedback fixes and improvements.
  • Implemented OpenGL Logic Operations.
  • A bunch of bugfixes for Piglit test failures.
  • Set up a Continuous Integration system to identify regressions.
  • Rebased and merged Eric’s work on Compute Shaders.
  • Many bug fixes targeting the Khronos OpenGL ES Conformance Test Suite (CTS).

So that’s it for the late news. I hope to do a better job keeping this blog updated with the news this year, and to start with that I will be writing a couple of additional posts to highlight a few significant development milestones we achieved recently, so stay tuned for more!

by Iago Toral at January 17, 2020 09:31 AM

January 08, 2020

Víctor Jáquez

GStreamer-VAAPI 1.16 and libva 2.6 in Debian

Debian has migrated libva 2.6 into testing. This release includes a pull request that changes how the drivers are selected to be loaded and used. As the pull request mentions:

libva will try to load iHD firstly, if it failed. then it will load i965.

Also, Debian testing has imported that iHD driver in two flavors: intel-media-driver and intel-media-driver-non-free. So basically the iHD driver is now the main VAAPI driver for Intel platforms, though it only supports the newer chips; the older ones still require i965-va-driver.

Sadly, for the current GStreamer-VAAPI stable release, the iHD driver is not included in its driver white list. And this will pose a problem for users that have installed either of the intel-media-driver packages because, by default, such a driver is ignored and the VAAPI GStreamer elements won’t be registered.

There are three temporary workarounds (mutually exclusive) for those users (updated):

  1. Uninstall intel-media-driver* and install (or keep) the old i965-va-driver-shaders/i965-va-driver.
  2. Export LIBVA_DRIVER_NAME=i965 by default in your session. Normally this is done by adding the export to your $HOME/.profile file. This environment variable will force libva to load the i965 driver.
  3. And finally, export GST_VAAPI_ALL_DRIVERS=1 by default in your session. This is not advised, since many applications, such as Epiphany, might fail.
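For reference, workarounds 2 and 3 amount to one line each in $HOME/.profile. Remember that they are alternatives; pick only one — both are shown here just for illustration:

```shell
# Workaround 2: force libva to load the i965 driver
export LIBVA_DRIVER_NAME=i965

# Workaround 3 (not advised): make GStreamer-VAAPI accept any libva driver
export GST_VAAPI_ALL_DRIVERS=1
```

After editing $HOME/.profile you will need to log out and back in (or source the file) for the change to take effect.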

We prefer not to include iHD in the stable white list because most of the work done for that driver happened after the 1.16 release.

In the case of the GStreamer-VAAPI master branch (actively in development) we have merged iHD into the white list, since the Intel team has been working a lot to make it work. That change, though, will only be released with GStreamer 1.18.

by vjaquez at January 08, 2020 04:36 PM

Angelos Oikonomopoulos

A Dive Into JavaScriptCore

This post is an attempt to both document some aspects of JSC at the source level and to show how one can figure out what a piece of code actually does in a source base as large and complex as JSC's.

January 08, 2020 12:00 PM

January 01, 2020

Brian Kardell

Unlocking Colors

Unlocking Colors

Despite the fact that I draw and paint, and am involved with CSS, I am actually generally not good at design. Color, however, is endlessly fascinating. As there have been many color-oriented proposals for CSS to enable people who are good at design, I thought it might be nice to write about an important but underdiscussed bit that's necessary in order to really unlock a lot of them - and how we can help.

First... What even is color?

When you look at real physical things, what you perceive as color is a reflection of which wavelengths of light bounce back at you without being 'swallowed up' by the pigment. Pigments are subtractive, light is additive. That is, if you combine all of the colors of light, you get a pure white - and if you combine all of the pigments you get a pure black.

There is 'real information' that exists in the universe, but there are also real constraints when talking about color: If the light shining on that pigment doesn't contain all of the possible wavelengths, that color will look different, for example. Or, if your monitor can't physically create true pure white or pure black, things are a little skewed.

When we're doing things digitally there's another important constraint: we also have to store colors in some number of bits and some format, express them somehow, and be able to reason about them. Storing 'really accurate true values' would require a lot of bits, and machines (at various points in time) weren't capable of creating, managing or displaying such values - either efficiently, or at all. As a result, over the years we've created a lot of interesting tradeoffs with regard to color.
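The ubiquitous 24-bit "true color" encoding is exactly one such tradeoff (the numbers here are mine, just to make the point concrete):

```python
# 24-bit "true color" stores each of R, G and B in 8 bits,
# giving 256 discrete levels per channel.
levels_per_channel = 2 ** 8
total_colors = levels_per_channel ** 3
print(total_colors)  # 16777216 (~16.7 million) representable colors
```

About 16.7 million colors sounds like a lot, but it is still a coarse, machine-friendly slice of "all the colors that exist in nature".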

Color spaces

If you can imagine a true rainbow, with all of the real possible colors in the universe, the set of those you can actually represent (and how you move about in it) is called a 'color space'. The limits of a color space are called its gamut. Because your monitor works with light, it is an additive process, and 'Red', 'Green' and 'Blue' combine to create white (once upon a time, with actual CRTs). Along the way, we created the sRGB color space. The sRGB color space is a limited section of the rainbow, which is kind of narrow compared to all of the possible colors that exist in nature, but is very optimized for machines and common cases. Below is an example from Wikipedia illustrating the sRGB color space. The colors within the triangle are part of the sRGB color space.

sRGB color space, visualized

But, the point is that these tradeoffs aren't always desirable.

Today, for example, we have machines capable of recording or displaying things with a much wider gamut -- the stuff outside the triangle. Your monitor might, and if you have a TV capable of HDR (high dynamic range), that's got a wider gamut, for example... And then, there's the math…

Humans vs machines

It's more than just efficiency. sRGB, as I said, is really built for machines - and humans and machines are quite different. If I showed you a color, asked you to guess its value in RGB, mapped that to sRGB space and showed you the result on your monitor, chances that you could get it very close are pretty low in general. Further, the odds aren't great that you could efficiently zero in on closer and closer guesses, simply because it's a little unnatural. This is why we have created things like hsla, which some people find easier to reason about: 'more saturated' and 'lighter', and a circle of color, are potentially a little easier to think about. But it's not just that one uses RGB either; it's that the color space itself has weird characteristics.

This plays in especially if we want to reason about things mathematically - for example - in creating design systems.

Our design systems, ideally, want to reason about colors by 'moving through the color space'. Things like mixing colors, lightening and darkening work well if they have a sense of how our eyes really work, rather than of how machines like to store and display colors. That is, if I move one aspect just a little bit, I expect it to have the same effect no matter what color I start with... But in sRGB space, that really isn't true at all. The sRGB space is not perceptually uniform: the same mathematical movement has different degrees of perceived effect depending on where you are in the color space. If you want to read about a designer's experience with this, here's an interesting example of someone struggling to do this well.

Some examples…

This is, of course, very hard to understand if you don't live in this space, because we're just not used to it. It doesn't sound intuitive, or maybe it sounds unnecessarily complex - but it really isn't. It's kind of simple: the RGB color spaces aren't built for humans to reason about mathematically, really, that's it. Here are some examples where it is easy to see this in action...

See the Pen NWWZPdL by Brian Kardell (@bkardell) on CodePen.

Recently, my friend and co-author of the CSS Colors specification, Adam Argyle, ran a Twitter poll asking the question "Which of these is lighter?". Polls are tricky for stuff like this because people "assume" it's a trick question somehow. Nevertheless, there was overwhelming agreement that #2 was lighter… Because, well, it is.
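There is, in fact, a standard way to quantify "lighter": the L* (lightness) axis of the CIELAB color space. Here is a small sketch (the helper function is my own, using the textbook sRGB → linear RGB → XYZ (D65) → Lab conversion):

```python
def srgb_to_lab(r, g, b):
    """Convert an sRGB color (components as 0-1 floats) to CIELAB."""
    def linearize(c):
        # Undo the sRGB transfer function ("gamma").
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # Linear RGB -> CIE XYZ, using the standard sRGB matrix (D65).
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white

    def f(t):
        d = 6 / 29
        return t ** (1 / 3) if t > d ** 3 else t / (3 * d * d) + 4 / 29

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

# Full-intensity sRGB green and blue use the "same amount" of channel,
# yet their perceived lightness is wildly different:
print(round(srgb_to_lab(0.0, 1.0, 0.0)[0], 1))  # L* of green, ~87.7
print(round(srgb_to_lab(0.0, 0.0, 1.0)[0], 1))  # L* of blue,  ~32.3
```

Moving a channel by the same mathematical amount in sRGB clearly doesn't move perceived lightness by the same amount - which is the whole problem.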

There's another great illustration of this in that earlier article struggling with this:

The perceived brightness of all of the hues in a spectrum with the same saturation and lightness... It's quite clear they're different.
This matters a lot...

This matters a lot, for example, if you wanted to create a design system which mathematically reasons about color contrast (just like the author of that article).

There are color spaces like Lab and LCH which deal with the full spectrum and have qualities like perceptual uniformity. Thus, if we want great color functions for use in real design systems, everyone seems to agree that support for doing said math in the Lab/LCH color spaces is the ideal enabling feature. It's not exactly a prerequisite to doing useful things; in fact, a lot of designers, it seems, aren't even aware of this quality because we've lived so long with the sRGB space. However, realistically, we get the most value if we invest in having support for these two color spaces first. They are also well defined, don't require a lot of debate and are non-controversial. Having them would make everything else so much the better, because they give us the right tools to help the actual humans.

Recently, Adam opened this issue in Chromium.

Let's get this done.

To recap... We have two CSS color related functions which:

  • Are not controversial
  • Are based on quite mature and widely used color standards
  • Are important for accessibility
  • Are important for design systems
  • Make all of the other design system related work much more attractive
  • Are actually some of the easiest things we could do in CSS
  • Just aren't getting done

Chances are pretty good, I think, that as a designer or developer reading this, you'll find that last bullet irritating. You might be asking yourself "But why isn't it getting done? Wouldn't that help so much?". The answer is neither as offensive nor as complex as you might imagine; it's kind of simple really: yes, it would - but there's just so much in the pipeline already. In the end, everyone (including browser vendors) has to prioritize the work in the queue with available resources and skills. There are only so many people available with the particular skills to work on this, and there are lots of important or interesting things competing for their prioritization.

I recently wrote a post about this problem and how Igalia enables us to move things like this. We have shown that we can get big things done - like CSS Grid. This is a small thing in terms of effort and cost with an outsized impact. We don't need to wait and ask for the browsers to move this up in their priority queues; we just need someone to see the value in funding the work - and it's not huge. We can get this done.

So, if your organization is interested in design systems, or how you can help move important standards work, this is a great entry point for discussion and I'd love to talk with you. My Twitter dms are open as well.

January 01, 2020 05:00 AM

December 23, 2019

Mario Sanchez Prada

End of the year Update: 2019 edition

It’s the end of December and it seems that yet another year has gone by, so I figured that I’d write an EOY update to summarize my main work at Igalia as part of our Chromium team, as my humble attempt to make up for the lack of posts in this blog during this year.

I did quite a few things this year, but for the purpose of this blog post I’ll focus on what I consider the most relevant ones: work on the Servicification and the Blink Onion Soup projects, the migration to the new Mojo APIs and the BrowserInterfaceBroker, as well as a summary of the conferences I attended, both as a regular attendee and as a speaker.

But enough of an introduction, let’s dive now into the gory details…

Servicification: migration to the Identity service

As explained in my previous post from January, I’ve started this year working on the Chromium Servicification (s13n) Project. More specifically, I joined my team mates in helping with the migration to the Identity service by updating consumers of several classes from the sign-in component to ensure they now use the new IdentityManager API instead of directly accessing those other lower level APIs.

This was important because at some point the Identity Service will run in a separate process, and a precondition for that to happen is that all access to sign-in related functionality goes through the IdentityManager, so that other processes can communicate with it directly via Mojo interfaces exposed by the Identity service.

I’ve already talked long enough in my previous post, so please take a look in there if you want to know more details on what that work was exactly about.

The Blink Onion Soup project

Interestingly enough, a bit after finishing up working on the Identity service, our team dived deep into helping with another Chromium project that shared at least one of the goals of the s13n project: to improve the health of Chromium’s massive codebase. The project is code-named Blink Onion Soup and its main goal is, as described in the original design document from 2015, to “simplify the codebase, empower developers to implement features that run faster, and remove hurdles for developers interfacing with the rest of the Chromium”. There’s also a nice slide deck from 2016’s BlinkOn 6 that explains the idea in a more visual way, if you’re interested.

“Layers”, by Robert Couse-Baker (CC BY 2.0)

In a nutshell, the main idea is to simplify the codebase by removing/reducing the several layers of abstraction located between Chromium and Blink that were necessary back in the day, before Blink was forked out of WebKit, to support different embedders with their particular needs (e.g. Epiphany, Chromium, Safari…). Those layers made sense back then, but these days Blink’s only embedder is Chromium’s content module, which is the module that Chrome and other Chromium-based browsers embed to leverage Chromium’s implementation of the Web Platform, and also where the multi-process and sandboxing architecture is implemented.

And in order to implement the multi-process model, the content module is split in two main parts running in separate processes, which communicate among each other over IPC mechanisms: //content/browser, which represents the “browser process” that you embed in your application via the Content API, and //content/renderer, which represents the “renderer process” that internally runs the web engine’s logic, that is, Blink.

With this in mind, the initial version of the Blink Onion Soup project (aka “Onion Soup 1.0”) was born about 4 years ago, and the folks spearheading this proposal started working on a 3-way plan to implement their vision, which can be summarized as follows:

  1. Migrate usage of Chromium’s legacy IPC to the new IPC mechanism called Mojo.
  2. Move as much functionality as possible from //content/renderer down into Blink itself.
  3. Slim down Blink’s public APIs by removing classes/enums unused outside of Blink.

Three clear steps, but definitely not easy ones as you can imagine. First of all, if we were to remove levels of indirection between //content/renderer and Blink as well as to slim down Blink’s public APIs as much as possible, a precondition for that would be to allow direct communication between the browser process and Blink itself, right?

In other words, if you need your browser process to communicate with Blink for some specific purpose (e.g. reacting in a visual way to a Push Notification), it would certainly be sub-optimal to have something like this:

…and yet that is what would happen if we kept using Chromium’s legacy IPC which, unlike Mojo, doesn’t allow us to communicate with Blink directly from //content/browser, meaning that we’d need to go first through //content/renderer and then navigate through different layers to move between there and Blink itself.

In contrast, using Mojo would allow us to have Blink implement those remote services internally and then publicly declare the relevant Mojo interfaces so that other processes can interact with them without going through extra layers. Thus, doing that kind of migration would ultimately allow us to end up with something like this:

…which looks nicer indeed, since now it is possible to communicate directly with Blink, where the remote service would be implemented (either in its core or in a module). Besides, it would no longer be necessary to consume Blink’s public API from //content/renderer, nor the other way around, enabling us to remove some code.

However, we can’t simply ignore some stuff that lives in //content/renderer implementing part of the original logic so, before we can get to the lovely simplification shown above, we would likely need to move some logic from //content/renderer right into Blink, which is what the second bullet point of the list above is about. Unfortunately, this is not always possible but, whenever it is an option, the job here would be to figure out what of that logic in //content/renderer is really needed and then figure out how to move it into Blink, likely removing some code along the way.

This particular step is what we commonly call “Onion Soup’ing” //content/renderer/<feature> (not entirely sure “Onion Soup” is a verb in English, though…) and this is, for instance, how things looked before (left) and after (right) Onion Soup’ing a feature I worked on myself: Chromium’s implementation of the Push API:

Onion Soup’ing //content/renderer/push_messaging

Note how the whole design got quite simplified moving from the left to the right side? Well, that’s because some abstract classes declared in Blink’s public API and implemented in //content/renderer (e.g. WebPushProvider, WebPushMessagingClient) are no longer needed now that those implementations got moved into Blink (i.e. PushProvider and PushMessagingClient), meaning that we can now finally remove them.

Of course, there were also cases where we found some public APIs in Blink that were not used anywhere, as well as cases where they were only being used inside of Blink itself, perhaps because nobody noticed when that happened at some point in the past due to some other refactoring. In those cases the task was easier, as we would just remove them from the public API, if completely unused, or move them into Blink if still needed there, so that they are no longer exposed to a content module that no longer cares about that.

Now, trying to provide a high-level overview of what our team “Onion Soup’ed” this year, I think I can say with confidence that we migrated (or helped migrate) more than 10 different modules like the one I mentioned above, such as android/, appcache/, media/stream/, media/webrtc, push_messaging/ and webdatabase/, among others. You can see the full list with all the modules migrated during the lifetime of this project in the spreadsheet tracking the Onion Soup efforts.

In my particular case, I “Onion Soup’ed” the PushMessaging, WebDatabase and SurroundingText features, which was a fairly complete exercise as it involved working on all 3 bullet points: migrating to Mojo, moving logic from //content/renderer to Blink and removing unused classes from Blink’s public API.

And as for slimming down Blink’s public API, I can tell that we helped get to a point where more than 125 classes/enums were removed from Blink’s public APIs, simplifying and reducing the Chromium codebase along the way, as you can check in this other spreadsheet that tracked that particular piece of work.

But we’re not done yet! While overall progress for the Onion Soup 1.0 project is around 90% right now, there are still a few more modules that require “Onion Soup’ing”, among which we’ll be tackling media/ (already WIP) and accessibility/ (starting in 2020), so there’s quite some more work to be done on that regard.

Also, there is a newer design document for the so-called Onion Soup 2.0 project that contains some tasks that we have already been working on for a while, such as “Finish Onion Soup 1.0”, “Slim down Blink public APIs”, “Switch Mojo to new syntax” and “Convert legacy IPC in //content to Mojo”, so we're definitely not done yet. Good news here, though: some of those tasks are already quite advanced, and in the particular case of the migration to the new Mojo syntax it’s nearly done by now, which is precisely what I’m talking about next…

Migration to the new Mojo APIs and the BrowserInterfaceBroker

Along with working on “Onion Soup’ing” some features, a big chunk of my time this year went also into this other task from the Onion Soup 2.0 project, where I was lucky enough again not to be alone, but accompanied by several of my team mates from Igalia‘s Chromium team.

This was a massive task where we worked hard to migrate all of Chromium’s codebase to the new Mojo APIs that were introduced a few months back, with the idea of getting Blink updated first and then having everything else migrated by the end of the year.

Progress of migrations to the new Mojo syntax: June 1st – Dec 23rd, 2019

But first things first: you might be wondering what was wrong with the “old” Mojo APIs since, after all, Mojo is the new thing we were migrating to from Chromium’s legacy API, right?

Well, as it turns out, the previous APIs had a few problems that were causing some confusion due to not providing the most intuitive type names (e.g. what is an InterfacePtrInfo anyway?), as well as being quite error-prone, since the old types were not as strict as the new ones in enforcing certain conditions that should not happen (e.g. trying to bind an already-bound endpoint shouldn’t be allowed). In the Mojo Bindings Conversion Cheatsheet you can find an exhaustive list of cases that needed to be considered, in case you want to know more details about these types of migrations.
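To give a rough feel for the shape of the change, here is a sketch of what migrating one consumer looks like. The db.mojom.Database interface is hypothetical, and this is only an illustration of the type renames, not a compilable example (real migrations involve many more cases, as the cheatsheet shows):

```cpp
// Hypothetical example: a consumer of a db.mojom.Database interface.

// Before: "InterfacePtr" and "Binding" don't make it obvious which end
// of the message pipe is which, and misuse tends to surface late.
mojo::InterfacePtr<db::mojom::Database> database_ptr;
mojo::Binding<db::mojom::Database> binding{this};

// After: "Remote" (the calling end) and "Receiver" (the implementing
// end) are explicit, and conditions such as binding an already-bound
// endpoint are enforced much more strictly.
mojo::Remote<db::mojom::Database> database;
mojo::Receiver<db::mojom::Database> receiver{this};
```

The renames look mechanical here, but as explained below, the stricter semantics of the new types are exactly why this could not be a simple search & replace.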

Now, as a consequence of this additional complexity, the task wouldn’t be as simple as a “search & replace” operation because, while moving from old to new code, it would often be necessary to fix situations where the old code was working fine just because it was relying on some constraints not being checked. And if you top that up with the fact that there were, literally, thousands of lines in the Chromium codebase using the old types, then you’ll see why this was a massive task to take on.

Fortunately, after a few months of hard work done by our Chromium team, we can proudly say that we have nearly finished this task, which involved more than 1100 patches landed upstream after combining the patches that migrated the types inside Blink (see bug 978694) with those that tackled the rest of the Chromium repository (see bug 955171).

And by “nearly finished” I mean an overall progress of 99.21% according to the Migration to new mojo types spreadsheet where we track this effort, where Blink and //content have been fully migrated, and all the other directories, aggregated together, are at 98.64%, not bad!

In this regard, I’ve also been sending a bi-weekly status report mail to the chromium-mojo and platform-architecture-dev mailing lists for a while (see the latest report here), so make sure to subscribe there if you’re interested, even though those reports might not last much longer!

Now, back with our feet on the ground, the main roadblock at the moment preventing us from reaching 100% is //components/arc, whose migration needs to be agreed with the folks maintaining a copy of Chromium’s ARC mojo files for Android and ChromeOS. This is currently under discussion (see chromium-mojo ML and bug 1035484) and so I’m confident it will be something we’ll hopefully be able to achieve early next year.

Finally, and still related to these Mojo migrations, my colleague Shin and I took a “little detour” while working on this migration and focused for a while on the more specific task of migrating uses of Chromium’s InterfaceProvider to the new BrowserInterfaceBroker class. And while this was not a task as massive as the other migration, it was also very important because, besides fixing some problems inherent to the old InterfaceProvider API, it also blocked the migration to the new mojo types, as InterfaceProvider usually relied on the old types!

Architecture of the BrowserInterfaceBroker

Good news here as well, though: after having the two of us working on this task for a few weeks, we can proudly say that, today, we have finished all the 132 migrations that were needed and are now in the process of doing some after-the-job cleanup operations that will remove even more code from the repository! \o/

Attendance to conferences

This year was particularly busy for me in terms of conferences, as I did travel to a few events both as an attendee and a speaker. So, here’s a summary about that as well:

As usual, I started the year attending one of my favourite conferences of the year by going to FOSDEM 2019 in Brussels. And even though I didn’t have any talk to present in there, I did enjoy my visit like every year I go there. Being able to meet so many people and being able to attend such an impressive amount of interesting talks over the weekend while having some beers and chocolate is always great!

Next stop was Toronto, Canada, where I attended BlinkOn 10 on April 9th & 10th. I was honoured to have a chance to present a summary of the contributions that Igalia made to the Chromium Open Source project in the 12 months before the event, which was a rewarding experience but also quite an intense one, because it was a lightning talk and I had to go through all the ~10 slides in a bit under 3 minutes! Slides are here and there is also a video of the talk, in case you want to check how crazy that was.

Took a bit of a rest from conferences over the summer and then attended, also as usual, the Web Engines Hackfest that we at Igalia have been organising every single year since 2009. Didn’t have a presentation this time, but still it was a blast to attend it once again as an Igalian and celebrate the hackfest’s 10th anniversary sharing knowledge and experiences with the people who attended this year’s edition.

Finally, I attended two conferences in the Bay Area by mid November: first one was the Chrome Dev Summit 2019 in San Francisco on Nov 11-12, and the second one was BlinkOn 11 in Sunnyvale on Nov 14-15. It was my first time at the Chrome Dev Summit and I have to say I was fairly impressed by the event, how it was organised and the quality of the talks in there. It was also great for me, as a browsers developer, to see first hand what are the things web developers are more & less excited about, what’s coming next… and to get to meet people I would have never had a chance to meet in other events.

As for BlinkOn 11, I presented a 30 min talk about our work on the Onion Soup project, the Mojo migrations and improving Chromium’s code health in general, along with my colleague Antonio Gomes. It was basically an “extended” version of this post, where we went not only through the tasks I was personally involved with, but also talked about tasks that other members of our team worked on during this year, which include many other things! Feel free to check out the slides here, as well as the video of the talk.

Wrapping Up

As you might have guessed, 2019 has been a pretty exciting and busy year for me work-wise, but the most interesting bit in my opinion is that what I mentioned here was just the tip of the iceberg… many other things happened on the personal side of things, starting with the fact that this was the year that we consolidated our return to Spain after 6 years living abroad, for instance.

Also, and getting back to work-related stuff here again, this year I was also accepted back into Igalia‘s Assembly after having re-joined this amazing company back in September 2018, following a 6-year “gap” living and working in the UK. Besides being something I was very excited and happy about, this also brought some more responsibilities onto my plate, as is natural.

Last, I can’t finish this post without being explicitly grateful for all the people I got to interact with during this year, both at work and outside, who made my life easier and nicer at so many different levels. To all of you, cheers!

And to everyone else reading this… happy holidays and happy new year in advance!

by mario at December 23, 2019 11:13 PM

December 16, 2019

Brian Kardell

2019, That's a Wrap.

2019, That's a Wrap.

As 2019 draws to a close and we look toward a new year (a new decade, in fact), Igalians like to take a moment and look back on the year. As it is my first year at Igalia, I am substantially impressed by the long list of things we accomplished. I thought it was worth, then, highlighting some of that work and putting it into context: what we did, how, and why I think this work is important to the open web platform and to us all…

All the Browsers…

All of the browser projects are open source now and we are active and trusted committers to all of them. That's important because browser diversity matters and we're able to jump in and lend a hand wherever necessary. The rate at which we contribute to any given browser can vary from year to year for a variety of reasons (many explained in this post), but I'd like to illustrate what that actually looked like in 2019. Let's look at the commits for 2019…

Commits are an imperfect kind of measure. Not all commits are of the same value (in fact, value is hard to qualify here). It doesn't account for potentially lots and lots of downstream work that went into getting one upstream commit. Even figuring out how to count them can be tricky. Looking at them purely as a vague relative scale of "how much did we do", they're one useful illustration. That's how I'd like you to view these statistics - as being about our slice of the commit pie, and not anyone else's.


During BlinkOn this year, you might have seen a number of mentions about our contributions to Chromium in 2019: We made the second most commits after Google this year.

Just one of the slides shared from Google’s presentation at BlinkOn 2019, showing 1782 commits from Igalia at that point in 2019…

Chromium, if you're not aware, is a very large and busy project with lots of big company contributors, so it's quite exciting to show the level at which we are contributing here. (Speaking of imperfections in this measure - all of our MathML work was done downstream and thus commits toward this very significant project aren't even counted here!)


This year, our contributions to Mozilla projects were relatively small by comparison. Nevertheless, 9 Igalians still actively contributed over 160 commits to Servo and mozilla-central in 2019, placing us in the top 10 contributors to Servo and the top 20 to mozilla-central.

A little over 1% of contributions to Servo were by Igalians in 2019 (over 3.4% of non-Mozillian commits).

As an aside, it's pretty great to see that the #1 committer to mozilla-central this year is our friend Emilio Cobos (by a wide margin, over 2-to-1). Emilio completed an Igalia Coding Experience in 2017… and gave a great talk at last year's Web Engines Hackfest. Congratulations, Emilio!


Very importantly: we are also the #2 contributor to WebKit. In 2019, 37 Igalians made over 1100 commits to WebKit, delivering ~11% of the total commits.

Almost 11% of all commits to WebKit in 2019 were from Igalians.

In fact, if we zoom out and look at what that looks like without Apple, our level of commitment is even more evident…

60% of all non-Apple commits to WebKit in 2019 were from Igalians.

I think this is important because there's a larger, untold story here which I look forward to talking more about next year. In short, though: while you might think of WebKit as “Apple”, the truth is that there are a number of companies invested and interested in keeping WebKit alive and increasingly competitive, and Igalia is key among them. Why? Because 3 of the 5 downloads available are actually maintained by Igalia. These WebKit browsers are used on hundreds of millions of devices already, and that's growing. Chances are pretty good that you've encountered one, maybe even have one, and just didn't know it.

A Unique Role…

As I explained in my post earlier this year Beyond Browser Vendors: Igalia does this work by uniquely expanding the ability to prioritize work and allowing us all to better collectively improve the commons and share the load.

Bloomberg, as you might know, funded the development of CSS Grid. In 2019 they continued to fund a lot of great efforts in both JavaScript and CSS standardization and implementation. To name just a few from 2019: in CSS they funded some paint optimizations, the development of white-space: break-spaces, line-break: anywhere, overflow-wrap: anywhere, ::marker and list-style-type: <string>; in JavaScript, BigInt, Public and Private Class Fields, Decorators, and Records and Tuples.

But they weren't remotely alone: 2019 saw many diverse clients funding some kind of upstream work for all sorts of reasons. Google AMP, as another example, funded a lot of great work in our partnership for Predictability efforts. Several sponsors helped fund work on MathML. Our work with embedded browsers has helped introduce a number of new paths for contributions as well. We worked on Internationalization, accessibility, module loading, top-level-await, a proposal for operator overloading and more.

I'm also very proud of the fact that we are an example of what we preach in this respect: among those who sponsor our work is… us. Igalia cares about doing things that matter, and while we are great at finding ways to help fund the advancement of work that matters, we're also willing to invest ourselves.

So, what matters?

Lots of things matter to Igalia; here are a few broad technical areas that we prioritized in 2019…


This year MDN organized a giant survey which turned up a lot of interesting findings about what causes developers pain and what they would like to see improved. Several of these had to do with uneven, inconsistent (buggy) or missing implementations of standards.

Thanks to lots of clients, and some of our own investment, we were able to help here in a big way. This year we were able to resolve a number of incompatibilities, create a lot of tests, and fix bugs that cause a lot of developer pain. AMP, notably, has sponsored a lot of these important things in partnership with us, even funding 'last implementations' of important missing features in WebKit like ResizeObserver and preloading for responsive images.

In 2019, 20 Igalians made 343 commits to Web Platform Tests, adding thousands of new tests and placing us among the top few contributors.

In 2019, Igalia was the #3 contributor to Web Platform Tests, after Google and Mozilla.

We were among the top contributors to Test262 as well… great contributions here from our friends at Bocoup! We are two very unique companies in this space - contributing to the open web with a different kind of background than most. It's excellent to see how much we're all accomplishing together - just between the two of us, that's approaching 60% of the commits!

We're among the top contributors to Test262 as well.

“Accessibility is very important to us” might sound like a nice thing to say, but at Igalia it's very true. We are probably the greatest champions of accessibility on Linux. Here's what Igalia brought to the table in 2019:

As the maintainers of the Orca screen reader for the GNOME desktop, we did an enormous amount of work this year, making over 460 commits (over 83% of the total) to Orca itself in 2019.

Over 83% of all commits to Orca in 2019 were from Igalians.

We were also active contributors to lots of the underlying parts (at-spi2-atk, at-spi2-core). And, thanks to investment from Google, we were able to properly implement ATK support in Chromium, making Chrome for Linux very accessible with a screen reader. We also actively contributed to Accerciser, a popular Python-based interactive accessibility explorer.


Of course, all of our implementation work is predicated on open standards; we care deeply about them, and I'm proud of our participation. Here are a few highlights from 2019:

Igalia's Joanmarie Diggs is the chair of the ARIA Working Group, and co-editor of several of its related specifications. Igalia had the second most commits to the ARIA specification itself in 2019 - almost 20%, behind only Adobe (well done, Adobe!).

Almost 1/5th of contributions to ARIA came from Igalians in 2019.

We were also a top-3 contributor to the HTML Accessibility API Mappings 1.0 and actively contributed to Accessible Name and Description Computation.

This year, during the CSS Working Group's face-to-face meeting at the Mozilla offices in Toronto, our colleague Oriol Brufau was unanimously (and quickly) appointed as co-editor of the CSS Grid specification.

2019 also saw the hiring of Nikolas Zimmermann, bringing the original authors of KSVG (the original SVG implementation) back together at Igalia. We've nominated Niko to the SVG Working Group and we're so excited to share what he's working on (see end of post):

Math on the Web

Enabling authors to share text was an original goal of the Web, and it was evident from the very earliest days at CERN that math was an important aspect of this. However, because of a complex history and a lacking implementation in Chromium, this remained elusive and without a starting point for productive conversation. We believe that this matters to society, and that resolving it and enabling a way to move forward is the right thing to do.

With some initial funding from sponsors, this year we helped to steadily advance and define an interoperable, modern and rigorous MathML Core specification, well integrated with the Web Platform, to provide a starting point. Igalia's Fred Wang is co-editor of MathML Core, and thanks to lots of hard work from Rob Buis and Fred we completed an initial implementation alongside the specification, writing tests and working toward high-quality interoperability along the way.

Modern interoperability and high-quality rendering by July 2019.

We filed the Intent to Implement and began the process of upstreaming. As part of this, we have also created a corresponding explainer and filed for a review with the W3C Technical Architecture Group (TAG). We've seen positive movement and feedback, and several patches have already begun to land upstream toward resolving this long-standing, popular and overdue issue. Thanks to our work, we will soon be able to say (interoperably) that the Web can handle the original use cases defined at CERN in the early 1990s.

Outreach and Community

Standards are a complex process and we believe that enabling good standardization requires input from a considerably larger audience than is historically able to engage in standards meetings. Bringing diverse parts of the community together is important. This has historically been quite a challenge to achieve. As such, we've been doing a lot in this space.


So, I think we did some amazing and important things in 2019. But, as amazing as it was: just wait until you see what we do in 2020.

In fact, we're excited about so many things, we couldn't even wait to give you a taste...

December 16, 2019 05:00 AM

December 12, 2019

Nikolas Zimmermann

CSS 3D transformations & SVG

As mentioned in my first article, I have a long relationship with the WebKit project and its SVG implementation. In this post I will explain some exciting new developments and possible advances, and present some demos of the state of the art (if you cannot wait, go and watch them, and come back for the details). To understand why these developments are both important and achievable now, though, we’ll have to first understand some history.

by (Nikolas Zimmermann) at December 12, 2019 12:00 AM

December 09, 2019

Frédéric Wang

Review of my year 2019 at Igalia

Co-owner and mentorship

In 2016, I was among the new software engineers who joined Igalia. Three years later I applied to become co-owner of the company and the legal paperwork was completed in April. As my colleague Andy explained on his blog, this does not change a lot of things in practice because most of the decisions are taken within the assembly. However, I’m still very happy and proud of having achieved this step 😄

One of my new duties has been to act as the “mentor” of Miyoung Shin since February and to help with her integration at Igalia. Shin has been instrumental in Igalia’s project to improve Chromium’s Code Health, with an impressive number of ~500 commits. You can watch the video of her BlinkOn lightning talk on YouTube. In addition to her excellent technical contributions she has also invested her energy in the company’s life, helping with the organization of social activities during our summits, something which has really been appreciated by her colleagues. I’m really glad that she has recently entered the assembly and will be able to take a more active role in the company’s decisions!

Julie and Shin at BlinkOn11
Julie (left) and Shin (right) at BlinkOn 11.

Working with new colleagues

Igalia also hired Cathie Chen last December and I’ve gotten the chance to work with her as part of our Web Platform collaboration with AMP. Cathie has been very productive and we have been able to push forward several features, mostly related to scrolling. Additionally, she implemented ResizeObserver in WebKit, which is quite an exciting new feature for web developers!

Cathie Chen signing 溜溜溜
Cathie executing traditional Chinese sign language.

The other project I’ve contributed to this year is MathML. We are introducing new math and font concepts into Chromium’s new layout engine, so the project is both technically challenging and fascinating. For this project, I was able to rely on Rob’s excellent development skills to complete this year’s implementation roadmap on our Chromium branch and start upstreaming patches.

In addition, a lot of extra non-implementation effort has been done which led to consensus between browser implementers, MathML enthusiasts and other web developer groups. Brian Kardell became a developer advocate for Igalia in March and has been very helpful talking to different people, explaining our project and introducing ideas from the Extensible Web, HTML or CSS Communities. I strongly recommend his recent Igalia chat and lightning talk for a good explanation of what we have been doing this year.

Photo of Brian
Brian presenting his lightning talk at BlinkOn11.


These are the developer conferences I attended this year and gave talks about our MathML project:

Me at BlinkOn 11
Me, giving my BlinkOn 11 talk on "MathML Core".

As usual it was nice to meet the web platform and Chromium communities during these events and to hear about the great projects happening. I’m also super excited that the assembly decided to try something new for my favorite event. Indeed, next year we will organize the Web Engines Hackfest in May to avoid conflicts with other browser events, but more importantly, we will move to a bigger venue so that we can welcome more people. I’m really looking forward to seeing how things go!

Paris - A Coruña by train

Environmental protection is an important topic discussed in our cooperative. This year, I’ve tried to reduce my carbon footprint when traveling to Igalia’s headquarters by taking the train instead of the plane. Obviously the latter is faster, but the former is much more comfortable and has fewer security constraints. It is possible to take a high-speed train from Paris to Barcelona and then a Trenhotel to avoid a hotel night. For a quantitative comparison, let’s consider this table, based on my personal travel experience:

Journey                         Transport   Duration   Price      CO2/passenger
Paris - Barcelona               TGV         ~6h30      60-90€     6kg
Barcelona - A Coruña            Trenhotel   ~15h       40-60€     Unspecified
Paris - A Coruña (via Madrid)   Iberia      4h30-5h    100-150€   192kg

The price depends on how early the tickets are booked, but it is more or less the same for train and plane. Renfe does not seem to indicate estimated CO2 emissions, and trains to Galicia are probably not the most eco-friendly. However, using this online estimator, the 1200 kilometers between Barcelona and A Coruña would emit 50kg per passenger by national train. This would mean a reduction of 71% in the emission of CO2, which is quite significant. Hopefully things will get better when, one day, the AVE is available between Madrid and A Coruña… 😉

December 09, 2019 11:00 PM

December 08, 2019

Philippe Normand

HTML overlays with GstWPE, the demo

Once again this year I attended the GStreamer Conference and, just before that, the Embedded Linux Conference Europe, which took place in Lyon (France). Both events were a good opportunity to demo one of the use-cases I have in mind for GstWPE: HTML overlays!

As we at Igalia usually have a booth at ELC, I thought a GstWPE demo would be nice to have so we could show it there. The demo is a rather simple GTK application presenting a live preview of the webcam video capture with an HTML overlay blended in. The HTML and CSS can be modified using the embedded text editor, and the overlay will be updated accordingly. The final video stream can even be streamed over RTMP to the main streaming platforms (Twitch, YouTube, Mixer)! Here is a screenshot:

The code of the demo is available on Igalia’s GitHub. Interested people should be able to try it as well, since the app is packaged as a Flatpak. See the instructions in the GitHub repo for more details.
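For reference, the kind of pipeline the demo builds can be sketched with gst-launch. This is a hypothetical pipeline, not the demo's actual code: the element and property names (wpesrc, location, draw-background) come from the GstWPE plugin shipped with GStreamer 1.16, the overlay file path is made up, and a videotestsrc stands in for the webcam, so adjust for your setup:

```sh
# Blend an HTML overlay (rendered by WPEWebKit) over a test video source
# and display the result. Requires GStreamer >= 1.16 with the wpe and
# opengl plugins installed.
gst-launch-1.0 \
  glvideomixer name=mix ! glimagesink \
  wpesrc location="file:///tmp/overlay.html" draw-background=0 ! mix. \
  videotestsrc ! glupload ! glcolorconvert ! mix.
```

To get closer to the demo, the videotestsrc branch would be replaced by a v4l2src webcam capture.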

Having this demo running on our booth greatly helped us to explain GstWPE and how it can be used in real-life GStreamer applications. Combining the flexibility of the Multimedia framework with the wide (wild) features of the Web Platform will for sure increase the synergy and foster collaboration!

As a reminder, GstWPE is available in GStreamer since the 1.16 version. On the roadmap for the mid-term I plan to add Audio support, thus allowing even better integration between WPEWebKit and GStreamer. Imagine injecting PCM audio coming from WPEWebKit into an audio mixer in your GStreamer-based application! Stay tuned.

by Philippe Normand at December 08, 2019 02:00 PM

December 04, 2019

Manuel Rego

Web Engines Hackfest 2020: New dates, new venue!

Igalia is pleased to announce the 12th annual Web Engines Hackfest. It will take place on May 18-20 in A Coruña, and in a new venue: Palexco. You can find all the information, together with the registration form, on the hackfest website:

Mission and vision

The main goal behind this event is to have a place for people from different parts of the web platform community to meet together for a few days and talk, discuss, draft, prototype, implement, etc. on different topics of interest for the whole group.

There are not many events where browser implementors from different engines can sit together and talk about their latest developments, their plans for the future, or the controversial topics they have been discussing online.

However, this is not an event only for developers; other roles that are part of the community, like people working on standards, are welcome too.

It’s really nice to have people from different backgrounds, working on a variety of things around the web, as that helps to reach better solutions, enlighten the conversations and draft higher-quality conclusions during the discussions.

We believe the combination of all these factors makes the Web Engines Hackfest a unique opportunity to push forward the evolution of the web.

2020 edition

We realized that autumn is usually full of browser events (TPAC, BlinkOn, WebKit Contributors Meeting… just to name a few), and most of the people coming to the hackfest are also attending some of them. For that reason we thought it would be a good idea to move the event from fall to spring, in order to better accommodate everyone’s schedules and avoid unfortunate conflicts or unnecessary hard choices. So next year the hackfest will happen in May, from Monday 18th to Wednesday 20th (both days included).

At this stage the event has become quite popular, and during the past three years we have had around 60-70 people. The Igalia office has been a great venue for the hackfest during all this time, but on the last few occasions we were using it at its full capacity, so this time we decided to move the hackfest to a new venue, which will allow us to grow to 100 or more participants; let’s see how things go. The venue will be Palexco, a lovely conference building in A Coruña’s port, very close to the city center. We really hope you like the new place and enjoy it.

New venue: Palexco (picture by Jose Luis Cernadas Iglesias)

Having more people and the new venue brings us lots of challenges, but also new possibilities. So we’re changing the format of the event a little bit: we’ll have a first day in a more regular conference fashion (with some talks and lightning talks), but also including some space for discussions and hacking. Then the last 2 days will follow the usual unconference format, with a bunch of breakout sessions, informal discussions, etc. We believe the conversations and discussions that happen during the hackfest are one of the best things about the event, and we hope this new format will work well.

Join us

Thanks to the change of venue, the event is no longer invitation-only (as it used to be). We’ll still be sending invitations to the people usually interested in the hackfest, but you can already register by yourself just by filling in the registration form.

Soon we will open a call for papers for the talks, stay tuned! We’ll also have room for lightning talks, so people attending can take advantage of them to explain their work and plans at the event.

Last but not least, Arm, Google and Igalia will be sponsoring the 2020 edition, thank you very much! We hope more companies join the trend and help us to arrange the event with their support. If your company is willing to sponsor the hackfest, please don’t hesitate to contact us at

Some historical information

Igalia has been organizing and hosting this event since 2009. Back then, the event was called the “WebKitGTK+ Hackfest”. The WebKitGTK+ project was, in those days, in its early stages. There was lots of work to do around the project, and a few people (11 to be specific) decided to work together for a whole week to move the project forward. The event was really successful and kept happening in a similar fashion for 5 years.

In 2014 we decided to broaden the scope of the event and not restrict it to people working only on WebKitGTK+ (or WebKit), but open it to members of all parts of the web platform community (including folks working on other engines like Blink, Servo or Gecko). We changed the name to “Web Engines Hackfest”, got a very positive response, and the event has been running yearly since then, growing more and more every year.

And now we’re looking forward to the 2020 edition, in a new venue and with more people than ever. Let’s hope everything goes great.

December 04, 2019 11:00 PM

November 27, 2019

Jacobo Aragunde

Initialization of the Chromium extension API system

Chromium has infrastructure in place to extend the JavaScript API that is available to web clients that fulfill some conditions. Those web clients can be Chrome extensions, certain websites or local resources like the chrome:// pages. Also, different shell implementations can provide their own extensions. This is an overview of how it works, and what would be required for a custom shell implementation to provide its own API extensions.

Render process

In our previous notes about the Chromium initialization process, we had identified ChromeMainDelegate as the main initialization class. It creates an instance of ChromeContentRendererClient to set up the render process.

Two important classes related to the extension API system are initialized by the ChromeContentRendererClient: ExtensionsClient and ExtensionsRendererClient. Both are singletons; the content renderer client sets them up to contain the corresponding Chrome* implementations: ChromeExtensionsClient and ChromeExtensionsRendererClient.

ExtensionClients class diagram

The ExtensionsClient class maintains a list of ExtensionsAPIProvider objects. The constructor of the specific implementation of the client will create instances of the relevant ExtensionsAPIProvider and add them to the list by calling the parent method AddAPIProvider(). In the case of ChromeExtensionsClient, it will add CoreExtensionsAPIProvider and ChromeExtensionsAPIProvider.

ExtensionsAPIProvider class diagram

The purpose of the ExtensionAPIProvider objects is to act as a proxy to a set of json configuration files. On one hand, we have _api_features.json, _permission_features.json, _manifest_features.json and _behavior_features.json. These files are documented in, their purpose is to define the requirements to extension features. The first file is for access to API features: you specify in which contexts or for which kinds of extensions a certain API would be available. The next two files can limit extension usage of permission and manifest entries, and the last one for miscellaneous extension behaviors. The ExtensionAPIProvider class provides four methods to run the code autogenerated from these four files. Notice that not all of them are required.

The providers are later used by ExtensionsClient::CreateFeatureProvider. This method will be called with “api”, “permission”, “manifest” or “behavior” parameters, and it will run the corresponding method for every registered ExtensionsAPIProvider.

There is also a method called AddAPIJSONSources, where the *_features.json files are expected to be loaded into a JSONFeatureProviderSource. To be able to do that, the files should have been included and packed as resources in one of the shell’s .grd files.

The ExtensionsAPIProvider class also proxies the json files that define the extended JS API features. These files are located together with the *_features.json files. The methods GetAPISchema and IsAPISchemaGenerated provide access to them.

If you are creating a custom shell, a correct file should be placed together with the aforementioned json files, to generate the necessary code based on them. A production-level example is located at ./extensions/common/api/, or a simpler one at ./extensions/shell/common/api/ for the app_shell.

The ExtensionsRendererClient contains hooks for specific events in the life of the renderer process. It’s also the place where a specific delegate for the Dispatcher class is created, if needed. In this context, the call to RenderThreadStarted is interesting: that is where an instance of ChromeExtensionsDispatcherDelegate is created. The Chrome implementation of this delegate has many purposes, but in the context of the extension system, we are interested in the methods PopulateSourceMap, where any custom bindings can be registered, and RequireAdditionalModules, which can require specific modules from the module system.

Dispatcher (Delegate) class diagram

The DispatcherDelegate mainly has the purpose of fine-tuning the extension system by:
* Registering native handlers in the module system in RegisterNativeHandlers
* Injecting custom JS bindings and other JS sources in PopulateSourceMap
* Binding delegate classes to any extension API that needs them in InitializeBindingsSystem
* Requiring specific modules from the module system in RequireAdditionalModules. Otherwise, modules are required by extensions, according to the permissions in their manifest, when they are enabled.

The specific case of ChromeExtensionsDispatcherDelegate implements the first three methods, but not RequireAdditionalModules. The latter could be implemented in a setup where extension APIs are expected to be used outside the context of an extension and, for that reason, there is no manifest listing them.

Browser process

In the browser process, there is a related interface called ExtensionsBrowserClient, used by extensions to make queries specific to the browser process. In the case of Chrome, the implementation is called ChromeExtensionsBrowserClient and it has knowledge of profiles, browser contexts and incognito mode.

ExtensionsBrowserClient class diagram

Similarly to the ExtensionsClient in the render process, the ExtensionsBrowserClient maintains a list of ExtensionsBrowserAPIProvider objects. The ChromeExtensionsBrowserClient instantiates two of them, CoreExtensionsBrowserAPIProvider and ChromeExtensionsBrowserAPIProvider, and adds them to the list to be used by the function RegisterExtensionFunctions, which calls a method of the same name for every registered ExtensionsBrowserAPIProvider.

These objects are just expected to implement that method, which populates the registry with the functions declared by the API in that provider. This registry links the API with the actual native functions that implement it. From the point of view of ExtensionsBrowserAPIProvider::RegisterExtensionFunctions, implementations just have to call GeneratedFunctionRegistry::RegisterAll(), which should have been generated from the json definition of the API.

A cheatsheet to define new extension APIs in custom shells

This is a summary of all the interfaces that must be implemented in a custom shell, and which files must be added or modified, to define new extension APIs.

  • Write your custom _api_features.json and a .json or .idl definition of the API.
  • Write your build file so that it triggers the necessary code generators for those files.
  • Add your _api_features.json as a resource of type “BINDATA” to one of the shell’s .grd files.
  • Implement your own ExtensionsBrowserAPIProvider.
  • Implement your own ExtensionsBrowserClient and add the corresponding ExtensionsBrowserAPIProvider.
  • Implement your own ExtensionsAPIProvider.
  • Implement your own ExtensionsClient and add the corresponding ExtensionsAPIProvider.
  • Implement your own DispatcherDelegate.
  • Implement your own ExtensionsRendererClient and set the corresponding delegate to the Dispatcher.
  • Implement your API extending UIThreadExtensionFunction for every extension function you declared.
  • If the API is meant to be used in webpage-like contexts, it must be added to kWebAvailableFeatures in ExtensionBindingsSystem.

The contents of this post are available as a live document here. Comments and updates are welcome.

by Jacobo Aragunde Pérez at November 27, 2019 05:00 PM

November 25, 2019

Nikolas Zimmermann

Back in town

Welcome to my blog!

Finally I’m back after my long detour to physics :-)

Some of you might know that my colleague Rob Buis and I founded the ksvg project a little more than 18 years ago (announcement mail to kfm-devel), and we met again after many years in Galicia last month.

by (Nikolas Zimmermann) at November 25, 2019 12:00 AM

November 13, 2019

Manuel Rego

Web Engines Hackfest 2019

A month ago Igalia hosted another edition of the Web Engines Hackfest in our office in A Coruña. This is my personal summary of the event, obviously biased as I’m part of the organization.


During the event we arranged six talks about a variety of topics:

Emilio Cobos during his talk at the Web Engines Hackfest 2019 Emilio Cobos during his talk at the Web Engines Hackfest 2019

Web Platform Tests (WPT)

Apart from the talks, the main and most important part of the hackfest (at least from my personal point of view) are the breakout sessions and discussions we organize about different interest topics.

During one of these sessions we talked about the status of things regarding WPT. WPT is working really well for Chromium and Firefox; however, WebKit is still lagging behind, as synchronization is still manual there. Let’s hope things will improve in the future on the WebKit side.

We also highlighted that the number of dynamic tests on WPT is lower than expected; we ruled out issues with the infrastructure and think that the problem lies more with the people writing the tests, who somehow forget to cover cases where things change dynamically.

Apart from that, James Graham put on the table the results from the last MDN survey, which showed interoperability as one of the most important issues for web developers. WPT is helping with interop, but despite the improvements in that regard this is still a big problem for authors. We didn’t have any good answer on how to fix that; in my case, I shared some ideas that could help to improve things at some point:

  • Mandatory tests for specs: This is already happening for some specs like HTML but not for all of them. If we manage to reach a point where every change to a spec comes with a test, interoperability of initial implementations will probably be much better. It’s easy to understand why this is not happening, as people working on specs are usually very overloaded.
  • Common forum to agree on shipping features: This is a kind of utopia, as each company has its own priorities, but if we had a place where the different browser vendors could talk in order to reach an agreement about when to ship a feature, that would make web authors’ lives much easier. We somehow managed to do that when we shipped CSS Grid Layout almost simultaneously in the different browsers; if we could repeat that success story for more features in the future, that would be awesome.

Debugging tools

One of the afternoons we did a breakout session related to debugging tools.

First Christian Biesinger showed us JdDbg which is an amazing tool to explore data structures in the web browser (like the DOM, layout or accessibility trees). All the information is updated live while you debug your code, and you can access all of them on a single view very comfortably.

Afterwards Emilio Cobos explained how to use the reverse debugger rr. With this tool you can record a bug and then replay it as many times as you need, going back and forward in time. Emilio also showed how to annotate the output so you can jump directly to a given moment in time, and how to randomize the execution to help catch race conditions. As a result of this explanation we got a bug fixed in WebKitGTK+.


Regarding MathML, Fred’s talk ended with the sending of the intent-to-implement mail to blink-dev, officially announcing the beginning of the upstreaming process. Since then a bunch of patches have already landed behind a runtime flag; you can follow the progress on Chromium issue #6606 if you’re interested.

On the last day, a few of us attended the CSS Working Group conference call from the hackfest, which served as a test of the Igalia office’s infrastructure ahead of the face-to-face meeting we’ll be hosting next January.

People attending the CSSWG confcall (from left to right: Oriol, Emilio, fantasai, Christian and Brian)

As a side note, this time we arranged a guided city tour around A Coruña and, despite the weather, people seemed to have enjoyed it.


Thanks to everyone coming, we’re really happy for the lovely feedback you always share about the event, you’re so kind! ☺

Of course, kudos to the speakers for the effort working on such a nice set of interesting talks. 👏

Last, but not least, big thanks to the hackfest sponsors: Arm, Google, Igalia and Mozilla. Your support is critical to make this event possible, you rock. 🎸

Web Engines Hackfest 2019 sponsors: Arm, Google, Igalia and Mozilla

See you all next year. Some news about the next edition will be announced very soon, stay tuned!

November 13, 2019 11:00 PM

October 28, 2019

Adrián Pérez

The Road to WebKit 2.26: a Six Month Retrospective

Now that version 2.26 of both WPE WebKit and WebKitGTK ports have been out for a few weeks it is an excellent moment to recap and take a look at what we have achieved during this development cycle. Let's dive in!

  1. New Features
  2. Security
  3. Cleanups
  4. Releases, Releases!
  5. Buildbot Maintenance
  6. One More Thing

New Features

Emoji Picker

The GTK emoji picker has been integrated in WebKitGTK, and can be accessed with Ctrl-Shift-; while typing on input fields.

GNOME Web showing the GTK emoji picker.

Data Lists

WebKitGTK now supports the <datalist> HTML element (reference), which can be used to list possible values for an <input> field. Form fields using data lists are rendered as a hybrid between a combo box and a text entry with type-ahead filtering.

GNOME Web showing an <input> entry with completion backed by <datalist>.

WPE Renderer for WebKitGTK

The GTK port now supports reusing components from WPE. While there are no user-visible changes, with many GPU drivers a more efficient buffer sharing mechanism—which takes advantage of DMA-BUF, if available—is used for accelerated compositing under Wayland, resulting in better performance.

Packagers can disable this feature at build time passing -DUSE_WPE_RENDERER=OFF to CMake, which could be needed for systems which cannot provide the needed libwpe and WPEBackend-fdo libraries. It is recommended to leave this build option enabled, and it might become mandatory in the future.

In-Process DNS Cache

Running a local DNS caching service avoids doing queries to your Internet provider’s servers when applications need to resolve the same host names over and over—something web browsers do! This results in faster browsing, saves bandwidth, and partially compensates for slow DNS servers.

Patrick and Carlos have implemented a small cache inside the Network Process which keeps a maximum of 400 entries in memory, each valid for 60 seconds. Even though it may not seem like much, this improves page loads because most of the time the resources needed to render a page are spread across a handful of hosts, and their cache entries will be reused over and over.
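The caching policy described above can be sketched in a few lines of Python. This is purely an illustration (the real cache is C++ code inside the Network Process); only the 400-entry and 60-second figures come from the post, everything else is made up for the sketch:

```python
import time

class DNSCache:
    """TTL cache sketch: at most `capacity` entries, each valid `ttl` seconds."""

    def __init__(self, capacity=400, ttl=60.0):
        self.capacity = capacity
        self.ttl = ttl
        self.entries = {}  # hostname -> (addresses, expiry timestamp)

    def lookup(self, hostname, resolve, now=time.monotonic):
        entry = self.entries.get(hostname)
        if entry and entry[1] > now():
            return entry[0]  # cache hit: no query leaves the process
        addresses = resolve(hostname)  # cache miss: ask the real resolver
        if len(self.entries) >= self.capacity:
            # evict the entry closest to expiring
            oldest = min(self.entries, key=lambda h: self.entries[h][1])
            del self.entries[oldest]
        self.entries[hostname] = (addresses, now() + self.ttl)
        return addresses
```

Because a page typically pulls resources from a handful of hosts, most lookups after the first hit the cache and return without touching the network.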

Promotional image of the “Gone in 60 Seconds” movie.
This image has nothing to do with DNS, except for the time entries are kept in the cache.

While it is certainly possible to run a full-fledged DNS cache locally (like dnsmasq or systemd-resolved, which many GNU/Linux setups have configured nowadays), WebKit can be used in all kinds of devices and operating systems which may not provide such a service. The caching benefits all kinds of systems, with embedded devices (where running an additional service is often prohibitive) benefiting the most, and therefore it is always enabled by default.


Security

Remember Meltdown and Spectre? During this development cycle we worked on mitigations against side channel attacks like these. They are particularly important for a Web engine, which can download and execute code from arbitrary servers.

Subprocess Sandboxing

Both WebKitGTK and WPE WebKit follow a multi-process architecture: at least there is the “UI Process”, an application that embeds WebKitWebView widget; the “Web Process” (WebKitWebProcess, WPEWebProcess) which performs the actual rendering, and the “Network Process” (WebKitNetworkProcess, WPENetworkProcess) which takes care of fetching content from the network and also manages caches and storage.

Patrick Griffis has led the effort to add support in WebKit for isolating the Web Process from the rest of the system, running it with restricted access to system resources. This is achieved using Linux namespaces—the same underlying building blocks used by containerization technologies like LXC, Kubernetes, or Flatpak. As a matter of fact, we use the same building blocks as Flatpak: Bubblewrap, xdg-dbus-proxy, and libseccomp. This not only makes it more difficult for a website to snoop on other processes' data: it also limits potential damage to the rest of the system caused by maliciously crafted content, because the Web Process is where most of that content is parsed and processed.

This feature is built by default, and using it in applications is only one function call away.


Process Swap on Navigation

Process Swap On (cross-site) Navigation is a new feature which makes it harder for websites to steal information from others: rendering of pages from different sites always takes place in different processes. In practice, each security origin uses a different Web Process (see above) for rendering, and while navigating from one page to another new processes will be launched or terminated as needed. Chromium's Site Isolation works in a similar way.
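Conceptually, the allocation policy looks something like this Python sketch (hypothetical names; WebKit's real logic is far more involved, handling process reuse, suspension, and teardown):

```python
class ProcessPool:
    """Sketch of per-security-origin Web Process allocation, in the spirit of PSON."""

    def __init__(self):
        self.by_origin = {}  # security origin -> process id
        self.next_pid = 1

    def process_for(self, origin):
        # Same-origin navigations reuse the existing Web Process;
        # cross-site navigations get a fresh one.
        if origin not in self.by_origin:
            self.by_origin[origin] = self.next_pid
            self.next_pid += 1
        return self.by_origin[origin]
```

The key property is that two different origins never share a process, so one site's renderer never holds another site's data in its address space.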

Unfortunately, the needed changes ended up breaking a few important applications which embed WebKitGTK (like GNOME Web or Evolution) and we had to disable the feature for the GTK port just before its 2.26.0 release—it is still enabled in WPE WebKit.

Our plan for the next development cycle is to keep the feature disabled by default, and to provide a way for applications to opt in. Unfortunately it cannot be done the other way around, because the public WebKitGTK API has been stable for a long time and we cannot afford to break backwards compatibility.


HTTP Strict Transport Security

This security mechanism helps protect websites against protocol downgrade attacks: Web servers can declare that clients must interact using only secure HTTPS connections, and never revert to using unencrypted HTTP.

During the last few months Claudio Saavedra has completed the support for HTTP Strict Transport Security in libsoup—our networking backend—and the needed support code in WebKit. As a result, HSTS support is always enabled.
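At its core, the mechanism amounts to remembering which hosts sent a Strict-Transport-Security header and upgrading later plain-HTTP requests to those hosts. A toy Python sketch of such a policy store (not libsoup's actual implementation; real HSTS also covers includeSubDomains, preload lists, and persistent expiry):

```python
from urllib.parse import urlsplit, urlunsplit

class HSTSPolicyStore:
    """Toy sketch: remember hosts that declared Strict-Transport-Security."""

    def __init__(self):
        self.hosts = set()

    def process_response_header(self, host, header_value):
        # header_value is e.g. "max-age=31536000; includeSubDomains"
        for directive in (d.strip() for d in header_value.split(";")):
            if directive.startswith("max-age="):
                if int(directive[len("max-age="):]) > 0:
                    self.hosts.add(host)
                else:
                    self.hosts.discard(host)  # max-age=0 clears the policy

    def rewrite(self, url):
        parts = urlsplit(url)
        if parts.scheme == "http" and parts.hostname in self.hosts:
            return urlunsplit(("https",) + parts[1:])
        return url
```

Once a host is in the store, the client never issues an unencrypted request to it again until the policy expires, which is what defeats downgrade attacks.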

New WebSockets Implementation

The WebKit source tree includes a cross-platform WebSockets implementation that the GTK and WPE ports have been using. While great for new ports wanting to support the feature, it is far from optimal: we were duplicating network code, because libsoup implements WebSockets as well.

Now that HSTS support is in place, Claudio and Carlos decided it was a good moment to switch to libsoup's implementation, so WebSockets can now also benefit from it. This also made it possible to provide the RFC 7692 permessage-deflate extension, which enables applications to request compression of message payloads.

To ensure that no regressions would be introduced, Claudio also added support in libsoup for running the Autobahn 🛣 test suite, which resulted in a number of fixes.


Cleanups

During this release cycle we have deprecated the single Web Process mode, and trying to enable it using the API is now a no-op. The motivation for this is twofold: in the same vein as PSON and sandboxing, we would rather not allow applications to make side channel attacks easier; not to mention that the changes needed in the code to accommodate PSON would make it extremely complicated to keep the existing API semantics. As this could potentially cause trouble for some applications, we have been in touch with packagers, supporting them as best we can to ensure that the new WebKitGTK versions can be adopted without regressions.

Another important removal was the support for GTK2 NPAPI browser plug-ins. Note that NPAPI plug-ins are still supported, but if they use GTK they must use version 3.x—otherwise they will not be loaded. The reason for this is that GTK2 cannot be used in a program which uses GTK3, and vice versa. To circumvent this limitation, in previous releases we were building some parts of the WebKit source code twice, each one using a different version of GTK, resulting in two separate binaries: we have only removed the GTK2 one. This allowed for a good clean up of the source tree, reduced build times, and killed one build dependency. With NPAPI support being sunsetted in all browsers, the main reason to keep some degree of support for it is the Flash plug-in. Sadly its NPAPI version uses GTK2 and it does not work starting with WebKitGTK 2.26.0; on the other hand, it is still possible to run the PPAPI version of Flash through FreshPlayerPlugin if needed.

Releases, Releases!

Last March we released WebKitGTK 2.24 and WPE WebKit 2.24 in sync, and the same for the current stable, 2.26. As a matter of fact, most releases since 2.22 have been done in lockstep and this has been working extremely well.

Hannibal Smith, happy about the simultaneous releases.

Both ports share many of their components, so it makes sense to stabilize and prepare them for a new release series at the same time. Many fixes apply to both ports, and the few that do not hardly add any noise to the branch. This allows Carlos García and me to split the effort of backporting fixes to the stable branch as well—though I must admit that Carlos has often done more.

Buildroot ♥ WebKit

Those using Buildroot to prepare software images for various devices will be happy to know that packages for the WPE WebKit components were imported into the source tree a while ago, and have been available since the 2019.05 release.

Two years ago I dusted off the webkitgtk package, bringing it up to the most recent version at the time, keeping up with updates and over time I have been taking care of some of its dependencies (libepoxy, brotli, and woff2) as well. Buildroot LTS releases are now receiving security updates, too.

Last February I had a great time meeting some of the Buildroot developers during FOSDEM, where we had the chance of discussing in person how to go about adding WPE WebKit packages to Buildroot. This ultimately resulted in the addition of the libwpe, wpebackend-fdo, and cog packages to the tree.

My plan is to keep maintaining the Buildroot packages for both WebKit ports. I also have a few improvements in the pipeline, like enabling sandboxing support (see this patch set) and using the WPE renderer in the WebKitGTK package.

Buildbot Maintenance

Breaking the Web is not fun, so WebKit needs extensive testing. The source tree includes tens of thousands of tests which are used to avoid regressions, and those are run on every commit using Buildbot. The status can be checked at

Additionally, there is another set of builders which run before a patch has had the chance of being committed to the repository. The goal is to catch build failures and certain kinds of programmer errors as early as possible, ensuring that the source tree is kept “green”—that is: buildable. This is the EWS, short for Early Warning System, which trawls Bugzilla for new—or updated—patches, schedules builds with them applied, and adds a set of status bubbles next to them in Bugzilla. Igalia also contributes EWS builders.

EWS bot bubbles as shown in Bugzilla
For each platform the EWS adds a status bubble after trying a patch.

Since last April there has been an ongoing effort to revamp the EWS infrastructure, which is now based on Buildbot as well. Carlos López recently updated our machines to Debian Buster, and then I switched them to the new EWS, which brings niceties in the user interface like being able to conveniently check the status for the GTK and the WPE WebKit ports in real time. Most importantly, this change has brought the average build time down from thirteen minutes to eight, making the “upload patch, check EWS build status” cycle shorter for developers.

Big props to Aakash Jain, who has been championing all the EWS improvements.

One More Thing

Finally, I would like to extend our thanks to everybody who has contributed to WebKit during the 2.26 development cycle, and in particular to the Igalia Multimedia team, who have been hard at work improving our WebRTC support and the GStreamer back-end 🙇.

by aperez ( at October 28, 2019 12:30 AM

October 24, 2019

Xabier Rodríguez Calvar

VCR to WebM with GStreamer and hardware encoding

Many years ago my family bought a Panasonic VHS video camera, and we recorded quite a lot of things: holidays, some local shows, etc. I even got paid 5000 pesetas (30€, more than 20 years ago) a couple of times to record weddings in an amateur way. Since my father passed away less than a year ago, I have wanted to convert those VHS tapes into something that can survive better, technologically speaking.

For the job I bought a USB 2.0 dongle and connected it to a VHS VCR through a SCART to RCA cable.

The dongle creates a V4L2 device for video and is detected by PulseAudio for audio. As I want to see what I am converting live, I need to tee both audio and video to the corresponding sinks, while the other branch goes to the encoders, muxer and filesink. The command line for that would be:

gst-launch-1.0 matroskamux name=mux ! filesink location=/tmp/test.webm \
v4l2src device=/dev/video2 norm=255 io-mode=mmap ! queue ! vaapipostproc ! tee name=video_t ! \
queue ! vaapivp9enc rate-control=4 bitrate=1536 ! mux.video_0 \
video_t. ! queue ! xvimagesink \
pulsesrc device=alsa_input.usb-MACROSIL_AV_TO_USB2.0-02.analog-stereo ! 'audio/x-raw,rate=48000,channels=2' ! tee name=audio_t ! \
queue ! pulsesink \
audio_t. ! queue ! vorbisenc ! mux.audio_0

As you can see, I convert to WebM with VP9 and Vorbis. Some interesting details: passing norm=255 to the v4l2src element so it captures PAL, and rate-control=4 (VBR) to the vaapivp9enc element; otherwise it uses cqp by default and the file size ends up being huge.

You can see the pipeline, which is beautiful, here:

As you can see, we’re using vaapivp9enc here, which is hardware accelerated. Having this pipeline running on my computer was consuming more or less 20% of CPU, with the machine absolutely relaxed, leaving me the necessary computing power for my daily work. This would not be possible without GStreamer and the GStreamer VAAPI plugins, unlike other solutions whose instructions you can find online.

If for some reason you can’t find vaapivp9enc in Debian, you should know there are a couple of packages for the Intel drivers, and the one you should install is intel-media-va-driver. Thanks go to my colleague at Igalia Víctor Jáquez, who maintains gstreamer-vaapi and helped me solve this problem.

My workflow for this was converting all tapes into WebM and then cutting them into the different relevant pieces with PiTiVi, which runs on GStreamer Editing Services, both co-maintained by my colleague at Igalia, Thibault Saunier.

by calvaris at October 24, 2019 10:27 AM

Brian Kardell

ES Private Class Features: 2 Minute Standards

A number of proposals come together to provide 'private' versions of most of the class-related concepts from ES6. Let's have a #StandardsIn2Min oriented look...

The new private class features (fields and methods, both instance and static) all use a new # sigil to denote private-ness. These offer compile-time guarantees that only the code in this class, or instances of this class, can access these things. In defining and using a private property, for example, you could write:

class Point {
  #x = 0
  #y = 0
  constructor(x = 0, y = 0) {
    this.#x = x
    this.#y = y
  }
  toString() {
    return `[${this.#x}, ${this.#y}]`
  }
}

let p = new Point(1, 2)

The result is that you can now create a Point object which encapsulates the actual maintenance of these variables. These private properties are not accessible from the outside, as we'll see.

The # sigil and 'private space'

The very new and perhaps most important bit to understand is that the name of private things must begin with the # sigil, and its use is symmetrical. That is - in order to declare a class property as private, its name must begin with a #. To access it again later, you'll use the name beginning with # again, as in this.#x. Attempts to access it from the outside will throw.

console.log(p.x)  // x isn't #x: it will log undefined

console.log(p.#x) // #x used from an instance outside throws:
// > Uncaught SyntaxError: Undefined private field undefined: must be declared in an enclosing class

What's subtle but important about this is that the symmetrical use of the sigil lets it be known unambiguously to both compilers and readers whether we intended to access or set something in public space or private space. This also means that while you can choose the same readable characters, you can't actually wind up with properties with the same names in both spaces: the private ones will always include the #.

This makes a lot of sense because, for example, the names that you choose for public fields should not be dictated by the internal, private fields of your super class. The following then is entirely possible and valid:

class X {
  name = 'Brian'
  #name = 'NotBrian'
  constructor() {
    console.log(this.name)  // Brian
    console.log(this.#name) // NotBrian
  }
}

let x = new X()

console.log(x.name) // Brian
// (attempting to access x.#name throws)

Since the name simply begins with the sigil, it's tempting to think that you can access private fields dynamically, for example, via [] notation. However, you can't. You also cannot access private fields of a Proxy target through the Proxy, they aren't accessible through property descriptors or Object.keys or anything like that. This is by design as doing so, would be self-defeating, making things available outside the class as well (as you can read about in the FAQ).

Why the #, specifically?

Why this particular character was chosen was very widely discussed, and lots of options were weighed. You can read more about this in the FAQ as well, but the simplest answer is: That's the one, single character option, which we could make work without fouling up something else.

All the private things

Once you understand the above, the rest flows pretty naturally. We get private methods, which are pretty self-explanatory:

class Counter extends HTMLElement {
  #handleClicked() {
    // do complex internal work
  }

  connectedCallback() {
    // private methods are not writable, so bind via an arrow function
    this.addEventListener('click', (e) => this.#handleClicked(e))
  }
}

and private static fields too...

class Colors {
  static #red = "#ff0000";
  static #green = "#00ff00";
  static #blue = "#0000ff";
}

Learn More

This is, of course, merely a 2 minute introduction. To learn a lot more about this, including its history and the rationale behind design choices, find links to the spec draft, and much more, check out the TC39 class fields proposal repository.

October 24, 2019 04:00 AM

October 18, 2019

Lauro Moura

WebAssembly Runtimes - WASI and Wasmtime

After a while (and a release at work), here we are again. To continue the series, we will take a quick look at wasmtime and WASI.


Wasmtime

Wasmtime is a "standalone wasm-only runtime for WebAssembly using the Cranelift JIT". It is part of CraneStation, an "umbrella" project for compiler tools in which Mozilla is one of (if not the main) parties involved.

The Cranelift JIT is somewhat similar to LLVM. Other runtimes like Wasmer (which we will talk about in a future post) can switch between Cranelift and LLVM as the backend used to generate code.

But before we go into more wasmtime details, it is important to talk about how we will be interacting with these runtimes.


WASI

As I said in the first post, portability is an important goal of WebAssembly. Not only running a given .wasm file anywhere, but also WebAssembly providing an (as much as possible) portable interface to build programs on top of. This led to the development of the WebAssembly System Interface - WASI (here is a great post by the great Lin Clark about what a system interface is and the why, what, how and who of developing one for WebAssembly).

While system interfaces like POSIX have served us well for decades, the requirements for a truly universal system interface in the context of WASM are stricter. This is because, like a webpage in the browser, you could be running WASM programs from any source, and this demands a new approach to security.

WASI is thus being developed with a capability-oriented approach. Instead of the POSIX way, where a program inherits all the access rights of its user, WASI behaves more like installing an app on your phone: each time a WASM program tries to access a restricted resource, the runtime checks whether the user granted access to that resource. We will see more details later in a WASI example.
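The runtime-side check can be pictured with a small Python sketch (purely illustrative, not wasmtime code): the runtime only allows opening paths under directories the user explicitly granted, for example via a --dir flag:

```python
import os

class CapabilityError(Exception):
    pass

class Runtime:
    """Sketch of capability-checked file access, as a WASI runtime might do it."""

    def __init__(self, granted_dirs):
        # Directories the user pre-opened, e.g. with `--dir=.`
        self.granted = [os.path.abspath(d) for d in granted_dirs]

    def check_open(self, path):
        resolved = os.path.abspath(path)
        for d in self.granted:
            if resolved == d or resolved.startswith(d + os.sep):
                return resolved  # capability held: access allowed
        raise CapabilityError("capabilities insufficient for " + path)
```

A program started with no granted directories simply cannot open any file, no matter what code it contains, which is the whole point of the model.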

To begin, we need to install the wasi-sdk that will make it easier for us to create WASI-ready wasm files. It is a clang-based toolchain that is also part of the CraneStation initiative. There are packages available in Github. In my Ubuntu install, the executables were installed to /opt/wasi-sdk/bin. The tutorial for wasi-sdk is here.

Getting wasmtime

The easiest way to get wasmtime is by running their installer with the command below. It will install the runtime to $HOME/.wasmtime/bin and automatically set up the profile scripts to add it to the $PATH when you open a new terminal. Don't forget to re-add the wasi-sdk bin folder to PATH in this new terminal.

$ curl -sSf | bash

As expected, a hello world in wasm/WASI is the same as on the desktop:

#include <stdio.h>

int main(void)
{
    printf("Hello, world!\n");
    return 0;
}

To compile, you use the wasi-sdk clang just like you would with a regular C compiler:

$ clang hello.c -o hello.wasm

And to run, just call the wasmtime executable:

$ wasmtime hello.wasm
Hello, world!

That's it. You should have the message printed in your terminal. WASI allows the standard output and error descriptors (stdout and stderr) to be freely accessed.

Now, what if we try to access files? For example, let's implement a cat program: it just reads a file and prints it to the terminal. Save the program below to a file named cat.c.

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

int main(int argc, char* argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <input file>\n", argv[0]);
        return 1;
    }

    int n = 0;
    char buf[BUFSIZ];

    int f = open(argv[1], O_RDONLY);
    if (f < 0) {
        fprintf(stderr, "error opening file %s: %s\n", argv[1], strerror(errno));
        return 1;
    }

    while ((n = read(f, buf, BUFSIZ)) > 0) {
        printf("%.*s", n, buf);
    }

    if (n < 0) {
        fprintf(stderr, "read error: %s\n", strerror(errno));
        return 1;
    }

    close(f);
    return 0;
}

Like the previous example, compile...

$ clang cat.c -o cat.wasm

And run...

$ wasmtime cat.wasm hello.c
error opening file hello.c: Capabilities insufficient

Oops. It looks like reading files is a restricted operation, which makes sense, given that files can hold sensitive information. So, how do we grant access to them? With wasmtime, just use the --dir=<path> option:

$ wasmtime --dir=. cat.wasm hello.c
#include <stdio.h>...


This was just a gentle introduction to using wasi-sdk and wasmtime to play with WASI-based WebAssembly programs. In the next post, we will see how to embed wasmtime into other programs and do more complex interactions.

by Lauro Moura at October 18, 2019 02:54 AM

October 08, 2019

Andy Wingo

thoughts on rms and gnu

Yesterday, a collective of GNU maintainers publicly posted a statement advocating collective decision-making in the GNU project. I would like to expand on what that statement means to me and why I signed on.

For many years now, I have not considered Richard Stallman (RMS) to be the head of the GNU project. Yes, he created GNU, speaking it into existence via prophetic narrative and via code; yes, he inspired many people, myself included, to make the vision of a GNU system into a reality; and yes, he should be recognized for these things. But accomplishing difficult and important tasks for GNU in the past does not grant RMS perpetual sovereignty over GNU in the future.

ontological considerations

More on the motivations for the non serviam in a minute. But first, a meta-point: the GNU project does not exist, at least not in the sense that many people think it does. It is not a legal entity. It is not a charity. You cannot give money to the GNU project. Besides the manifesto, GNU has no by-laws or constitution or founding document.

One could describe GNU as a set of software packages that have been designated by RMS as forming part, in some way, of GNU. But this artifact-centered description does not capture movement: software does not, by itself, change the world; it lacks agency. It is the people that maintain, grow, adapt, and build the software that are the heart of the GNU project -- the maintainers of and contributors to the GNU packages. They are the GNU of whom I speak and of whom I form a part.

wasted youth

Richard Stallman describes himself as the leader of the GNU project -- the "chief GNUisance", he calls it -- but this position only exists in any real sense by consent of the people that make GNU. So what is he doing with this role? Does he deserve it? Should we consent?

To me it has been clear for many years that to a first approximation, the answer is that RMS does nothing for GNU. RMS does not write software. He does not design software, or systems. He does hold a role in accepting new projects into GNU; there, his primary criterion is not "does this make a better GNU system"; it is, rather, "does the new project meet the minimum requirements".

By itself, this seems to me to be a failure of leadership for a software project like GNU. But unfortunately, when RMS's role in GNU isn't neglect, more often than not it's negative. RMS's interventions are generally conservative -- to assert authority over the workings of the GNU project, to preserve ways of operating that he sees as important. See for example the whole glibc abortion joke debacle as an example of how RMS acts, when he chooses to do so.

Which, fair enough, right? I can hear you saying it. RMS started GNU so RMS decides what it is and what it can be. But I don't accept that. GNU is about practical software freedom, not about RMS. GNU has long outgrown any individual contributor. I don't think RMS has the legitimacy to tell this group of largely volunteers what we should build or how we should organize ourselves. Or rather, he can say what he thinks, but he has no dominion over GNU; he does not have majority sweat equity in the project. If RMS actually wants the project to outlive him -- something that by his actions is not clear -- the best thing that he could do for GNU is to stop pretending to run things, to instead declare victory and retire to an emeritus role.

Note, however, that my personal perspective here is not a consensus position of the GNU project. There are many (most?) GNU developers that still consider RMS to be GNU's rightful leader. I think they are mistaken, but I do not repudiate them for this reason; we can work together while differing on this and other matters. I simply state that I, personally, do not serve RMS.

selective attrition

Though the "voluntary servitude" questions are at the heart of the recent joint statement, I think we all recognize that attempts at self-organization in GNU face a grave difficulty, even if RMS decided to retire tomorrow, in the way that GNU maintainers have selected themselves.

The great tragedy of RMS's tenure in the supposedly universalist FSF and GNU projects is that he behaves in a way that is particularly alienating to women. It doesn't take a genius to conclude that if you're personally driving away potential collaborators, that's a bad thing for the organization, and actively harmful to the organization's goals: software freedom is a cause that is explicitly for everyone.

We already know that software development in people's free time skews towards privilege: not everyone has the ability to devote many hours per week to what is for many people a hobby, and it follows of course that those that have more privilege in society will be more able to establish a position in the movement. And then on top of these limitations on contributors coming in, we additionally have this negative effect of a toxic culture pushing people out.

The result, sadly, is that a significant proportion of those that have stuck with GNU don't see any problems with RMS. The cause of software freedom has always run against the grain of capitalism so GNU people are used to being a bit contrarian, but it has also had the unfortunate effect of creating a cult of personality and a with-us-or-against-us mentality. For some, only a traitor would criticise the GNU project. It's laughable but it's a thing; I prefer to ignore these perspectives.

Finally, it must be said that there are a few GNU people for whom it's important to check if the microphone is on before making a joke about rape culture. (Incidentally, RMS had nothing to say on that issue; how useless.)

So I honestly am not sure if GNU as a whole effectively has the demos to make good decisions. Neglect and selective attrition have gravely weakened the project. But I stand by the principles and practice of software freedom, and by my fellow GNU maintainers who are unwilling to accept the status quo, and I consider attempts to reduce GNU to founder-loyalty to be mistaken and without legitimacy.

where we're at

Given this divided state regarding RMS, the only conclusion I can make is that for the foreseeable future, GNU is not likely to have a formal leadership. There will be affinity groups working in different ways. It's not ideal, but the differences are real and cannot be papered over. Perhaps in the medium term, GNU maintainers can reach enough consensus to establish a formal collective decision-making process; here's hoping.

In the meantime, as always, happy hacking, and: no gods! No masters! No chief!!!

by Andy Wingo at October 08, 2019 03:34 PM

October 02, 2019

Paulo Matos

A Brief Look at the WebKit Workflow

As I learned about the workflow for contributing to JSC (JavaScriptCore, WebKit's JavaScript engine), I took a few notes along the way. I decided to write them up as a post in the hope that they are useful for you as well. If you use git and a Unix-based system, and want to start contributing to WebKit, keep on reading.


by Paulo Matos at October 02, 2019 03:56 PM

September 28, 2019

Lauro Moura

WebAssembly runtimes overview - intro

Write once, run anywhere - the Holy Grail of Portability

Portability has long been one of the holy grails of computing. In the '90s, Java promised "write once, run anywhere" (WORA). But while that worked for simple programs, you would quickly find yourself fighting the details of the different OSes, devices, and versions you had to support. WORA then quickly became WODE (write once, debug everywhere). Another issue was that you ended up depending on the JVM being available on the target, and on programming mainly in Java or other JVM languages like - at the time - Groovy, JRuby, etc.

Fast forward to the 2010s, and another "Java" - I know, I know - is what has gotten us closest to the WORA world so far. JavaScript slowly overcame its hurdles, going from being treated as something of a joke to becoming a - slightly more - respectable language and the universal programming language of the Web.

But we are still largely tied to a single language. Transpilers try to mitigate this, but you may need to develop systems for the Web in programming languages completely different from JavaScript - to reuse existing components, for example.

Enter WebAssembly

All this led to the creation of WebAssembly. It is a compact binary format aimed at fast execution and small binary sizes. Like JavaScript, it runs sandboxed, providing security features that allow running code from the Web without - mostly - putting your system at risk. It already allows running C, C++, Rust, and many other languages (thanks, LLVM) "anywhere".
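To make the "compact binary format" point concrete: every wasm module begins with an 8-byte header, the magic bytes `\0asm` followed by a little-endian 32-bit version number (currently 1). Here is a minimal sketch in plain Python that checks that header; the module bytes are hand-written for illustration.

```python
import struct

def parse_wasm_header(data: bytes) -> int:
    """Check the magic bytes of a wasm module and return its version."""
    if len(data) < 8:
        raise ValueError("too short to be a wasm module")
    magic, version = struct.unpack("<4sI", data[:8])
    if magic != b"\x00asm":
        raise ValueError("missing \\0asm magic: not a wasm module")
    return version

# The smallest valid module is just the 8-byte header (an empty module).
empty_module = b"\x00asm\x01\x00\x00\x00"
print(parse_wasm_header(empty_module))  # → 1
```

Everything after this header is a sequence of typed, length-prefixed sections (types, functions, code, etc.), which is what keeps the format both compact and fast to validate.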

The quotes are because, unfortunately, you still have to deal with the details of wherever you run. To reduce that burden, there is ongoing work to standardize the basic system operations in a common interface: WASI, the WebAssembly System Interface.

So, what now?

Sometimes you need portability without all the bloat of HTML/CSS/JS parsing, rendering, and so on. This need led to the creation of platforms like Docker, where you can reuse software written in practically any language and run it anywhere the Docker engine is available.

While the main showcase of WebAssembly was initially the web browser, there is exciting new work on using it outside the browser too. As Solomon Hykes (of Docker itself) put it, if WASM and WASI had existed in 2008, there would have been no need to create Docker.

This series of posts is intended to be a quick overview of some of the main tools that allow running WebAssembly outside the browser.

The initial list of runtimes we'll be checking:

We'll cover the scenarios each runtime tries to address, from running standalone wasm binaries to embedding the runtime in an actual application.

A more extensive list of WebAssembly runtimes can be found on GitHub.

So, on to the runtimes.

by Lauro Moura at September 28, 2019 03:30 AM