Planet Igalia Chromium

January 29, 2019

Mario Sanchez Prada

Working on the Chromium Servicification Project

It’s been a few months already since I (re)joined Igalia as part of its Chromium team, and I couldn’t be happier about it: from the very first day I felt perfectly integrated into the team, and I quickly started making my way through the -fully upstream- project that would keep me busy during the following months: the Chromium Servicification Project.

But what is this “Chromium servicification project”? Well, according to the Wiktionary, the word “servicification” means, applied to computing, “the migration from monolithic legacy applications to service-based components and solutions”, which is exactly what this project is about: as described on the Chromium servicification project’s website, the whole purpose behind this idea is “to migrate the code base to a more modular, service-oriented architecture”, in order to “produce reusable and decoupled components while also reducing duplication”.

Doing so would not only make Chromium a more manageable project from a source code point of view and create better and more stable interfaces for embedding Chromium from different projects, but it should also enable teams to experiment with new features by combining these services in different ways, as well as to ship different products based on Chromium without having to bundle the whole world just to provide a particular set of features.

For instance, as Camille Lamy put it in the talk delivered (slides here) during the latest Web Engines Hackfest, “it might be interesting long term that the user only downloads the bits of the app they need so, for instance, if you have a very low-end phone, support for VR is probably not very useful for you”. This is of course not the current state of things yet (right now everything is bundled into a big executable), but it’s still a good way to visualise where this idea of moving to a service-oriented architecture should take us in the long run.

With this in mind, the idea behind this project is to migrate the different parts of Chromium that depend on the components being converted into services, which will become part of a “foundation” base layer providing the core services that any application, framework or runtime built on top of Chromium would need.

As you can imagine, the whole idea of refactoring such an enormous code base as Chromium’s is daunting and a lot of work, especially considering that ongoing efforts can’t simply be stopped just to perform this migration, and that is where our focus currently lies: we integrate with the different teams from the Chromium project working on converting those components into services, and we make sure that the clients of their old APIs move away from them and use the new services’ APIs instead, while keeping everything running normally in the meantime.

At the beginning we started working on the migration to the Network Service (which allows running Chromium’s network stack even without a browser) and managed to get it shipped in Chromium Beta by early October, which was a pretty big deal as far as I understand. In my particular case that stage was a very short ride, since the migration was nearly done by the time I joined Igalia, but it is still worth mentioning for extra context, due to the impact it had on the project.

After that, our team started working on the migration of the Identity service, where the main idea is to encapsulate the functionality of accessing the user’s identities right through this service, so that one day this logic can be run outside of the browser process. One interesting bit about this migration is that this particular functionality (largely implemented inside the sign-in component) has historically been located quite high up in the stack, and yet it’s now being pushed all the way down into that “foundation” base layer, as a core service. That’s probably one of the factors contributing to making this migration quite complicated, but everyone involved is being very dedicated and has been very helpful so far, so I’m confident we’ll get there in a reasonable time frame.

If you’re curious enough, though, you can check this status report for the Identity service, where you can see the evolution of this particular migration, along with the impact our team has had since we started working on this part back in early October. There are more reports and more information in the mailing list for the Identity service, so feel free to check it out and/or subscribe there if you like.

One clarification is needed, though: for now, the scope of these migrations is focused on using the public C++ APIs that such services expose (see //services/<service_name>/public/cpp), but in the long run the idea is that those services will also provide Mojo interfaces. That will enable using their functionality regardless of whether those services run as part of the browser’s process or inside their own separate processes, which will in turn give Chromium the flexibility it needs to run smoothly and safely in different kinds of environments, from the least constrained ones to others with a less favourable set of resources at their disposal.

And this is it for now, I think. I was really looking forward to writing a status update about what I’ve been up to in the past months and here it is, even though it’s not the shortest of all reports.

One last thing, though: as usual, I’m going to FOSDEM this year as well, along with a bunch of colleagues & friends from Igalia, so please feel free to drop me/us a line if you want to chat and/or hang out, either to talk about work-related matters or anything else really.

And, of course, I’d also be more than happy to talk about any of the open job positions at Igalia, should you consider applying. There are quite a few of them available at the moment for all kinds of things (most of them available for remote work): from more technical roles such as graphics, compilers, multimedia, JavaScript engines, browsers (WebKit, Chromium, Web Platform) or systems administration (this one not available for remote, though), to other less “hands-on” types of roles like developer advocate, sales engineer or project manager, so it’s possible there’s something interesting for you if you’re considering joining such a special company as this one.

See you at FOSDEM!

by mario at January 29, 2019 06:35 PM

January 17, 2019

Gyuyoung Kim

The story of the webOS Chromium contribution over the past year

In this article, I share how I started the webOS Chromium upstreaming work, what webOS patches have been contributed by LG Electronics, and how I’ve contributed to Chromium.

First, let’s briefly describe the history of webOS. WebOS was created by Palm, Inc., which was acquired by HP in 2010; HP made the platform open source, so it then became Open webOS. In January 2014 the operating system was sold to LG Electronics, which has been shipping webOS on its TV and signage products ever since, and has been spreading webOS to more of its products.

The webOS uses Chromium to run web applications, so Chromium is a very important component in the webOS. Like other Chromium embedders, the webOS also has many downstream patches. LG Electronics has therefore tried to contribute its own downstream patches to the Chromium open source project, both to reduce the effort of catching up with the latest Chromium version and to improve the quality of the downstream patches. As one of LG Electronics’ contractors for the last one and a half years, I’ve been working on the webOS Chromium contribution since September 2017. So, let me explain the process of contributing:

1. The Corporate CLA

The Chromium project only accepts patches after the contributor signs a Contributor License Agreement (CLA). There are two kinds of CLA, one for individual contributors and one for corporate contributors. If a company signs the corporate CLA, its individual contributors are exempt from signing an individual CLA; however, they must use their corporate email address and join the Google group that was created when the corporate CLA was signed. LG Electronics signed the corporate CLA and was added to the AUTHORS file.

  • Corporate Contributor License Agreement (Link)
  • Individual Contributor License Agreement (Link)

2. List upstreamable patches in webOS

    After finishing the registration of the corporate CLA, I started to list the upstreamable webOS patches. It seemed to me that the patches fell into two categories: new features for the webOS, and bug fixes. In the case of new features, the patches were mainly there to improve performance or to make LG products like TVs and signage use less memory. I tried to list the upstreamable patches among them; the criteria were that a patch should either improve performance or provide a benefit on the desktop as well, which I thought would make it easier for owners to accept and merge the patch into the mainline.

3. What patches have been merged to Chromium mainline?

    Before uploading webOS patches, I landed patches to replace deprecated WTF and base utilities (WTF::RefPtr, base::MakeUnique) with their C++ standard equivalents, because I thought it would be good to show that LG Electronics had started to contribute to Chromium (an illustrative sketch of this kind of change follows the list below). After replacing all of them, I could start to contribute webOS patches in earnest. I’ve since merged webOS patches to reduce memory usage, release more used memory under OOM, add a new content API to suspend/resume DOM operation, and so on. Below is the list of the main patches I successfully merged.

    1. New content API to suspend/resume DOM operation
    2. Release more used memory under an out-of-memory situation
       • Note: According to the Chromium performance bot, the patches to reduce memory usage in RenderThreadImpl::ClearMemory could cut memory usage by up to 2 MB in the background.
       • Note: An OnMemoryPressure listener was added to the compositor layers through these patches, so the compositor now releases more used memory under OOM through the OOM handler. In my opinion, this is a very good contribution from the webOS.
    3. Introduce new command line switches for embedded devices
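
    As mentioned above, the preliminary work consisted of replacing deprecated utilities such as base::MakeUnique with their C++ standard equivalents. The following is a minimal, illustrative sketch of that kind of mechanical change; it is not actual webOS or Chromium code, just an example of the pattern.

      // Illustrative sketch only (not actual webOS/Chromium code): the kind of
      // mechanical replacement described above, swapping a deprecated helper
      // for its C++14 standard equivalent.
      #include <memory>

      struct Widget {
        int id = 0;
      };

      std::unique_ptr<Widget> CreateWidget() {
        // Before: return base::MakeUnique<Widget>();
        return std::make_unique<Widget>();  // After: standard C++14 helper.
      }

      int main() {
        std::unique_ptr<Widget> widget = CreateWidget();
        return widget->id;
      }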
4. Trace LG Electronics contribution stats

    As more webOS patches were merged to the Chromium mainline, I thought it would be good to run a tool that tracks all of LG Electronics’ Chromium contributions, so that LG Electronics’ contribution efforts are well documented. To do this I set up the LG Electronics Chromium contribution stats using the GitStats tool, which has been generating the stats every day.

    I was happy to work on the webOS upstreaming project over the past year. It was challenging work, because a downstream patch has to show some benefit in the Chromium mainline. I’m sure that LG Electronics will keep contributing good webOS patches, and I hope they become a good partner as well as a contributor in Chromium.

    by gyuyoung at January 17, 2019 09:12 AM

    January 10, 2019

    Manuel Rego

    An introduction to CSS Containment

    Igalia has recently been working on the implementation of css-contain in Chromium, providing some fixes and optimizations based on this standard. This is a brief blog post trying to give an introduction to the spec, explain the status of things, the work done during the past year, and some plans for the future.

    What’s css-contain?

    The main goal of the CSS Containment standard is to improve the rendering performance of web pages by allowing the isolation of a subtree from the rest of the document. This specification only introduces one new CSS property, called contain, with different possible values. Browser engines can use that information to implement optimizations and avoid doing extra work when they know which subtrees are independent of the rest of the page.

    Let’s explain what this is about and why it can bring performance improvements in complex websites. Imagine that you have a big HTML page which generates a complex DOM tree, but you know that some parts of that page are totally independent of the rest, and the content in those parts is modified at some point.

    Browser engines usually try to avoid doing more work than needed and use some heuristics to avoid spending more time than required. However, there are lots of corner cases and complex situations in which the browser needs to recompute the whole webpage. To improve these scenarios, the author has to identify which parts (subtrees) of their website are independent and isolate them from the rest of the page thanks to the contain property. Then, when there are changes in some of those subtrees, the rendering engine will be able to avoid doing any work outside of the subtree boundaries.

    Not everything comes for free: when you use contain there are some restrictions that affect those elements, so the browser can be totally certain it can apply optimizations without causing any breakage (e.g. you need to manually set the size of the element if you want to use size containment).

    The CSS Containment specification defines four values for the contain property, one for each type of containment:

    • layout: The internal layout of the element is totally isolated from the rest of the page, it’s not affected by anything outside and its contents cannot have any effect on the ancestors.
    • paint: Descendants of the element cannot be displayed outside its bounds, nothing will overflow this element (or if it does it won’t be visible).
    • size: The size of the element can be computed without checking its children, the element dimensions are independent of its contents.
    • style: The effects of counters and quotes cannot escape this element, so they are isolated from the rest of the page.
      Note that regarding style containment there is an ongoing discussion on the CSS Working Group about how useful it is (due to the narrowed scope of counters and quotes).

    You can combine the different types of containment as you wish, but the spec also provides two extra values that are a kind of “shorthand” for the other four:

    • content: Which is equivalent to contain: layout paint style.
    • strict: This is the same as having all four types of containment, so it’s equivalent to contain: layout paint size style.
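
    In CSS this is as simple as setting the property on the elements you want to isolate; here is a minimal sketch (the class names are just for illustration):

      /* Combining individual containment types explicitly */
      .widget {
        contain: layout paint;
      }

      /* Using the "shorthand" values described above */
      .card {
        contain: content; /* equivalent to: layout paint style */
      }

      .sidebar {
        contain: strict;  /* equivalent to: layout paint size style */
      }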

    Example

    Let’s show an example of how CSS Containment can help to improve the performance of a webpage.

    Imagine a page with lots of elements, in this case 10,000 elements like this:

      <div class="item">
        <div>Lorem ipsum...</div>
      </div>

    And that it modifies the content of one of the inner DIVs through the textContent attribute.

    If you don’t use css-contain, even when the change is on a single element, Chromium spends a lot of time on layout because it traverses the whole DOM tree (which in this case is big as it has 10,000 elements).

    CSS Containment Example DOM Tree

    Here is where the contain property comes to the rescue. In this example the DIV item has a fixed size, and the contents we’re changing in the inner DIV will never overflow it. So we can apply contain: strict to the item; that way the browser won’t need to visit the rest of the nodes when something changes inside an item, it can stop checking things at that element and avoid going outside.
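
    A minimal sketch of what this could look like (the sizes here are just illustrative; the important part is that size containment requires giving the item an explicit size, since its contents no longer influence it):

      .item {
        contain: strict;
        width: 200px;
        height: 60px;
      }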

    Notice that if the content overflows the item it will get clipped; also, if we don’t set a fixed size for the item, it’ll be rendered as an empty box, so nothing would be visible (actually, in this example the borders would still be present, but they would be the only visible thing).

    CSS Containment Example

    Despite how simple each of the items is in this example, we’re getting a big improvement by using CSS Containment: layout time goes down from ~4ms to ~0.04ms, which is a huge difference. Imagine what would happen if the DOM tree had very complex structures and contents but only a small part of the page got modified; if you can isolate that from the rest of the page you could get similar benefits.

    State of the art

    This is not a new spec, Chrome 52 shipped the initial support back in July 2016, but during the last year there has been some extra development related to it, and that’s what I want to highlight in this blog post.

    First of all, many specification issues have been fixed, and some of them imply changes in the implementations; most of this work has been carried out by Florian Rivoal in collaboration with the CSS Working Group.

    Not only that, but on the tests side Gérard Talbot has completed the test suite in the web-platform-tests (WPT) repository, which is really important to fix bugs in the implementations and ensure interoperability.

    In my case I’ve been working on the Chromium implementation fixing several bugs and interoperability issues and getting it up to date according to the last specification changes. I took advantage of the WPT test suite to do this work and also contributed back a bunch of tests there. I also imported Firefox tests into Chromium to improve interop (even did a small Firefox patch as part of this work).

    Last, it’s worth noting that Firefox has been actively working on the implementation of css-contain during the last year (you can test it by enabling the runtime flag layout.css.contain.enabled). Hopefully that will bring a second browser engine shipping the spec in the future.

    Wrap-up

    CSS Containment is a nice and simple specification that can be useful to improve web rendering performance in many different use cases. It’s true that currently it’s only supported by Chromium (remember that Firefox is working on it too) and that more improvements and optimizations can be implemented based on it, but it still seems to have huge potential.

    Igalia and Bloomberg working together to build a better web

    Once again, all the work from Igalia related to css-contain has been sponsored by Bloomberg as part of our ongoing collaboration.

    Bloomberg has some complex UIs that are taking advantage of css-contain to improve rendering performance; in future blog posts we’ll talk about some of these cases and the optimizations that have been implemented in the rendering engine to improve them.

    January 10, 2019 11:00 PM

    November 07, 2018

    Javier Fernández

    CSS Grid on LayoutNG: a Web Engines Hackfest story

    I had the pleasure of attending the Web Engines Hackfest last week, hosted and organized by Igalia at its HQ in A Coruña. I’m really proud of what we are achieving with this event; thanks so much to everybody involved in the organization but, especially, to the people attending. This year we had a lot of talent in a single place, hacking and sharing their expertise on the Web Platform; I really think we all have pushed the Web Platform forward during these days.

    As you may already know, I’m part of the Web Platform team at Igalia, working on the implementation of the CSS Grid Layout feature for the Blink and WebKit web engines. This work has been sponsored by Bloomberg, as part of the collaboration we started several years ago to improve the Web Platform in many areas, from JS (V8, JSC and even ChakraCore) to different modules of the layout engine (e.g. CSS features, editing/selection).

    Since the day I received the invitation to attend the hackfest, I knew that one of the tasks I wanted to hack on was the implementation of the CSS Grid feature in Chromium’s new layout engine (still experimental), known as LayoutNG. Having Christian Biesinger, one of the Google engineers working on LayoutNG, here during 3 days was too good an opportunity to pass up, so I decided to at least start this task. I asked him to give a lightning talk during the layout breakout session about the current status of the LayoutNG project, with a brief explanation of some of the most relevant details of its logic and its advantages over the current layout.

    Layout Breakout Session

    A small group of people interested in layout met to discuss the future of layout in different web engines. I attended with some folks representing Igalia, along with other people from Mozilla, Google, WebKit and ARM.

    Christian described the key parts of the new LayoutNG, which provides simpler code and generally better performance (although results are still preliminary, since it’s still under development). The concept of fragments has gained relevance in this new layout model, and currently inline and block layout is basically complete; multicolumn layout is quite advanced, while Flexbox is still in the early stages (although a substantial portion of the layout tests are passing now).

    We discussed the different strategies that Firefox, Chrome and Safari are following to redesign their layout logic, which carries a huge legacy codebase required to support the old web over the last years. However, browsers need to adapt to the new layout models that a modern Web Platform requires. Chrome’s LayoutNG is a clear bet, with a big team and strong determination; it seems it’ll be ready to ship in the first months of 2019. Firefox is also starting to implement a new layout design with Servo, although I couldn’t get details about its current status and plans. Finally, WebKit started a new project a few months ago called Next-Generation Layout, which tries to implement a new Layout Formatting Context (LFC) logic from scratch, getting rid of the huge technical debt acquired during the last years; although I couldn’t get confirmation, my impression is that it’s still an experimental project.

    We also had time to talk about the effort ARM is making towards better parallelization of the CSS parsing and style recalc logic, following an approach similar to Mozilla’s Stylo in Servo. It’s a very interesting initiative, but still quite experimental. There is some progress on specific codepaths, but it is still dealing with Oilpan (Blink’s garbage collector), which is the root cause of several issues that prevent an effective parallelization.

    Hacking, hacking, hacking, ….

    As I commented, this event is designed precisely to gather together some of the most brilliant minds on the Web Platform to discuss, analyze and hack on complex topics that are usually very difficult to handle when working remotely. I had a clear hacking task this time, so that’s why I decided to focus a bit more on coding. Although I had already assumed that implementing CSS Grid in LayoutNG would be a huge challenge, I decided to take it on and at least start the task. I took as a reference the Flexible Box implementation, which is under development right now and something Christian was partially involved in.

    As happened with the Flexible Box implementation, the first step was to redesign the logic so that we can get rid of the dependency on the old layout tree, in this case the LayoutGrid class. This has been a complex and long task, which took up quite a big part of my time during the hackfest. The following diagrams show the redesign effort achieved, which I’d admit is still a preliminary approach that needs to be refined:

    The next step was to implement a skeleton of the new LayoutNG grid algorithm. Thanks to Christian’s direction, I quickly figured out how to do it, and it looks something like this:

    namespace blink {

    NGGridLayoutAlgorithm::NGGridLayoutAlgorithm(NGBlockNode node,
                                                 const NGConstraintSpace& space,
                                                 NGBreakToken* break_token)
        : NGLayoutAlgorithm(node, space, ToNGBlockBreakToken(break_token)) {}

    // For now the algorithm just produces the fragment built so far for the
    // grid container; the actual track sizing and placement logic is missing.
    scoped_refptr<NGLayoutResult> NGGridLayoutAlgorithm::Layout() {
      return container_builder_.ToBoxFragment();
    }

    base::Optional<MinMaxSize> NGGridLayoutAlgorithm::ComputeMinMaxSize(
        const MinMaxSizeInput& input) const {
      // TODO Implement this.
      return base::nullopt;
    }

    }  // namespace blink
    

    Finally, I tried to implement the Grid layout algorithm, according to the CSS Grid Layout feature’s specification, using the new LayoutNG APIs. This is the most complex task, since I still have to learn how sizing and positioning functions are used in the new layout logic, especially how to use the new Fragments and ContainerBuilder concepts.

    I submitted a WIP CL so that anybody can take a look, give suggestions or continue with the work. My plan is to devote some time to this challenge every now and then, but I can’t set specific goals or a schedule for the time being. If anybody wants to speed up this task, perhaps it’d be possible to fund a project, which Igalia would be happy to participate in.

    Other Web Engines Hackfest stories

    I also tried to attend some of the talks given during the hackfest and participate in a few breakout sessions. I’ll now give my impressions of some of the ones I liked most.

    I enjoyed a lot the one given by Camille Lamy, Colin Blundell and Robert Kroeger (Google) about Chrome’s Servicification project. The new services design they are implementing is awesome, and it will surely improve Chrome’s modularity and codebase maintenance.

    I participated in the MathML breakout session, which is somewhat related to LayoutNG. Igalia launched a crowdfunding campaign to implement the MathML specification in Chrome, using the new LayoutNG APIs. We think that MathML could be a great success case for the new LayoutNG APIs, which have the goal of providing a stable API to implement new and complex layout models. This will provide flexibility to the web engine, giving an easier way to implement new layout models without depending too much on the Chrome development cycle. In a way, this development model could be similar to a polyfill, but integrated in the browser as native code instead of via external libraries.

    by jfernandez at November 07, 2018 12:23 PM

    October 20, 2018

    Manuel Rego

    Igalia at TPAC 2018

    Just a quick update before boarding for Lyon for TPAC 2018. This year 12 Igalians will be at TPAC: 10 employees (Álex García Castro, Daniel Ehrenberg, Javier Fernández, Joanmarie Diggs, Martin Robinson, Rob Buis, Sergio Villar, Thibault Saunier and myself) and 2 coding experience students (Oriol Brufau and Sven Sauleau). We will represent Igalia in the different working groups and breakout sessions.

    On top of that, Igalia will have a booth in the solutions showcase where we’ll have a few demos of our latest developments, like WebRTC, MSE, CSS Grid Layout, CSS Box Alignment, MathML, etc., showing them on some low-end boards like the Raspberry Pi using WPE, an optimized WebKit port for embedded platforms.

    Thread by W3C Developers announcing my talk.

    In my personal case, I’ll be attending the CSS Working Group (CSSWG) and Houdini Task Force meetings to follow the work Igalia has been doing on the implementation of different standards. In addition, I’ll be giving a talk about how to contribute to the evolution of CSS at the W3C Developer Meetup that happens on Monday. I’ll try to explain how easy it is nowadays to provide feedback to the CSSWG and have some influence on the different specifications.

    Tweet by Daniel Ehrenberg about the Web Platform position.

    Last but not least, the Igalia Web Platform Team is hiring. We’re looking for people willing to work on web standards, from the implementation in the different browser engines, to the discussions with the standards bodies or the definition of test suites. If you’re attending TPAC and you want to work at a flat company focused on free software development, you are probably a good candidate to join us. Read the position announcement and don’t hesitate to talk to any of us there about it.

    See you at TPAC tomorrow!

    October 20, 2018 10:00 PM

    October 11, 2018

    Andy Wingo

    heap object representation in spidermonkey

    I was having a look through SpiderMonkey's source code today and found something interesting about how it represents heap objects and wanted to share.

    I was first looking to see how to implement arbitrary-length integers ("bigints") by storing the digits inline in the allocated object. (I'll use the term "object" here, but from JS's perspective, bigints are rather values; they don't have identity. But I digress.) So you have a header indicating how many words it takes to store the digits, and the digits follow. This is how JavaScriptCore and V8 implementations of bigints work.
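
    To make the layout concrete, here is a rough sketch of that "header plus inline digits" idea; the names and fields are illustrative only, not the actual V8 or JSC representation, and a real engine would of course allocate the object from its GC-managed heap rather than with malloc.

      // Illustrative sketch only: a heap object whose header records how many
      // digit words follow it inline in the same allocation.
      #include <cstddef>
      #include <cstdint>
      #include <cstdlib>

      struct BigIntCell {
        uint32_t length;      // number of digit words stored after the header
        uintptr_t digits[1];  // first digit; the rest follow contiguously
      };

      BigIntCell* AllocateBigInt(uint32_t length) {
        // A single variable-sized allocation: header + `length` digit words.
        size_t bytes = offsetof(BigIntCell, digits) + length * sizeof(uintptr_t);
        auto* cell = static_cast<BigIntCell*>(std::malloc(bytes));
        cell->length = length;
        return cell;
      }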

    Incidentally, JSC's implementation was taken from V8. V8's was taken from Dart. Dart's was taken from Go. We might take SpiderMonkey's from Scheme48. Good times, right??

    When seeing if SpiderMonkey could use this same strategy, I couldn't find how to make a variable-sized GC-managed allocation. It turns out that in SpiderMonkey you can't do that! SM's memory management system wants to work in terms of fixed-sized "cells". Even for objects that store properties inline in named slots, that's implemented in terms of standard cell sizes. So if an object has 6 slots, it might be implemented as instances of cells that hold 8 slots.

    Truly variable-sized allocations seem to be managed off-heap, via malloc or other allocators. I am not quite sure how this works for GC-traced allocations like arrays, but let's assume that somehow it does.

    Anyway, the point of this blog post. I was looking to see which part of SpiderMonkey reserves space for type information. For example, almost all objects in V8 start with a "map" word. This is the object's "hidden class". To know what kind of object you've got, you look at the map word. That word points to information corresponding to a class of objects; it's not available to store information that might vary between objects of that same class.

    Interestingly, SpiderMonkey doesn't have a map word! Or at least, it doesn't have them on all allocations. Concretely, BigInt values don't need to reserve space for a map word. I can start storing data right from the beginning of the object.

    But how can this work, you ask? How does the engine know what the type of some arbitrary object is?

    The answer has a few interesting wrinkles. Firstly I should say that for objects that need hidden classes -- e.g. generic JavaScript objects -- there is indeed a map word. SpiderMonkey calls it a "Shape" instead of a "map" or a "hidden class" or a "structure" (as in JSC), but it's there, for that subset of objects.

    But not all heap objects need to have these words. Strings, for example, are values rather than objects, and in SpiderMonkey they just have a small type code rather than a map word. But you know it's a string rather than something else in two ways: one, for "newborn" objects (those in the nursery), the GC reserves a bit to indicate whether the object is a string or not. (Really: it's specific to strings.)

    For objects promoted out to the heap ("tenured" objects), objects of similar kinds are allocated in the same memory region (in kind-specific "arenas"). There are about a dozen trace kinds, corresponding to arena kinds. To get the kind of object, you find its arena by rounding the object's address down to the arena size, then look at the arena to see what kind of objects it has.
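
    In code, that address-rounding trick might look something like the sketch below; the arena size and type names are assumptions for illustration, not SpiderMonkey's actual definitions.

      #include <cstdint>

      // Assumed power-of-two arena size; the engine's real value may differ.
      constexpr uintptr_t kArenaSize = 4 * 1024;

      enum class TraceKind { Object, String, BigInt /* ... about a dozen ... */ };

      struct Arena {
        TraceKind kind;  // every cell allocated in this arena has the same kind
        // ... allocation bitmaps, free lists, and the cells themselves ...
      };

      // Round the cell's address down to its arena boundary and read the kind.
      TraceKind GetTraceKind(const void* cell) {
        uintptr_t addr = reinterpret_cast<uintptr_t>(cell);
        auto* arena = reinterpret_cast<const Arena*>(addr & ~(kArenaSize - 1));
        return arena->kind;
      }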

    There's another cell bit reserved to indicate that an object has been moved, and that the rest of the bits have been overwritten with a forwarding pointer. These two reserved bits mostly don't conflict with any use a derived class might want to make from the first word of an object; if the derived class uses the first word for integer data, it's easy to just reserve the bits. If the first word is a pointer, then it's probably always aligned to a 4- or 8-byte boundary, so the low bits are zero anyway.

    The upshot is that while we won't be able to allocate digits inline to BigInt objects in SpiderMonkey in the general case, we won't have a per-object map word overhead; and we can optimize the common case of digits requiring only a word or two of storage to have the digit pointer point to inline storage. GC is about compromise, and it seems this can be a good one.

    Well, that's all I wanted to say. Looking forward to getting BigInt turned on upstream in Firefox!

    by Andy Wingo at October 11, 2018 02:33 PM

    October 09, 2018

    Manuel Rego

    Web Engines Hackfest 2018

    One year more, a new edition of the Web Engines Hackfest was organized and hosted by Igalia. This time it was the tenth edition: the first five under the WebKitGTK+ Hackfest name, and another five editions with the new, broader Web Engines Hackfest name. A group of Igalians, including myself, have been organizing this event. These have been some busy days for us, but we hope everyone enjoyed it and had a great time during the hackfest.

    This was the biggest edition ever: we were 70 people from 15 different companies, including Apple, Google and Mozilla (three of the main browser vendors). It seems the hackfest is getting more popular, and several attendees keep coming back edition after edition, which shows they enjoy it. This is really awesome and we’re thrilled about the future of this event.

    Talks

    The presentations are not the main part of the event, but I think it’s worth doing a quick recap of the ones we had this year:

    • Behdad Esfahbod and Dominik Röttsches from Google talked about Variable Fonts and the implementation in Chromium. It’s always amazing to check the possibilities of this new technology.

    • Camille Lamy, Colin Blundell and Robert Kroeger from Google presented the Servicification effort in the Chromium project, which is trying to modularize Chromium into smaller parts.

    • Žan Doberšek from Igalia gave an update on WPE WebKit. The port is now official and it’s used every day in more and more low-end devices.

    • Thibault Saunier from Igalia complemented Žan’s presentation talking about the GStreamer based WebRTC implementation in WebKitGTK+ and WPE ports. Really cool to see WebRTC arriving to more browsers and web engines.

    • Antonio Gomes and Jeongeun Kim from Igalia explained the status of Chromium on Wayland and its way to becoming fully supported upstream. This work will help to use Chromium on embedded systems.

    • Youenn Fablet from Apple closed the event talking about Service Workers support on WebKit. This is a key technology for Progressive Web Apps (PWA) and is now available in all major browsers.

    The slides of the talks are available on the website and wiki. The videos will be published soon in our YouTube channel.

    Some pictures from Web Engines Hackfest 2018 (Flickr album)

    Other topics

    During the event there were breakout sessions about many different topics. In this section I’m going to talk about the ones I’m most interested in.

    • Web Platform Tests (WPT)

      This is a key topic to improve interoperability on the web platform. Simon Pieters started the session with an introduction to WPT just in case someone was not aware of the repository and how it works. For the rest of the session we discussed the status of WPT on the different browsers.

      Chromium and Firefox have an automatic two-way (import/export) synchronization process, so the tests can be easily shared between both implementations. On the other hand, WebKit still has a somewhat manual process on the table: neither import nor export is totally automatic, although there are some scripts that help with the process.

      Apart from that, WPT is a first-class citizen in Chromium, and the encouraged way to do new developments. In Firefox it’s still not there, as the test suites are not run in all the possible configurations yet (but they’re getting there).

      Finally the WPT dashboard is showing results for the most recent unstable releases of the different browsers, which is really cool despite being somehow hidden on the UI: https://wpt.fyi/results/?label=experimental.

    • LayoutNG

      Christian Biesinger gave an introduction to the LayoutNG project in Blink, where Google is rewriting Chromium’s layout engine. He showed the main ideas and concepts behind this effort and navigated the code showing some examples. According to Christian, things are getting ready, and LayoutNG could be shipping in the coming months for inline and block layout.

      On top of questions about LayoutNG, we briefly mentioned how other browsers are also trying to improve the layout code: Firefox with Servo layout and WebKit with Layout Formatting Context (LFC) aka Layout Reloaded. It seems quite clear that the current layout engines are getting to their limits and people are looking for new solutions.

    • Chromium downstream

      Several companies (Google included) have to maintain downstream forks of Chromium with their own customizations to fit their particular use cases and hardware platforms.

      Colin Blundell explained what the process of maintaining the downstream version of Chrome for iOS looks like. After trying many different strategies, the best solution was rebasing their changes 2-3 times per day. That way the conflicts they had to deal with were much simpler to resolve; otherwise it was not possible for them to cope with all the upstream changes. Note that he mentioned that one (rotating) full-time resource was required to perform this job in time.

      It was good to share the experiences of different companies that are facing very similar issues for this kind of work.

    Thank you very much

    Just to close this post, big thanks to all the people who attended the event; without you the hackfest wouldn’t make any sense at all. People are key to this event, where discussions and conversations are one of the main parts of it.

    Of course, special acknowledgements go to the speakers for the hard work they put into their lovely talks.

    Finally, I can’t forget to thank the Web Engines Hackfest 2018 sponsors: Google and Igalia. Without their support this event wouldn’t be possible.

    Web Engines Hackfest 2018 sponsors: Google and Igalia

    Looking forward to a new edition!

    October 09, 2018 10:00 PM