CoreFoundation (used by macOS and iOS, and thus Safari)
CFNetwork (used by iTunes on Windows… I think only iTunes?)
cURL (used by most Windows applications, also PlayStation)
libsoup (used by WebKitGTK+ and WPE WebKit)
One guess which of those we’re going to be talking about in this post. Yeah, of course, libsoup! If you’re not familiar with libsoup, it’s the GNOME HTTP library. Why is it called libsoup? Because before it was an HTTP library, it was a SOAP library. And apparently somebody thought that when Mexican people say “soap,” it often sounds like “soup,” and also thought that this was somehow both funny and a good basis for naming a software library. You can’t make this stuff up.
Anyway, libsoup is built on top of GIO’s sockets APIs. Did you know that GIO has GObject wrappers for BSD sockets? Well it does. If you fancy lower-level APIs, create a GSocket and have a field day with it. Want something a bit more convenient? Use GSocketClient to create a GSocketConnection connected to a GNetworkAddress. Pretty straightforward. Everything parallels normal BSD sockets, but the API is nice and modern and GObject, and that’s really all there is to know about it. So when you point WebKitGTK+ at an HTTP address, libsoup is using those APIs behind the scenes to handle connection establishment. (We’re glossing over details like “actually implementing HTTP” here. Trust me, libsoup does that too.)
Things get more fun when you want to load an HTTPS address, since we have to add TLS to the picture, and we can’t have TLS code in GIO or GLib due to this little thing called “copyright law.” See, there are basically three major libraries used to implement TLS on Linux, and they all have problems:
OpenSSL is by far the most popular, but it’s, hm, shall we say technically non-spectacular. There are forks, but the forks have problems too (ask me about BoringSSL!), so forget about them. The copyright problem here is that the OpenSSL license is incompatible with the GPL. (Boring details: Red Hat waves away this problem by declaring OpenSSL a system library qualifying for the GPL’s system library exception. Debian has declared the opposite, so Red Hat’s choice doesn’t gain you anything if you care about Debian users. The OpenSSL developers are trying to relicense to the Apache license to fix this, but this process is taking forever, and the Apache license is still incompatible with GPLv2, so this would make it impossible to use GPLv2+ software except under the terms of GPLv3+. Yada yada details.) So if you are writing a library that needs to be used by GPL applications, like say GLib or libsoup or WebKit, then it would behoove you to not use OpenSSL.
GnuTLS is my favorite from a technical standpoint. Its license is LGPLv2+, which is unproblematic everywhere, but some of its dependencies are licensed LGPLv3+, and that’s uncomfortable for many embedded systems vendors, since LGPLv3+ contains some provisions that make it difficult to deny you your freedom to modify the LGPLv3+ software. So if you rely on embedded systems vendors to fund the development of your library, like say libsoup or WebKit, then you’re really going to want to avoid GnuTLS.
NSS is used by Firefox. I don’t know as much about it, because it’s not as popular. I get the impression that it’s more designed for the needs of Firefox than as a Linux system library, but it’s available, and it works, and it has no license problems.
So naturally GLib uses NSS to avoid the license issues of OpenSSL and GnuTLS, right?
Haha no, it uses a dynamically-loadable extension point system to allow you to pick your choice of OpenSSL or GnuTLS! (Support for NSS was started but never finished.) This is OK because embedded systems vendors don’t use GPL applications and have no problems with OpenSSL, while desktop Linux users don’t produce tivoized embedded systems and have no problems with LGPLv3. So if you’re using desktop Linux and point WebKitGTK+ at an HTTPS address, then GLib is going to load a GIO extension point called glib-networking, which implements all of GIO’s TLS APIs — notably GTlsConnection and GTlsCertificate — using GnuTLS. But if you’re building an embedded system, you simply don’t build or install glib-networking, and instead build a different GIO extension point called glib-openssl, and libsoup will create GTlsConnection and GTlsCertificate objects based on OpenSSL instead. Nice! And if you’re Centricular and you’re building GStreamer for Windows, you can use yet another GIO extension point, glib-schannel, for your native Windows TLS goodness, all hidden behind GTlsConnection so that GStreamer (or whatever application you’re writing) doesn’t have to know about SChannel or OpenSSL or GnuTLS or any of that sad complexity.
Now you know why the TLS extension point system exists in GIO. Software licenses! And you should not be surprised to learn that direct use of any of these crypto libraries is banned in libsoup and WebKit: we have to cater to both embedded system developers and to GPL-licensed applications. All TLS library use is hidden behind the GTlsConnection API, which is really quite nice to use because it inherits from GIOStream. You ask for a TLS connection, have it handed to you, and then read and write to it without having to deal with any of the crypto details.
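The idea behind that extension point system can be sketched in a few lines of plain Python. This is only a hedged illustration of the concept of runtime backend selection; the names here are invented, and GIO’s real mechanism is GIOExtensionPoint with loadable C modules, not anything like this class:

```python
# Toy model of GIO's TLS extension point: backends register themselves,
# and whichever is installed wins at runtime. All names here are invented
# for illustration; GIO's real mechanism is GIOExtensionPoint in C.
class TlsBackendRegistry:
    def __init__(self):
        self._backends = {}

    def register(self, name, connection_factory):
        self._backends[name] = connection_factory

    def default_backend(self, preferred=None):
        if preferred and preferred in self._backends:
            return self._backends[preferred]
        # In real GIO, the choice depends on which extension module
        # (glib-networking, glib-openssl, ...) is installed on the system.
        return next(iter(self._backends.values()))

registry = TlsBackendRegistry()
registry.register("gnutls", lambda host: f"GTlsConnection[gnutls]->{host}")
registry.register("openssl", lambda host: f"GTlsConnection[openssl]->{host}")

# Application code only ever sees the GTlsConnection-like object,
# never the crypto library behind it.
connect = registry.default_backend()
print(connect("example.com"))  # GTlsConnection[gnutls]->example.com
```

The important property is the last line: the caller reads and writes through one abstract interface, and swapping the backend requires no application changes at all.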
As a recap, the layering here is: WebKit -> libsoup -> GIO (GLib) -> glib-networking (or glib-openssl or glib-schannel).
So when Epiphany fails to load a webpage, and you’re looking at a TLS-related error, glib-networking is probably to blame. If it’s an HTTP-related error, the fault most likely lies in libsoup. Same for any other GNOME applications that are having connectivity troubles: they all use the same network stack. And there you have it!
P.S. The glib-openssl maintainers are helping merge glib-openssl into glib-networking, so that glib-networking will offer a choice of GnuTLS or OpenSSL, making glib-openssl obsolete. This is still a work in progress. glib-schannel will be next!
P.P.S. libcurl also gives you multiple choices of TLS backend, but makes you choose at build time, whereas with GIO extension points it’s actually possible to choose at runtime from the selection of installed extension points. The libcurl approach is fine in theory, but creates some weird problems in practice, e.g. different backends with different bugs are used on different distributions. On Fedora, it used to use NSS, but now uses OpenSSL, which is fine for Fedora, but would be a license problem elsewhere. Debian actually builds several different backends and gives you a choice, unlike everywhere else. But I digress.
To avoid this bug, downgrade to mesa-18.2.2-1.fc29:
$ sudo dnf downgrade mesa*
You can also update to mesa-18.2.4-2.fc29, but this build has not yet reached updates-testing, let alone stable, so downgrading is easier for now. Another workaround is to run your application with accelerated compositing mode disabled, to avoid OpenGL usage:
$ WEBKIT_DISABLE_COMPOSITING_MODE=1 epiphany
On the bright side of things, from all the bug reports I’ve received over the past two days I’ve discovered that lots of people use Epiphany and notice when it’s broken. That’s nice!
Huge thanks to Dave Airlie for quickly preparing the fixed mesa update, and to Jakub Jelinek for handling the same for GCC.
Last month, I attended the Web Engines Hackfest (hosted by Igalia in A Coruña, Spain) and also the WebKit Contributors Meeting (hosted by Apple in San Jose, California). These are easily the two biggest WebKit development events of the year, and it’s always amazing to meet everyone in person yet again. A Coruña is an amazing city, and every browser developer ought to visit at least once. And the Contributors Meeting is a no-brainer event for WebKit developers.
One of the main discussion points this year was Media Source Extensions (MSE). MSE is a JavaScript API that lets web pages control how media is downloaded and buffered. Until recently, if you were to play a YouTube video in Epiphany, you’d notice that the video loads way faster than it does in other browsers. That’s because, until recently, WebKitGTK+ had no support for MSE. In other browsers, YouTube uses MSE to limit the speed at which video is downloaded, in order to reduce wasted bandwidth in case you stop watching the video before it ends. But with WebKitGTK+, MSE was not available, so videos would load as quickly as possible. MSE also makes it harder for browsers to offer the ability to download videos; you’ll notice that neither Firefox nor Chrome offer to download videos in their context menus, a feature that’s been available in Epiphany for as long as I can remember.
So that sounds like it’s good to not have MSE. Well, the downside is that YouTube requires it in order to receive HD videos, to avoid that wasted bandwidth and to make it harder for users to download HD videos. And so WebKitGTK+ users have been limited to 720p video with H.264 and 480p video with WebM, where other browsers had access to 1080p and 1440p video. I’d been stuck with 480p video on Fedora for so long, I’d forgotten that internet video could look good.
Unfortunately, WebKitGTK+ was quite late to implement MSE. All other major browsers turned it on several years ago, but WebKitGTK+ dawdled. There was some code to support MSE, but it didn’t really work, and was disabled. And so it came to pass that, in September of this year, YouTube began to require MSE to access any WebM video, and we had a crisis. We don’t normally enable major new features in stable releases, but this was an exceptional situation and users would not be well-served by delaying until the next release cycle. So within a couple weeks, we were able to release WebKitGTK+ 2.22.2 and Epiphany 3.30.1 (both on September 21), and GStreamer 1.14.4 (on October 2, thanks to Tim-Philipp Müller for expediting that release). Collectively, these releases enabled basic video playback with MSE for users of GNOME 3.30. And if you still use GNOME 3.28, worry not: you are still supported and can get MSE if you update to Epiphany 3.28.5 and also have the aforementioned versions of WebKitGTK+ and GStreamer.
MSE in WebKitGTK+ 2.22.2 had many rough edges because it was a mad rush to get the feature into a minimally-viable state, but those issues have been polished off in 2.22.3, which we released earlier this week on October 29. Be sure you have WebKitGTK+ 2.22.3, plus GStreamer 1.14.4, for a good experience on YouTube. Unfortunately we can’t provide support for older software versions anymore: if you don’t have GStreamer 1.14.4, then you’ll need to configure WebKitGTK+ with -DENABLE_MEDIA_SOURCE=OFF at build time and suffer from lack of MSE.
Epiphany 3.28.1 uses WebKitSettings to turn on the “enable-mediasource” setting. Turn that on if your application wants MSE now (if it’s a web browser, it certainly does). This setting will be enabled by default in WebKitGTK+ 2.24. Huge thanks to the talented developers who made this feature possible! Enjoy your 1080p and 1440p video.
When building WebKitGTK+, it’s a good idea to stick to the default values for the build options. If you’re building some sort of embedded system and really know what you’re doing, then OK, it might make sense to change some settings and disable some stuff. But Linux distros are generally well-advised to stick to the defaults to avoid creating problems for users.
One exception is if you need to disable certain features to avoid newer dependencies when building WebKit for older systems. For example, Ubuntu 18.04 disables web fonts (ENABLE_WOFF2=OFF) because it doesn’t have the libbrotli and libwoff2 dependencies that are required for that feature to work, hence some webpages will display using subpar fonts. And distributions shipping older versions of GStreamer will need to disable the ENABLE_MEDIA_SOURCE option (which is missing from the below feature list by mistake), since that requires the very latest GStreamer to work.
Other exceptions are the ENABLE_GTKDOC and ENABLE_MINIBROWSER settings, which distros do want. ENABLE_GTKDOC is disabled by default because it’s slow to build, and ENABLE_MINIBROWSER because, well, actually I don’t know why, you always want that one and it’s just annoying to find it’s not built.
OK, but really now, other than those exceptions, you should probably leave the defaults alone.
The feature list that prints when building WebKitGTK+ looks like this:
-- ENABLE_ACCELERATED_2D_CANVAS .......... OFF
-- ENABLE_DRAG_SUPPORT ................... ON
-- ENABLE_GEOLOCATION .................... ON
-- ENABLE_GLES2 .......................... OFF
-- ENABLE_GTKDOC ......................... OFF
-- ENABLE_ICONDATABASE ................... ON
-- ENABLE_INTROSPECTION .................. ON
-- ENABLE_JIT ............................ ON
-- ENABLE_MINIBROWSER .................... OFF
-- ENABLE_OPENGL ......................... ON
-- ENABLE_PLUGIN_PROCESS_GTK2 ............ ON
-- ENABLE_QUARTZ_TARGET .................. OFF
-- ENABLE_SAMPLING_PROFILER .............. ON
-- ENABLE_SPELLCHECK ..................... ON
-- ENABLE_TOUCH_EVENTS ................... ON
-- ENABLE_VIDEO .......................... ON
-- ENABLE_WAYLAND_TARGET ................. ON
-- ENABLE_WEBDRIVER ...................... ON
-- ENABLE_WEB_AUDIO ...................... ON
-- ENABLE_WEB_CRYPTO ..................... ON
-- ENABLE_X11_TARGET ..................... ON
-- USE_LIBHYPHEN ......................... ON
-- USE_LIBNOTIFY ......................... ON
-- USE_LIBSECRET ......................... ON
-- USE_SYSTEM_MALLOC ..................... OFF
-- USE_WOFF2 ............................. ON
And, aside from the exceptions noted above, those are probably the options you want to ship with.
Why are some things disabled by default? ENABLE_ACCELERATED_2D_CANVAS is OFF by default because it is experimental (i.e. not great :) and requires CairoGL, which has been available in most distributions for about half a decade now, but still hasn’t reached Debian, because the Debian developers know that the Cairo developers consider CairoGL experimental (i.e. not great!). Many of our developers use Debian, and we’re not keen on having two separate sets of canvas bugs depending on whether you’re using Debian or not, so best keep this off for now. ENABLE_GLES2 switches you from desktop GL to GLES, which may be needed for embedded systems with crap proprietary graphics drivers, but is certainly not what you want when building for a general-purpose distribution with mesa. Then ENABLE_QUARTZ_TARGET is for building on macOS, not for Linux. And then we come to USE_SYSTEM_MALLOC.
USE_SYSTEM_MALLOC disables WebKit’s bmalloc memory allocator (“fast malloc”) in favor of glibc malloc. bmalloc is performance-optimized for macOS, and I’m uncertain how its performance compares to glibc malloc on Linux. Doesn’t matter really, because bmalloc contains important heap security features that will be disabled if you switch to glibc malloc, and that’s all you need to know to decide which one to use. If you disable bmalloc, you lose the Gigacage, isolated heaps, heap subspaces, etc. I don’t pretend to understand how any of those things work, so I’ll just refer you to this explanation by Sam Brown, who sounds like he knows what he’s talking about. The point is that, if an attacker has found a memory vulnerability in WebKit, these heap security features make it much harder to exploit and take control of users’ computers, and you don’t want them turned off.
USE_SYSTEM_MALLOC is currently enabled (bad!) in openSUSE and SUSE Linux Enterprise 15, presumably because when the Gigacage was originally introduced, it crashed immediately for users who set address space (virtual memory allocation) limits. Gigacage works by allocating a huge address space to reduce the chances that an attacker can find pointers within that space, similar to ASLR, so limiting the size of the address space prevents Gigacage from working. At first we thought it made more sense to crash than to allow a security feature to silently fail, but we got a bunch of complaints from users who use ulimit to limit the address space used by processes, and also from users who disable overcommit (which is required for Gigacage to allocate ludicrous amounts of address space), and so nowadays we just silently disable Gigacage instead if enough address space for it cannot be allocated. So hopefully there’s no longer any reason to disable this important security feature at build time! Distributions should be building with the default USE_SYSTEM_MALLOC=OFF.
which all looks pretty reasonable to me: certain features that require “newer” dependencies are disabled on the old distros, and NPAPI plugins are not supported in the enterprise distro, and JIT doesn’t work on odd architectures. I would remove the ENABLE_JIT=OFF lines only because WebKit’s build system should be smart enough nowadays to disable it automatically to save you the trouble of thinking about which architectures the JIT works on. And I would also remove the -DUSE_SYSTEM_MALLOC=ON line to ensure users are properly protected.
Just a quick update before boarding the plane to Lyon for TPAC 2018.

This year 12 Igalians will be at TPAC: 10 employees (Álex García Castro, Daniel Ehrenberg, Javier Fernández, Joanmarie Diggs, Martin Robinson, Rob Buis, Sergio Villar, Thibault Saunier and myself) and 2 Coding Experience students (Oriol Brufau and Sven Sauleau). We will represent Igalia in the different working groups and breakout sessions. On top of that, Igalia will have a booth in the solutions showcase, where we’ll have a few demos of our latest developments, like WebRTC, MSE, CSS Grid Layout, CSS Box Alignment, MathML, etc., showing them on low-end boards like the Raspberry Pi using WPE, an optimized WebKit port for embedded platforms.
In my personal case, I’ll be attending the CSS Working Group (CSSWG) and Houdini Task Force meetings, to follow the work Igalia has been doing on the implementation of different standards. In addition, I’ll be giving a talk about how to contribute to the evolution of CSS at the W3C Developer Meetup that happens on Monday. I’ll try to explain how easy it is nowadays to provide feedback to the CSSWG and have some influence on the different specifications.
Ever wanted to work on the design, specification and implementation of new web platform features? Does participating from anywhere in the world in a flat cooperative doing free software sound good? Igalia's hiring a web platform engineer! https://t.co/75A6I0zEfe
Last but not least, the Igalia Web Platform Team is hiring. We’re looking for people willing to work on web standards, from the implementation in the different browser engines, to the discussions with the standards bodies or the definition of test suites. If you’re attending TPAC and you want to work at a flat company focused on free software development, you are probably a good candidate to join us. Read the position announcement and don’t hesitate to talk to any of us there about it.
One more year, and a new edition of the Web Engines Hackfest was arranged by Igalia. This was the tenth edition: the first five under the WebKitGTK+ Hackfest name, and another five under the new, broader name Web Engines Hackfest. A group of Igalians, including myself, have been organizing this event. These have been busy days for us, but we hope everyone enjoyed it and had a great time during the hackfest.

This was the biggest edition ever: we were 70 people from 15 different companies, including Apple, Google and Mozilla (three of the main browser vendors). It seems the hackfest is getting more popular, and several attendees keep coming back edition after edition, which shows they enjoy it. This is really awesome and we’re thrilled about the future of this event.
The presentations are not the main part of the event, but I think it’s worth doing a quick recap of the ones we had this year:

Behdad Esfahbod and Dominik Röttsches from Google talked about Variable Fonts and their implementation in Chromium. It’s always amazing to check the possibilities of this new technology.

Camille Lamy, Colin Blundell and Robert Kroeger from Google presented the Servicification effort in the Chromium project, which is trying to modularize Chromium into smaller parts.

Žan Doberšek from Igalia gave an update on WPE WebKit. The port is now official and is used every day in more and more low-end devices.

Thibault Saunier from Igalia complemented Žan’s presentation, talking about the GStreamer-based WebRTC implementation in the WebKitGTK+ and WPE ports. Really cool to see WebRTC arriving in more browsers and web engines.

Antonio Gomes and Jeongeun Kim from Igalia explained the status of Chromium on Wayland and its way to becoming fully supported upstream. This work will help to use Chromium on embedded systems.

Youenn Fablet from Apple closed the event, talking about Service Workers support in WebKit. This is a key technology for Progressive Web Apps (PWA) and is now available in all major browsers.
During the event there were breakout sessions about many different topics. In this section I’m going to talk about the ones I’m most interested in.

Web Platform Tests (WPT)

This is a key topic for improving interoperability on the web platform. Simon Pieters started the session with an introduction to WPT, just in case someone was not aware of the repository and how it works. For the rest of the session we discussed the status of WPT in the different browsers. Chromium and Firefox have an automatic two-way (import/export) synchronization process, so the tests can be easily shared between both implementations. WebKit, on the other hand, still has a somewhat manual process: neither import nor export is totally automatic, though there are some scripts that help with the process. Apart from that, WPT is a first-class citizen in Chromium, and the encouraged way to do new developments. Firefox is not quite there yet, as the test suites are not run in all the possible configurations (but they’re getting there).
Christian Biesinger gave an introduction to the LayoutNG project in Blink, where Google is rewriting Chromium’s layout engine. He showed the main ideas and concepts behind this effort and navigated the code showing some examples. According to Christian, things are getting ready, and LayoutNG could be shipping in the coming months for inline and block layout.

On top of questions about LayoutNG, we briefly mentioned how other browsers are also trying to improve their layout code: Firefox with Servo layout, and WebKit with Layout Formatting Context (LFC), aka Layout Reloaded. It seems quite clear that the current layout engines are getting to their limits and people are looking for new solutions.
Several companies (Google included) have to maintain downstream forks of Chromium with their own customizations to fit their particular use cases and hardware platforms. Colin Blundell explained the process of maintaining the downstream version of Chrome for iOS. After trying many different strategies, the best solution was rebasing their changes 2-3 times per day. That way the conflicts they had to deal with were much simpler to resolve; otherwise it was not possible for them to cope with all the upstream changes. Note that he mentioned that one (rotating) full-time resource was required to perform this job in time. It was good to share the experiences of different companies that are facing very similar issues in this kind of work.
Thank you very much

Just to close this post, big thanks to all the people attending the event; without you the hackfest wouldn’t make sense at all. People are key for this event, where discussions and conversations are one of the main parts of it. Of course, special acknowledgments to the speakers for the hard work they put into their lovely talks. Finally, I couldn’t forget to thank the Web Engines Hackfest 2018 sponsors: Google and Igalia. Without their support this event wouldn’t be possible.
Web Engines Hackfest 2018 sponsors: Google and Igalia
This is a blog post about a change of behavior in CSS Grid Layout related to percentage row tracks and gutters in grid containers with indefinite height. Igalia has just implemented the change, which can affect some websites out there. So here I am going to explain several things about how percentages work in CSS and all the issues around them; of course, I will also explain the change we are making in Grid Layout and how to keep your previous behavior in the new version with very simple changes.

Sorry for the length, but I have been dealing with these issues since 2015 (probably earlier, but that is the date of the first commit I found about this topic), and I went deep explaining the concepts. Probably the post has some mistakes, as this topic is not simple at all, but it represents a kind of brain dump of my knowledge about it.
Percentages and definite sizes
This is the easy part: if you have an element with fixed width and height, resolving percentages on its children’s dimensions is really simple; they are just computed against the width or height of the containing block. A simple example:
Example of percentage dimensions in a containing block with definite sizes
Things are a bit trickier for percentage margins and paddings. In the inline direction (width in horizontal writing mode) they work as expected and are resolved against the inline size. However, in the block direction (height) they are not resolved against the block size (as one might initially expect) but against the inline size (width) of the containing block. Again, a very simple example:
Example of percentage margins in a containing block with definite sizes
Note that there is something more here: in both the Flexbox and Grid Layout specifications it was stated in the past that percentage margins and paddings resolve against their corresponding dimension, for example inline margins against the inline axis and block margins against the block axis. This was implemented like that in Firefox and Edge, but Chromium and WebKit kept the usual behavior of always resolving against the inline size. So for a while the spec allowed resolving them either way.
Percentages and indefinite sizes

The first question is: what is an indefinite size? The simple answer is that a definite size is one you can calculate without taking into account the contents of the element. An indefinite size is the opposite: in order to compute it you need to check the contents first.
But then, what happens when the containing block’s dimensions are indefinite? For example, a floated element has indefinite width (unless otherwise manually specified), and a regular block has indefinite height by default (height: auto). For heights this is very simple: percentages are directly ignored, so they have no effect on the element; they are treated as auto.
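That height rule can be sketched as a tiny helper (a simplified model for illustration, not actual browser code; the function name is made up):

```python
def resolve_percentage_height(percentage, containing_height):
    """Resolve a percentage height against the containing block.

    containing_height is None when the containing block's height is
    indefinite (height: auto); in that case the percentage is ignored
    and the used value behaves as 'auto'.
    """
    if containing_height is None:
        return "auto"  # indefinite: the percentage has no effect
    return percentage / 100 * containing_height

# 50% of a definite 400px containing block resolves to 200px...
print(resolve_percentage_height(50, 400))   # 200.0
# ...but against an indefinite height it is treated as auto.
print(resolve_percentage_height(50, None))  # auto
```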
For widths it starts to get funny. Web rendering engines have two phases to compute the width of an element: a first one to compute the minimum and maximum intrinsic widths (basically the minimum and maximum width of its contents), and a second one to compute the final width for that box.
So let’s use an example to explain this properly. Before getting into that, let me tell you that I am going to use the Ahem font in some examples, as it makes it very easy to know the size of the text and resolve the percentages accordingly: if we use font: 50px/1 Ahem; we know that the size of an X character is a square of 50x50 pixels.
Example of intrinsic width without constraints
The browser first calculates the intrinsic widths: as the minimum it computes 250px (the size of the smallest word, XXXXX in this case), and as the maximum it computes 400px (the size of the whole text without line breaking, XX XXXXX). So after this phase the browser knows that the element should have a width between 250px and 400px.

Then, during the layout phase, the browser decides the final size. If there are no constraints imposed by the containing block, it will use the maximum intrinsic width (400px in this case). But if you have a wrapper with a 300px width, the element will have to use 300px as its width. And if you have a wrapper smaller than the minimum intrinsic width, for example 100px, the element will still use the minimum 250px as its size. This is a quick and dirty explanation, but I hope it is useful to get the general idea.
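The two phases for the Ahem example can be sketched like this (a simplified model with made-up helper names, not real layout engine code):

```python
AHEM_PX = 50  # with font: 50px/1 Ahem, every character is 50px wide

def intrinsic_widths(text):
    """Phase 1: minimum and maximum intrinsic widths of a run of text."""
    words = text.split()
    min_width = max(len(w) for w in words) * AHEM_PX  # longest unbreakable word
    max_width = len(text) * AHEM_PX                   # whole text on one line
    return min_width, max_width

def final_width(text, available=None):
    """Phase 2: the used width, given the constraint from the containing block.

    With no constraint the maximum intrinsic width wins; a constraint is
    honored, but never below the minimum intrinsic width.
    """
    min_w, max_w = intrinsic_widths(text)
    if available is None:
        return max_w
    return max(min(available, max_w), min_w)

print(intrinsic_widths("XX XXXXX"))  # (250, 400)
print(final_width("XX XXXXX"))       # 400: no constraint, use the maximum
print(final_width("XX XXXXX", 300))  # 300: constrained by the wrapper
print(final_width("XX XXXXX", 100))  # 250: never below the minimum
```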
Example of intrinsic width with different constraints
In order to resolve percentage widths (in the indefinite-width situations), the browser does a different thing depending on the phase. During intrinsic size computations the percentage width is ignored (treated as auto, like for heights), but in the layout phase the width is resolved against the intrinsic size computed earlier. Trying to summarize the above paragraphs, we can say that somehow the width is only indefinite while the browser is computing the intrinsic width of the element; afterwards, during the actual layout, the width is considered definite and percentages are resolved against it.
So now let’s see an example of indefinite dimensions and percentages:
Example of percentage dimensions in a containing block with indefinite sizes
First, the size of the magenta box is calculated based on its contents; as it does not have any constraint, it uses the maximum intrinsic width (the length of Hello world!). Then, as you can see, the width of the cyan box is 50% of the text length, but the height is the same as if we used height: auto (the default value), so the 50% height is ignored.
For margins and paddings things work more or less the same; remember that all of them are resolved against the inline direction (so they are ignored during intrinsic size computation and resolved later during layout). But there is something special about this too. Nowadays all the browsers have the same behavior, but that was not always the case: not so long ago (before Firefox 61, which was released last June) things worked differently in Firefox than in the rest of the browsers. Again, let’s go to an example:
Example of percentage margins in a containing block with indefinite sizes
In this example the size of the magenta box (the floated div) is the width of the text, 250px in this case. Then the margin is 50% of that size (125px), which reduces the size of the cyan box to 125px too, causing overflow.

But for these cases (percentage-width margins and paddings and an indefinite-width container), Firefox did something extra that was called back-computing the percentages. For that it used something similar to the following formula:

Intrinsic width / (1 - Sum of percentages)

Which for this case would be 250px / (1 - 0.50) = 500px. So it takes 500px as the intrinsic size of the magenta box, and then it resolves the 50% margin against it (250px). Thanks to this there is no overflow, and the margin is 50% of the containing block size.
Example of old Firefox behavior back-computing percentage margins
This Firefox behavior seems really smart and avoids overflow, but the CSSWG discussed it and decided to use the other behavior. The main reason is what happens when you get close to 100%, or go over that value. The size of the box starts to be quite big (with a 90% margin it would be 2500px), and when you reach 100% or more you cannot use that formula, so the size is considered infinity (basically the viewport size in this example) and there is a discontinuity in how percentages are resolved. So after that resolution Firefox changed their implementation and removed the back-computing logic, and thus we now have interoperability in how percentage margins and paddings are resolved.
CSS Grid Layout and percentages
And now we arrive to CSS Grid Layout and how to resolve percentages
in two places: grid tracks and grid gutters.
Of course when the grid container has definite dimensions
there are no problems in resolving percentages against them,
that is pretty simple.
As usual the problem starts with indefinite sizes.
Originally this was not a controversial topic,
percentages for tracks were behaving similar
to percentage for dimensions in regular blocks.
A percentage column was treated as auto for intrinsic size computation
and later resolved against that size during layout.
For percentage rows they were treated as auto.
That does not mean it is very easy to understand
(it actually took me a while),
but once you get it, it is fine and not hard to implement.
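A minimal sketch of that original behavior (class names are illustrative):

```css
/* A floated grid container has indefinite width and height. */
.grid {
  float: left;
  display: grid;
  /* For the intrinsic size computation this column behaves as auto;
     during layout the 50% is then resolved against the resulting
     width of the grid container. */
  grid-template-columns: 50%;
  /* With an indefinite height, this row was originally just
     treated as auto. */
  grid-template-rows: 50%;
}
```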
But when percentage support was added to grid gutters
the big party started.
Firefox was the first browser implementing them
and they decided to use the back-compute technique
explained in the previous point.
Then, when we added support in Chromium and WebKit,
we did something different from Firefox:
we basically mimicked the behavior of percentage tracks.
As the browsers started to diverge, different discussions appeared.
One of the first agreements on the topic was that both
percentage tracks and gutters should behave the same.
That invalidated the back-computing approach,
as it was not going to work well for percentage tracks, which have contents.
In addition, it was eventually discarded even for regular blocks,
as commented earlier,
so it was out of the discussion.
However, the debate moved to how percentage row tracks and gutters
should be resolved: similar to what we do for regular blocks,
or similar to what we do for columns.
The CSSWG decided they would like to keep CSS Grid Layout
as symmetric as possible, and making row percentages
resolve against the intrinsic height would achieve that goal.
So finally the CSSWG resolved to modify how percentage row tracks and gutters
are resolved for grid containers with indefinite height.
The two GitHub issues with the last discussions are:
Let’s finish this point with a pair of examples to understand
the change better comparing the previous and new behavior.
Example of percentage tracks in a grid container with indefinite sizes
Here the intrinsic size of the grid container
is the width and height of the text Testing,
and then the percentage tracks are resolved against that size
for both columns and rows (before, that was only done for columns).
Example of percentage gutters in a grid container with indefinite sizes
In this example we can see the same thing, with the new behavior
both the percentage column and row gaps are resolved
against the intrinsic size.
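The examples above could be reduced to something like this (an illustrative sketch):

```css
/* An inline-grid sized by its contents: both dimensions are
   indefinite, so the intrinsic size is given by the text inside. */
.grid {
  display: inline-grid;
  grid-template-columns: 75%; /* resolved against the intrinsic width */
  grid-template-rows: 75%;    /* new behavior: resolved against the
                                 intrinsic height (previously auto) */
  gap: 10%;                   /* both gaps now resolve against the
                                 corresponding intrinsic size */
}
```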
Change behavior for indefinite height grid containers
For a while all browsers behaved the same
(after Firefox dropped the back-computing approach),
so changing this behavior implied some risk,
as some websites might be affected by it and get broken.
For that reason we added a use counter
to track how many websites were hitting this situation,
i.e. using percentage row tracks in an indefinite height grid container.
The number is not very high, but there is an increasing trend as
Grid Layout is being adopted
(almost 1% of websites are using it today).
The intent was approved, but we were requested to analyze the sites
that were hitting the use counter.
After checking 178 websites, only 8 were broken by this change;
we contacted them to try to get them fixed,
explaining how to keep the previous behavior (more about this in the next point).
You can find more details about this research in this mail.
Apart from that we added a deprecation message in Chromium 69,
so if you have a website that is affected by this
(it does not mean that it has to break, just
that it uses percentage row tracks in a grid container with indefinite height)
you will see the following warning in the console:
[Deprecation] Percentages row tracks and gutters
for indefinite height grid containers
will be resolved against the intrinsic height
instead of being treated as auto and zero respectively.
This change will happen in M70, around October 2018.
for more details.
In addition Firefox and Edge developers have been notified
and we have shared the tests in WPT
so hopefully those implementations will get updated soon too.
Update your website
This change might or might not affect your website.
Even if you get the deprecation warning, it can be the case
that your website still works perfectly fine,
but in some cases it can break quite badly.
The good news is that the solution is really straightforward.
If you find issues in your website and you want to keep the old behavior
you just need to do the following for grid containers with indefinite height:
Change percentages in grid-template-rows or grid-auto-rows to auto.
Modify percentages in row-gap or grid-row-gap to 0.
With those changes your website will keep behaving like before.
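For example, assuming a grid container like the ones described, the fix could look like this:

```css
/* Affected: percentage rows and gaps in a grid container
   with indefinite height. */
.grid {
  display: grid;
  grid-template-rows: 25% 75%;
  grid-row-gap: 10%;
}

/* Fixed: keep the previous behavior explicitly. */
.grid {
  display: grid;
  grid-template-rows: auto auto; /* percentages behaved as auto before */
  grid-row-gap: 0;               /* percentage gaps behaved as zero */
}
```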
In most cases you will realize that the percentages were unneeded
and were not doing anything useful for you,
so you can even drop the declaration completely.
One of these cases would be websites that have grid containers
with just one single row of 100% height (grid-template-rows: 100%);
many of the sites hitting the use counter are like this.
These are not affected by this change,
unless they have extra implicit rows,
but the 100% is not really useful at all there,
so they can simply remove the declaration.
Other sites that have issues are the ones that have, for example,
two rows that sum up to 100% in total (grid-template-rows: 25% 75%).
These percentages were ignored before,
so the contents always fit in each of the rows.
Now the contents might not fit in each row and the results
might not be the desired ones.
Example of overlapping rows in the new behavior
The sites that broke the most usually have several rows
and used percentages for only a few of them, or for all.
And now the rows overflow the height of the grid container
and they overlap other content on the website.
There were cases like this example:
Example of overflowing rows in the new behavior
This topic has been a kind of never-ending story for the CSSWG,
but it finally seems we are reaching an end.
Let's hope this does not go any further
and things settle down after all this time.
We hope that this change is the best solution for web authors
and everyone will be happy with the final outcome.
As usual I could not forget to highlight that all this work
has been done by Igalia
thanks to Bloomberg sponsorship
as part of our ongoing collaboration.
Igalia and Bloomberg
working together to build a better web
Thanks for reading this far; the post ended up being
much more verbose and covering more topics than originally planned,
but I hope it is useful to understand the whole thing.
You can find all the examples from this blog post in this pen,
so feel free to play with them.
Since the beginning of the web we have been used to dealing with physical
CSS properties for different features;
for example we all know how to set a margin in an element using
margin-left, margin-right, margin-top and/or margin-bottom.
But with the appearance of CSS Writing Modes
features, the concepts of left, right, top and bottom
have somehow lost their meaning.
Imagine that you have some right-to-left (RTL) content on your website:
your left is probably the physical right,
so if you usually set margin-left: 100px for some elements,
you might want to replace that with margin-right: 100px.
But what happens if you have mixed left-to-right (LTR) and RTL content
at the same time? Then you will need different CSS properties
to set left or right depending on the direction.
Similar issues are present if you think about vertical writing modes,
maybe left for that content is the physical top or bottom.
CSS Logical Properties and Values
is a CSS specification that defines a set of logical (instead of physical)
properties and values to prevent this kind of issues.
So when you want to set that margin-left: 100px
independently of the direction and writing mode of your content,
you can directly use margin-inline-start: 100px, which will be smart enough
to do the right thing.
Rachel Andrew has a nice blog post
explaining this specification and its relevance in depth.
Example of margin-inline-start: 100px in different combinations of directions and writing modes
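A minimal sketch of the rule shown in the picture above:

```css
/* In LTR content this adds the margin on the physical left;
   in RTL content (e.g. inside <p dir="rtl">) it goes on the
   physical right; in a vertical writing mode it applies to the
   corresponding physical edge (top or bottom). */
p {
  margin-inline-start: 100px;
}
```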
Chromium and WebKit have had support for some of the
CSS logical properties defined by the spec for a long time.
However, they were not using the standard names defined in the specification
but some -webkit- prefixed ones with different names.
For setting the dimensions of an element Chromium and WebKit
have properties like -webkit-logical-width and -webkit-logical-height.
However CSS Logical defines inline-size and block-size instead.
There are also equivalent ones for the minimum and maximum sizes.
These were unprefixed at the beginning of 2017
and have been included in Chromium since version 57 (March 2017).
In WebKit they are still only supported through the prefixed versions.
But there are more similar properties for margins, paddings and borders
in Chromium and WebKit that use start and end for inline direction
and before and after for block direction.
In CSS Logical we have inline-start and inline-end for inline direction
and block-start and block-end for block direction,
which are much less confusing.
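As an illustration of the pattern described above, here are a few of the prefixed names next to their standard equivalents (the exact list of supported prefixed properties varies between Chromium and WebKit):

```css
/* Prefixed (Chromium/WebKit)      Standard (CSS Logical)
   -webkit-logical-width        -> inline-size
   -webkit-logical-height       -> block-size
   -webkit-margin-start         -> margin-inline-start
   -webkit-margin-before        -> margin-block-start
   -webkit-padding-end          -> padding-inline-end  */
.box {
  inline-size: 200px;
  margin-inline-start: 1em;
}
```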
There was an attempt in the past to unprefix these properties
but the work was abandoned and never completed.
These ones were still using the -webkit- prefix
so we decided to tackle them as the first task.
The post has only talked about properties so far,
but the same thing applies to some CSS values;
that is why the spec is called CSS Logical Properties and Values.
For example a very well-known property like float
has the physical values left and right.
The spec defines inline-start and inline-end
as the logical values for float.
However these were not supported yet in Chromium and WebKit,
not even using -webkit- prefixes.
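Once implemented, using the logical float values would look like this (a sketch):

```css
/* float: inline-start floats the image to the physical left
   in LTR content and to the physical right in RTL content. */
img {
  float: inline-start;
}
```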
Firefox used to have some -moz- prefixed properties,
but since Firefox 41
(September 2015) it has been shipping many of the standard
logical properties and values.
Firefox has been using these properties extensively in its own tests,
so having them supported in Chromium will make it easier to share them.
At the beginning of this work, Oriol wrote a document
explaining the implementation plan,
where you can check the status of all these properties
in Chromium and Firefox.
The work on the first part, making the old -webkit- prefixed properties
use the new standard names, has already been completed by Oriol
and is going to be included in the upcoming release of Chromium 69.
The next step was to add support for the new stuff behind an experimental flag.
This work is ongoing; you can check the current status in the latest Canary
by enabling the Experimental Web Platform features flag.
So far Oriol has added support for a bunch of shorthands
and the flow-relative offset properties.
You can follow the work in issue #850004
in the Chromium bug tracker.
We will talk more about this in a future blog post
once this task is completed
and the new logical properties and values are shipped.
Of course testing is a key part of all these tasks,
and web-platform-tests (WPT) repository
plays a fundamental role to ensure interoperability
between the different implementations.
As we have been doing at Igalia lately in all our developments,
we used WPT as the primary place to store all the tests
related to this work.
Oriol has been creating tests in WPT
to cover all these features.
The initial tests were based on the ones already available in Firefox,
modified to adapt to the rest of the things that need to be checked.
As explained before, this is an ongoing task
but we already have some extra plans for it.
These are some of the tasks (in no particular order)
that we would like to do in the coming months:
Complete the implementation of
CSS Logical Properties and Values in Chromium.
This was explained in the previous point
and is moving forward at a good pace.
Get rid of usage of -webkit- prefixed properties
in Chromium source code.
Oriol has also started this task and it is currently a work in progress.
Deprecate and remove the -webkit- prefixed properties.
It is still too early for that, but we will keep an eye on the metrics
and do it once usage has decreased.
Implement it in WebKit too,
first by unprefixing the current properties (which has been already started)
and later continuing with the new things.
It would be really nice if WebKit follows Chromium on this.
Edge also has plans to add support for this spec,
so that would make logical properties and values available
in all the major browsers.
Oriol has been doing a good job here
as part of his Igalia Coding Experience.
Apart from all the new stuff that is landing in Chromium,
he has also been fixing related bugs along the way.
We have just started the WebKit tasks,
but we hope all this work can be part of future Chromium
and Safari releases in the short term.
And that is all for now, we will keep you posted! 😉
Here’s a little timeline of some fun we had with the GNOME master Flatpak runtime last week:
Tuesday, July 10: a bad runtime build is published. Trying to start any application results in error while loading shared libraries: libdw.so.1: cannot open shared object file: No such file or directory. Problem is the library is present in org.gnome.Sdk instead of org.gnome.Platform, where it is required.
Thursday, July 12: the bug is reported on WebKit Bugzilla (since it broke Epiphany Technology Preview).
Saturday, July 14: having returned from GUADEC, I notice the bug report and bisect the issue to a particular runtime build. Mathieu Bridon fixes the issue in the freedesktop SDK and opens a merge request.
Monday, July 16: Mathieu’s fix is committed. We now have to wait until Tuesday for the next build.
Tuesday, Wednesday, and Thursday: we deal with various runtime build failures. Each day, we get a new build log and try to fix whatever build failure is reported. Then, we wait until the next day and see what the next failure is. (I’m not aware of any way to build the runtime locally. No doubt it’s possible somehow, but there are no instructions for doing so.)
Friday, July 20: we wait. The build has succeeded and the log indicates the build has been published, but it’s not yet available via flatpak update.
Saturday, July 21: the successful build is now available. The problem is fixed.
As far as I know, it was not possible to run any nightly applications during this two-week period, except developer applications like Builder that depend on org.gnome.Sdk instead of the normal org.gnome.Platform. If you used Epiphany Technology Preview and wanted a functioning web browser, you had to run arcane commands to revert to the last good runtime version.
This multi-week response time is fairly typical for us. We need to improve our workflow somehow. It would be nice to be able to immediately revert to the last good build once a problem has been identified, for instance.
Meanwhile, even when the runtime is working fine, some apps have been broken for months without anyone noticing or caring. Perhaps it’s time for a rethink on how we handle nightly apps. It seems likely that only a few apps, like Builder and Epiphany, are actually being regularly used. The release team has some hazy future plans to take over responsibility for the nightly apps (but we have to take over the runtimes first, since those are more important), and we’ll need to somehow avoid these issues when we do so. Having some form of notifications for failed builds would be a good first step.
P.S. To avoid any possible misunderstandings: the client-side Flatpak technology itself is very good. It’s only the server-side infrastructure that is problematic here. Clearly we have a lot to fix, but it won’t require any changes in Flatpak.