Planet Igalia

May 25, 2017

Diego Pino

A brief history of IPv4 address space exhaustion

IPv4 address space exhaustion was a hot topic in the 90s, when everyone started to foresee that inevitable future. However, we’re still relying on IPv4 today. So, what has actually happened? Did anyone find a vast range of unused IPv4 addresses locked in a closet? What happened to IPv6?

Reviewing the history of IPv4 address depletion is also reviewing the history of the Internet. Many decisions about the Internet have been made with the goal of solving or mitigating this problem. In this post I walk through the events from the beginning up to today. It’s not intended to be an exhaustive guide, but a recap of the most important events.

8-bit Internet

The Internet has its origin in the ARPAnet, a research network funded by the Advanced Research Projects Agency of the United States Department of Defense.

ARPAnet came to life in 1969, connecting just 4 hosts. The network grew in size over the years, connecting more and more hosts, mainly universities and research centers in the US. In 1981, there were a total of 213 hosts connected.

But back in the days of the ARPAnet, there was no TCP/IP. Its equivalent was NCP (Network Control Protocol). Addresses in NCP were 8-bit numbers, which means each host could be addressed by a simple number such as 10, 23 or 145. Although popular, the ARPAnet was not the only computer network that flourished during the 70s, and there was a need to connect these networks into an inter-network, or internet.

Already in the early 70s, Robert Kahn from DARPA and Vinton Cerf, developer of NCP, started to work on a new protocol that would allow communication across several heterogeneous networks. The proposed protocol was called the Transmission Control Program, first published in 1974 (“A Protocol for Packet Network Intercommunication”). Implementations of the protocol went through 4 major versions. In version 3, the protocol was split in two: Transmission Control Protocol & Internet Protocol. The first TCP/IP v4 draft was published in 1978, but 3 more years passed until the draft became a standard.

On the 1st of January 1983, also known as flag day, the ARPAnet switched from NCP to TCP/IP.

4.3 billion addresses will be enough

One of the novelties TCP/IP introduced was 32-bit addresses. Vinton Cerf has often taken the blame for that decision, but 32-bit addresses seemed very reasonable back in the 70s. In those days, the world’s population was 4.5 billion people and the personal computing revolution hadn’t started yet. A 16-bit address space seemed too small, and anything bigger than 32 bits (4.3 billion addresses) unreasonable and unjustified.
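As a quick sanity check of those figures, here is a tiny Python snippet (my own illustration, not part of the original post):

    # A 32-bit address space holds 2^32 distinct addresses.
    print(2 ** 32)  # 4294967296, i.e. ~4.3 billion
    print(2 ** 16)  # 65536: a 16-bit space would clearly have been too small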

In 1981, another TCP/IP network was created: CSnet (Computer Science Network), funded by the National Science Foundation. In 1983, DARPA decided to split the ARPAnet in two: a public ARPAnet and MILnet. Finally, in 1985, the NSF founded another network, NSFnet (National Science Foundation Network).

NSFnet was the most popular TCP/IP network in the 80s and eventually became the primary backbone of the Internet at that time. By the end of the decade, the Internet was composed of almost 1000 networks (RFC 1118, “The Hitchhikers Guide to the Internet”) and had approximately 3 million users. The ARPAnet ceased operations in 1990, and CSnet followed in 1991.

The first concerns about the scalability of the Internet appeared in the early 90s, even before the Web was invented. RFC 1287 (“Towards the Future Internet Architecture”) is the first RFC to discuss the IP address space exhaustion problem.

One of the first measures to simplify the management of the Internet was the creation of RIRs or Regional Internet Registries in 1992. Before that, the global IP address registry was managed by a single organization, the IANA (Internet Assigned Numbers Authority). Each region was allocated a range of IP addresses. The regions have evolved over time. Today there are 5 RIRs:

  • AFRINIC (Africa).
  • APNIC (Asia-Pacific).
  • ARIN (Canada, many Caribbean and North Atlantic islands, and the United States).
  • LACNIC (Latin America and the Caribbean).
  • RIPE NCC (Europe, Middle East, and parts of Central Asia).

The glorious 90s: the Internet explodes

The World Wide Web debuted in the early 90s, leading to an exponential growth of the Internet. But even before that, there were already concerns about its scalability.

The IETF created the ROAD WG (Routing and Addressing Working Group) to come up with proposals which could help to solve this problem. Some of the proposed solutions were:

  • RFC 1519: “Classless Inter-Domain Routing” (September 1993).
  • RFC 1597: “Address Allocation for Private Internets” (March 1994).
  • RFC 1631: “The IP Network Address Translator (NAT)” (May 1994).

RFC 791 (“Internet Protocol”) defines an IP address as:

Addresses are fixed length of four octets (32 bits). An address begins with a network number, followed by local address (called the “rest” field)

It also defines 3 classes of network addresses:

There are three formats or classes of internet addresses: in class a, the high order bit is zero, the next 7 bits are the network, and the last 24 bits are the local address; in class b, the high order two bits are one-zero, the next 14 bits are the network and the last 16 bits are the local address; in class c, the high order three bits are one-one-zero, the next 21 bits are the network and the last 8 bits are the local address.

Summarizing:

Class | Leading bits | Start Address | End Address     | Network field | Rest field
A     | 0            | 0.0.0.0       | 127.255.255.255 | 8 bits        | 24 bits
B     | 10           | 128.0.0.0     | 191.255.255.255 | 16 bits       | 16 bits
C     | 110          | 192.0.0.0     | 223.255.255.255 | 24 bits       | 8 bits

This scheme is known as classful network.
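To make the classful scheme concrete, here is a small Python sketch of my own (not from the original post) that derives the class of an address from its first octet, following the table above:

    def address_class(address):
        """Return the classful network class of a dotted-quad IPv4 address."""
        first_octet = int(address.split(".")[0])
        if first_octet <= 127:   # leading bit 0
            return "A"
        if first_octet <= 191:   # leading bits 10
            return "B"
        if first_octet <= 223:   # leading bits 110
            return "C"
        return "D/E (multicast/reserved)"

    print(address_class("10.0.0.1"))     # A
    print(address_class("172.16.0.1"))   # B
    print(address_class("192.168.0.1"))  # C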

Classless Inter-Domain Routing defines a variable-length network field for IP addresses which doesn’t depend on their class. This scheme allows two things:

  • To divide a network address into subnetworks, which leads to a more efficient use of the address space.
  • To group networks into supernetworks, which reduces the number of entries in the routing tables.

This latter issue was the main motivation for the creation of CIDR. Before that, a routing table had to contain one entry per network. For instance:

Network address | Gateway
193.1.255.0     | 1.2.3.4
193.1.254.0     | 1.2.3.4

Since 193.1.255.0 and 193.1.254.0 are contiguous networks, an equivalent table could be represented as:

Network address | Gateway
193.1.254.0/23  | 1.2.3.4

Classless Inter-Domain Routing also introduced a new IP address notation, known as CIDR notation, in which an address is represented as a pair {IPv4 address/bit-mask}. The bit mask is a number between 0 and 32 that represents the number of contiguous bits used as a network mask. The address 193.1.254.0/23 is equivalent to 193.1.254.0/255.255.254.0.
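Python’s standard ipaddress module understands both notations and can perform the aggregation from the example above; a small sketch of my own:

    import ipaddress

    # The two contiguous /24 networks from the routing table above
    # collapse into a single /23 supernet.
    nets = [ipaddress.ip_network("193.1.254.0/24"),
            ipaddress.ip_network("193.1.255.0/24")]
    print(list(ipaddress.collapse_addresses(nets)))
    # [IPv4Network('193.1.254.0/23')]

    # CIDR notation and dotted-netmask notation are equivalent.
    print(ipaddress.ip_network("193.1.254.0/255.255.254.0"))
    # 193.1.254.0/23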

Classless Inter-Domain Routing greatly helped to reduce the size of routing tables, as well as to optimize IP address use and simplify IP address allocation.

Another standard that enormously helped to mitigate IPv4 address exhaustion was RFC 1597 (“Address Allocation for Private Internets”).

At its conception, the Internet was designed as a peer-to-peer network where every host was addressable from any other host. Hosts inside private networks that only needed to communicate with other hosts within the same network over TCP/IP were also addressable from the Internet. RFC 1597 explains:

With the proliferation of TCP/IP technology worldwide, including outside the Internet itself, an increasing number of non-connected enterprises use this technology and its addressing capabilities for sole intra-enterprise communications, without any intention to ever directly connect to other enterprises or the Internet itself. The current practice is to assign globally unique addresses to all hosts that use TCP/IP. There is a growing concern that the finite IP address space might become exhausted.

The standard proposed the reservation of 3 blocks, one per network class, for private addresses. Hosts using private addresses are not reachable from the Internet, but can communicate with other peers inside the same intranet.

Class | Start Address | End Address     | Total IP addresses
A     | 10.0.0.0      | 10.255.255.255  | 16,777,216
B     | 172.16.0.0    | 172.31.255.255  | 1,048,576
C     | 192.168.0.0   | 192.168.255.255 | 65,536
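These ranges are so well established that Python’s ipaddress module can check them directly; a quick sketch of my own:

    import ipaddress

    for addr in ("10.1.2.3", "172.16.0.1", "192.168.1.1", "8.8.8.8"):
        print(addr, ipaddress.ip_address(addr).is_private)
    # 10.1.2.3 True
    # 172.16.0.1 True
    # 192.168.1.1 True
    # 8.8.8.8 False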

The last standard that greatly mitigated IPv4 address exhaustion was RFC 1631 (“The IP Network Address Translator (NAT)”).

NAT maps one IP address realm to another. It enables a host using a private address, which is non-routable on the Internet, to borrow the address of another host that does have a public address assigned, thus making the private host addressable on the Internet.

The original NAT proposal came with some experimental implementations that proved it successful and led to its very quick adoption. On the downside, NAT broke the original design of the Internet as a peer-to-peer network.
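Conceptually, a NAT device (more precisely NAPT, its port-translating variant) keeps a translation table so that many private hosts can share one public address. Here is a toy sketch of that idea, my own illustration only; real NAT of course happens at the packet level:

    PUBLIC_IP = "203.0.113.1"  # placeholder public address (documentation range)

    nat_table = {}   # (private_ip, private_port) -> public_port
    next_port = 40000

    def translate_outgoing(private_ip, private_port):
        """Map a private source endpoint to the shared public endpoint."""
        global next_port
        key = (private_ip, private_port)
        if key not in nat_table:
            nat_table[key] = next_port
            next_port += 1
        return PUBLIC_IP, nat_table[key]

    print(translate_outgoing("192.168.0.10", 5555))  # ('203.0.113.1', 40000)
    print(translate_outgoing("192.168.0.11", 5555))  # ('203.0.113.1', 40001)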

Lastly, I think it is also worth mentioning DHCP (RFC 1541, “Dynamic Host Configuration Protocol”). DHCP has its origins in BOOTP, which in turn is an evolution of RARP (RFC 903, “A Reverse Address Resolution Protocol”, June 1984). DHCP didn’t come out as a result of the ROAD WG deliberations, but when used by ISPs it has greatly helped to optimize public address usage.

The birth of IPv6

In addition to the mitigation efforts discussed above, during the early 90s the IETF also started to evaluate whether to develop a new version of IP which could definitively solve the address space problem. With that goal in mind, the IETF created the IPng area (Internet Protocol Next Generation area).

In 1994, the IPng area came up with RFC 1752 (“The Recommendation for the IP Next Generation Protocol”), which encouraged the development of a new successor to IPv4.

The IPng area also created a new working group called ALE (Address Lifetime Expectations). The goal of this group was to determine the expected lifetime of IPv4. If the IPv4 address space was estimated to last many more years, the new version of the Internet Protocol could feature new functionality; on the other hand, if the lifetime was estimated to be short, only the address space exhaustion problem could be handled. The working group estimated that IPv4 address space exhaustion would happen sometime between 2005 and 2011.

That very same year of 1994, the Internet Engineering Steering Group approved the IPv6 recommendation by the IPng area and drafted a proposed standard. In 1995, RFC 1883 (“Internet Protocol, Version 6 (IPv6) Specification”) was published. After several iterations over the proposed standard, an updated document was published in December 1998 (RFC 2460).

Note: Version 5 of the IP protocol was used during the development of an experimental protocol called the Internet Stream Protocol, back in 1979. There was a second revision of this protocol in 1995 (RFC 1819, “Internet Stream Protocol Version 2 (ST2)”), but it never gained traction. To avoid potential confusion, the IETF preferred to skip this number and picked version 6 for the successor of IPv4.

In the years that followed, many hardware vendors and operating system developers began to implement support for IPv6 in their products. A first alpha version of IPv6 was implemented in the Linux kernel as early as 1996, although it remained in experimental status until 2005. Also in 1996, a testbed backbone for IPv6 called 6bone was deployed. The original mission of this backbone was to establish a network to foster the development, testing and deployment of IPv6. The backbone ceased operations in 2006. Another important milestone occurred in 2008, when the IANA added AAAA records for the IPv6 addresses of 6 root name servers, making it possible to resolve domain names using only IPv6.

The decade after the publication of RFC 2460 served as a period of development, testing and refinement of IPv6, as well as adaptation of existing products. Originally, the IETF estimated that massive adoption of the new protocol would happen around 2005, but that never happened. Part of the reason was that many of the new functionalities IPv6 featured, for instance IPsec, were back-ported one way or another to IPv4, so the actual need to make the transition was less urgent, and NAT was working pretty well. The only feature which could not be back-ported was the increase in available address space…

The day the world ran out of IPv4 addresses

On 31st January 2011, the IANA allocated its two remaining top-level address blocks to APNIC. APNIC ran out of IPv4 public addresses some months later. RIPE followed the next year, then LACNIC in 2014 and ARIN in 2015. Today, the only RIR with IPv4 public addresses still available is AFRINIC, but they won’t last long, only until 2018.

The event didn’t catch anyone off-guard; in fact, the dates were in line with the estimates of the ALE working group. Perhaps witnessing the actual IPv4 address depletion in 2011 served as a wake-up call to accelerate IPv6 adoption worldwide. Since 2010, adoption has been constantly increasing, in some cases doubling every year. Today, IPv6 traffic represents 18% of worldwide Internet traffic according to Google. But that’s the world average; the truth is that IPv6 adoption is uneven across countries. Belgium ranks first, with 47% of its traffic over IPv6, while in many other countries, such as Italy or Spain, the IPv6 roll-out hasn’t even started yet.

And that’s all for now. In a future article I will cover IPv6 adoption and the strategies ISPs are implementing to complete the transition from an already exhausted IPv4 address space to IPv6.

May 25, 2017 06:30 AM

May 17, 2017

Antonio Gomes

Chromium Mus/Ozone update (H1/2017): wayland, x11

Since January, Igalia has been working on a project whose goal is to make the latest Chromium Browser able to run natively on Wayland-based environments. The project has various phases, requires us to carve out existing implementations and align our work with the direction Chromium’s mainline is taking.

In this post I will provide an update on the progress we have made over 2017/H1, as well as our plans for what comes next.

In order to jump straight to the latest results section (including videos) without the details, click here.

Background

In 2016/Q4, my fellow Igalian Frédéric Wang and I ran a warm-up project to check the status of the existing Wayland support in Chromium’s mainline repository, and estimate how much work was needed to get the full (and latest) Chromium browser running on Wayland.

As part of this warm-up we were able to build and launch ChromeOS’s Chrome for both desktop and embedded Linux distributions, featuring either X11 or Wayland. Automotive Grade Linux running on Renesas’ R-Car M3 board is an example of the embedded environments we tested.

Mus+ash on LinuxOS (Nov/2016)

Although this was obviously not our end goal (some undesirable ChromeOS widgets were visible at the bottom), it allowed us to verify the overall performance of the build, and experiment with things a bit. Here is a brief summary of the most relevant findings:

  • It is possible to build mus+ash for various platforms including Linux, ChromeOS and Windows. On Linux specifically, it is possible to make off-device ChromeOS builds of mus+ash, and run it on desktop Linux for testing purposes. A more minimalistic Window Manager version is also available in //mash/simple_wm, and should run on regular Linux builds too.

  • mus+ash can be built with Ozone enabled. This means that it can run with the various backends Ozone has. It is worth saying that the upstream focus seems to be the Ozone/DRM-GBM backend, for ChromeOS.

  • Ozone itself has morphed over time from an abstraction layer underneath the Aura toolkit into a layer underneath Mus.

  • Lastly, we published some content worth reading.

    2017 developments

    At the beginning of this new phase of the project, we knew we needed to work on two different levels in order to have the Chromium browser running on desktop Linux, ideally without functionality loss compared to the stock Chromium browser on X11: both Mus and Ozone needed to support ‘external window’ mode.

    For the sake of completeness, external window mode is the term we chose for a regular desktop application on Linux, where the host Window Manager takes care of windowing actions like maximizing, minimizing, restoring and fullscreening the application window, and the application itself reacts to content size changes accordingly. Analogously, when we say an application runs in internal window mode, it runs within the (M)ash shell environment, the builtin Window Manager that powers ChromeOS builds. Applications in this mode do not interact with the host WM.

    A huge advantage of the way mus+ash is implemented is that the Chrome browser itself already works as it ought to in non-ChromeOS Mus-based environments: whether Mus runs in internal or external window mode, Chrome will work just like Chrome on a Linux desktop ought to.

    That being said, we identified the following set of tasks, on both Ozone and Mus sides.

    Ozone tasks:

    • Extend Ozone so that both Window Manager provided window decorations (like a regular X11 window on Ubuntu) and Chromium’s builtin window decorations work flawlessly. On Wayland, window decorations can be provided either by the client side (the application) or by the Wayland server (the compositor). The fact that Weston does not provide window decorations by default forces us to support Chromium’s builtin decorations for good.
    • For Chromium’s builtin window decorations, add support for basic windowing functionality like maximize, minimize, restore and fullscreen, as well as window dragging and resizing.
    • Add support for “window close”. In internal window mode, there is no concept of window closing, because the outer/native Ozone window represents the Window Manager display, which is not supposed to get closed. In external window mode, windows can be closed freely, as per the needs of the user.
    • Add support for multi-window browsing. Each browser window should be backed by its own acceleratedWidget. This also includes being able to draw widgets that on stock Linux/X11 builds use native windows: tooltips and (nested and context) menus.
    • Handle keyboard focus activation when switching windows. Again, in internal window mode the outer/native Ozone window is unique and represents the Window Manager display, never losing or gaining focus; focus switching of inner windows is handled by mus+ash. In external window mode, users can open as many browser windows as they want, and focus switches at the Window Manager level should be reflected in the application focus.

    Mus tasks:

    • Fix assumptions that make sense for mus+ash on ChromeOS only, e.g. the fact that a display::Display instance always mapped to a single ui::ws::Display instance.
    • Adjust the ownership model: some Mus objects have slightly different ownership in external window mode: ws::Display, ws::WindowManagerState, ws::WindowManagerDisplayRoot and ws::WindowTree.
    The plan

    After meeting with rjkroege@ at BlinkOn 7, we defined a high-level plan to tackle the project. These were the main action points:

    1) Extend the mus_demo to work in ‘external window’ mode.
    2) Start fixing 1:1 assumptions in the code, e.g. display::Display ↔ ui::ws::Display.
    3) Extend Mus to work in ‘external window’ mode.
    4) Extend Ozone to work in ‘external window’ mode.
    5) Make the code that handles the existing --mus command line parameter non-ChromeOS specific.

    With these 5 high-level steps done, we would be able to get Chrome/Mus running on desktop Linux, on the various Ozone backends.

    The action
    Mus Demo

    We were able to get mus_demo working in ‘external window’ mode by making use of the existing WindowTreeHostFactory API.

    1:1 assumptions

    Although WindowTreeHostFactory was in place for the creation of WindowTreeHost instances, both Mus and Ozone still had assumptions that only applied in a ChromeOS context. Googler kylechar@ jumped in and fixed some of them, helping out with our effort.

    Mus and Ozone carve out

    In order to get the 3rd and 4th steps going, we decided to switch our main development repository to a GitHub fork, so that we could expedite reviews and progress. Given Igalia’s excellence in carrying downstream forks of large projects forward, we established a contribution process and a rebase strategy that allow us to move at a good pace while staying as close as possible to Chromium’s tip of trunk.

    These are some of the main changes in our downstream repository:

  • In this new set up, ui::ws::WindowTreeHostFactory::CreatePlatformWindow can create as many WindowTreeHost / ui::ws::Display instances as needed. ui::ws::Display triggers the creation of PlatformDisplay objects, which hold Ozone window handles. Hence, every Chromium window (and some browser widgets) is backed by its own acceleratedWidget.

  • In mus+ash, some operations are accomplished through cooperation between the Mus and Ash sides, or the Mus and Aura/Mus sides. For example, setting “frame decorations” values in mus+ash goes through the following path:

    1) ash::mus::WindowManager gets the frame decoration values, as per the “material design” in use, and passes them to aura::WindowTreeClient::SetFrameDecorationValues.
    2) WindowTree::WmSetFrameDecorationValues
    3) WindowManagerState::SetFrameDecorationValues
    4) UserDisplayManager::OnFrameDecorationValuesChanged
    5) ScreenMus::OnDisplays()
    6) These values are later used to draw the “non client frame” area of the Browser window, the frame that contains the Web contents area.

    On Chrome/Mus on desktop Linux, we skip this round trip by using the same “non client frame view” as stock Linux/X11 Chrome: OpaqueBrowserFrameView.

  • In mus+ash, all Browser widget creation takes the DesktopNativeWidgetAura path. This implies a new WindowPort and a new WindowTreeHost instance per widget. Adding support for this on the Mus and Ozone sides would require lots of work and refactoring. Hence, we again decided to use the stock Linux/X11 flow: for widgets currently backed by a native window (tooltips, menus) we use the NativeWidgetAura path, and for other widgets (bookmark and zoom in/out banners, the URL completion window, the status bubble, etc.) we also use NativeWidgetAura. This choice also made extending Ozone accordingly simpler.

    Status and next steps

    We have reached a point where we can show Chrome Ozone/Mus on desktop Linux, using both the X11 and Wayland backends, and here is how it looks today:

    Wayland:

    X11:

    The --mus and --ozone-platform={name} command line parameters control the Chrome configuration. Please note that the same Chrome binary is used.

    Some of our next steps for Chromium Mus/Ozone are:

    • Continue to fix windowing features (namely window resizing and dragging, as well as drag and drop) when Chromium’s builtin window decorations are used.
    • Provide updated yocto builds on Igalia’s meta-browser fork.
    • Support newer shell protocols like xdg-shell v6, supported by Fedora 25.
    • Ensure no feature losses when compared to stock Chromium X11/Linux.
    • Ensure there are no performance penalties when compared to stock Chromium X11/Linux.
    • Start to upstream some of the changes.

    We are also considering providing prebuilt binaries, so that earlier adopters can test the status.

    This project is sponsored by Renesas Electronics …


    … and is being performed by Igalian hackers Maksim Sisov and Antonio Gomes (me) on behalf of Igalia, with Frédéric Wang as an emeritus contributor.


    by agomes at May 17, 2017 01:26 PM

    May 09, 2017

    Víctor Jáquez

    GStreamer Spring Hackfest 2017 & GStreamer 1.12

    Greetings earthlings!

    Two things:

    One

    GStreamer 1.12 is out! And with it, gstreamer-vaapi. Among other new features and improvements we have:

    • GstVaapiDisplay now inherits from GstObject, so the VA display logging messages are better and tracing of context sharing is more readable.
    • When uploading raw images into a VA surface, derived images (via vaDeriveImage) are now tried first, improving upload performance when possible.
    • The decoders and the post-processor can now push dmabuf-based buffers downstream under certain conditions. For example:
      GST_GL_PLATFORM=egl gst-play-1.0 video-sample.mkv --videosink=glimagesink
    • Refactored the wrapping of VA surfaces into GStreamer memory, adding locking when mapping and unmapping, and many other fixes.
    • Now vaapidecodebin loads vaapipostproc dynamically. It is possible to avoid its usage with the environment variable GST_VAAPI_DISABLE_VPP=1.
    • Regarding encoders: they have primary rank again, since they can now discover at run-time the color formats they can use for upstream raw buffers, and caps renegotiation is now possible. Also, the encoders push encoding info downstream via tags.
    • About specific encoders: a constant bit-rate encoding mode was added for VP8, and the H265 encoder now handles the P010_10LE color format.
    • Regarding decoders, the flush operation has been improved: the internal VA decoder is no longer recreated at each flush. There are also several improvements in the handling of H264 and H265 streams.
    • VAAPI plugins try to create their own GstGL context (when available) if they cannot find it in the pipeline, to figure out what type of VA Display they should create.
    • Regarding vaapisink for X11: if the backend reports that it is unable to correctly render the current color format, an internal VA post-processor is instantiated (if available) to convert the color format.

    And

    Two

    GStreamer Spring Hackfest 2017 is in less than two weeks!

    It is going to be held at the Igalia premises in A Coruña. Keep an eye on it 😉

    by vjaquez at May 09, 2017 11:14 AM

    Jacobo Aragunde

    Browsers in the 16th GENIVI AMM

    I’m currently in Birmingham, ready to attend the 16th GENIVI All-members meeting!

    We will be showcasing the work we have been doing lately to integrate Chromium in the GENIVI platform. I’m also giving two presentations:

    • Integration of the Chromium Browser in the GENIVI Platform, where I will present the status of the integration of the Chromium browser in the GDP and the plan for the next months. Slides available here.
    • Update on the Open Source Browser Space, where I will provide the latest news on the ever-changing world of Open Source browsers, and in particular regarding browsers supporting Wayland natively. Slides available here.

    See you there!

    by Jacobo Aragunde Pérez at May 09, 2017 09:59 AM

    May 03, 2017

    Javier Fernández

    Can I use CSS Box Alignment ?

    As a member of the Igalia’s team implementing the CSS Grid Layout feature for Blink and WebKit rendering engines, I’m very proud of what we’ve achieved from our collaboration with Bloomberg. I think Grid is a very interesting feature for the Web Platform and we still can’t see all its potential.

    One of my main assignments on this project is to implement the CSS Box Alignment spec for Grid. It’s obvious that alignment is an important feature for many cases in web development, but I consider it key for a layout model like the one Grid provides.

    We recently announced that the patch implementing the self-baseline alignment landed in Blink. This was the last alignment functionality pending implementation, so now we can consider the spec complete for Grid. However, implementing a feature like CSS Box Alignment has an additional complexity in the form of interoperability issues.

    Interoperability is always a challenge when implementing any new specification, but I think it’s especially problematic for a feature like this, for several reasons:

    • The feature applies to several layout models.
    • The CSS Flexible Box specification already defined some of the CSS properties and values.
    • Once a new layout model implements the new specification, Flexbox is forced to follow it as well.

    I admit that the editors of this new specification document made a huge effort to keep backward compatibility with the Flexbox spec (which caused quite a few implementation challenges). However, the current Flexbox implementation of the CSS properties and values that both specs have in common would become a Partial Implementation with regard to the new spec.

    Recently, Florian Rivoal found out that this partial implementation of the CSS Box Alignment feature prevents using the cascade or @supports to provide customized fallbacks for the unimplemented alignment properties.

    What does Partial Implementation actually mean?

    As anybody can imagine, implementing a fancy web feature takes a considerable amount of time. During this period, the feature passes through several phases with different exposure to end users. It’s precisely because of the importance of end-user feedback that these new web features are shipped behind experimental flags. This workflow is especially useful not only for browser developers but for the spec editors as well.

    For this reason, the W3C CSS Working Group defines a general policy to manage Partial Implementations, which can be summarized as follows:

    So that authors can exploit the forward-compatible parsing rules to assign fallback values, CSS renderers must treat as invalid (and ignore as appropriate) any at-rules, properties, property values, keywords, and other syntactic constructs for which they have no usable level of support. In particular, user agents must not selectively ignore unsupported property values and honor supported values in a single multi-value property declaration: if any value is considered invalid (as unsupported values must be), CSS requires that the entire declaration be ignored.

    This policy is added to every spec as part of its Conformance appendix, as is the case for the CSS Box Alignment specification document. However, the interpretation of the Partial Implementation policy is far from trivial, especially for a feature like CSS Box Alignment. The most restrictive interpretation would imply the following:

    • Any new CSS property of the new spec should be declared invalid until it is supported by all the layout models it applies to.
    • Any already existing CSS property with new values defined in the new spec should be declared invalid until all these new values are implemented in all the layout models the property applies to.
    • Browsers shouldn’t ship (without experimental flags) any CSS property or value until it’s implemented in all the layout models it applies to.

    When we discussed this at Igalia, we applied a less restrictive interpretation, based on the assumption that the spec actually defines several features which can be implemented and shipped independently, obviously avoiding any browser interoperability issues. As has always been in the nature of the specification, keeping backward compatibility with Flexbox implementations has been a must, since its spec already defines some of the CSS properties now present in the new spec.

    The issue filed by Florian was discussed during the Tokyo F2F Apr 19-21 2017 meeting, where it was agreed to add a new section in the CSS Box Alignment spec to clarify how implementors of this feature should manage Partial Implementations:

    Since it is expected that support for the features in this module will be deployed in stages corresponding to the various layout models affected, it is hereby clarified that the rules for partial implementations that require treating as invalid any unsupported feature apply to any alignment keyword which is not supported across all layout modules to which it applies for layout models in which the implementation supports the property in general.

    The new text makes the Partial Implementation policy less restrictive and, even though it contradicts our interpretation of independent alignment features per layout model, it only affects models which already implement any of the CSS properties defined in the new spec. In this case, only Flexbox has to be updated to implement the new values defined for its alignment-related CSS properties: align-content, justify-content and align-self.

    Analysis of the implementation and shipment status

    Before thinking about how to address the Partial Implementation issues, I decided to analyze the status of the CSS Box Alignment feature in the different browsers. If you are interested in the full analysis, it’s available here. The following table shows the implementation status of the new spec in the Safari, Chrome and Firefox browsers, using a color code: unimplemented, only grid, or both (flex and grid):

    If you want to try out some examples of these Partial Implementation issues, just try flexbox vs grid cases with some of these alignment values: align-items: center, align-self: left, align-content: start or justify-content: end.

    The 3 major browsers analyzed have shipped most, if not all, of the CSS Box Alignment spec implemented for CSS Grid Layout (since Chrome 57, Safari 10.1 and Firefox 52). Firefox is the browser which has implemented and shipped the widest support for CSS Flexible Box.

    We can extract the following conclusions:

    • The 3 browsers analyzed have shipped Partial Implementations of the CSS Box Alignment specification, although Firefox’s is almost complete.
    • The 3 browsers have shipped a Grid feature that completely supports the new CSS Box Alignment spec, although Safari still misses the self-baseline values.
    • The 3 implementations of the new CSS Box Alignment specification are backward compatible with the CSS Flexible Box specification, even though for some properties they implement a lower level of the spec (e.g. self-baseline keywords).

    Work in progress

    Although we are still evaluating the problem together with the Blink and WebKit communities, at Igalia we are already working on improving the situation. We all agree on the damage to the Web Platform that these Partial Implementation issues are causing, as Florian pointed out initially, so that’s a good starting point. There are bug reports on both WebKit and Blink and we are already providing patches for some of them.

    We are still discussing the best approach, but our bet would be to request an intent-to-implement-and-ship for a CSS Box Alignment (for flexbox layout) feature. This approach fits naturally in our initial plan of implementing several independent features from the alignment specification. It seems that this is what Firefox is doing, having already announced the implementation of CSS Box Alignment (for block layout).

    Thanks to Bloomberg for sponsoring this work, as part of the efforts that Igalia has been doing all these years pursuing a better and more open web.

    Igalia & Bloomberg logos

    by jfernandez at May 03, 2017 08:19 PM

    Carlos García Campos

    WebKitGTK+ remote debugging in 2.18

    WebKitGTK+ has supported remote debugging for a long time. The current implementation uses WebSockets for the communication between the local browser (the debugger) and the remote browser (the debug target or debuggable). This implementation was very simple and, in theory, you could use any web browser as the debugger because all the inspector code was served over the WebSockets connection. I said in theory because in practice this was not always so easy, since the inspector code uses newer JavaScript features that are not yet implemented in other browsers. The other major issue with this approach was that the communication between debugger and target was not bi-directional, so the target browser couldn’t notify the debugger about changes (like a new tab opening, a navigation, or that it is about to close).

    Apple abandoned the WebSockets approach a long time ago and implemented its own remote inspector, using XPC for the communication between debugger and target. They also moved the remote inspector handling to JavaScriptCore, making it available to debug JavaScript applications without a WebView too. In addition, the remote inspector is also used by Apple to implement WebDriver. We think that this approach has a lot more advantages than disadvantages compared to the WebSockets solution, so we have been working on making it possible to use this new remote inspector in the GTK+ port too. After some refactoring to separate the cross-platform implementation from the Apple one, we could add our implementation on top of that. This implementation is already available in WebKitGTK+ 2.17.1, the first unstable release of this cycle.

    From the user point of view there aren’t many differences, with the WebSockets we launched the target browser this way:

    $ WEBKIT_INSPECTOR_SERVER=127.0.0.1:1234 browser
    

    This hasn’t changed with the new remote inspector. To start debugging we opened any browser and loaded

    http://127.0.0.1:1234

    With the new remote inspector we have to use any WebKitGTK+ based browser and load

    inspector://127.0.0.1:1234

    As you have already noticed, it’s no longer possible to use just any web browser: you need a recent enough WebKitGTK+ based browser as the debugger. This is because of the way the new remote inspector works; it requires a frontend implementation that knows how to communicate with the targets. In the case of Apple, that frontend implementation is Safari itself, which has a menu with the list of remote debuggable targets. In WebKitGTK+ we didn’t want to force using a particular web browser as debugger, so the frontend is implemented as a builtin custom protocol of WebKitGTK+. So, loading inspector:// URLs in any WebKitGTK+ WebView will show the remote inspector page with the list of debuggable targets.

    It looks quite similar to what we had, just a list of debuggable targets, but there are a few differences:

    • A new debugger window is opened when the inspect button is clicked, instead of reusing the same web view. Clicking on inspect again just brings the window to the front.
    • The debugger window loads faster, because the inspector code is not served by HTTP, but locally loaded like the normal local inspector.
    • The target list page is updated automatically, without having to manually reload it when a target is added, removed or modified.
    • The debugger window is automatically closed when the target web view is closed or crashed.

    How does the new remote inspector work?

    The web browser checks for the presence of the WEBKIT_INSPECTOR_SERVER environment variable at start up, the same way it was done with the WebSockets. If present, the RemoteInspectorServer is started in the UI process, running a DBus service listening on the IP and port provided. The environment variable is propagated to the child web processes, which create a RemoteInspector object and connect to the RemoteInspectorServer. There’s one RemoteInspector per web process, and one debuggable target per WebView. Every RemoteInspector maintains a list of debuggable targets that is sent to the RemoteInspectorServer when a new target is added, removed or modified, or when explicitly requested by the RemoteInspectorServer.
    When the debugger browser loads an inspector:// URL, a RemoteInspectorClient is created. The RemoteInspectorClient connects to the RemoteInspectorServer using the IP and port of the inspector:// URL and asks for the list of targets that is used by the custom protocol handler to create the web page. The RemoteInspectorServer works as a router, forwarding messages between RemoteInspector and RemoteInspectorClient objects.

    by carlos garcia campos at May 03, 2017 03:43 PM

    May 02, 2017

    Manuel Rego

    Adding <code>:focus-within</code> selector to Chromium

    Similar to what I wrote for caret-color in January, this is a blog post about the process of implementing a new feature in Chromium/Blink. This time it’s the turn of the :focus-within pseudo-class from the Selectors 4 spec; I’ll talk about the different things that happened during development.

    :focus-within pseudo-class

    This is a new selector that allows you to modify the style of an element when the element itself or any of its descendants is focused. It’s similar to the :focus selector, but it also applies to ancestors, somehow working like :active and :hover.

    It’s pretty simple to understand with an example:

    <style>
      form:focus-within {
        background-color: green;
      }
    </style>
    <form>
      <input />
    </form>

    In this example, when the input is focused the form background will switch to green.

    Intent to ship

    Although the specification is still in the Editor’s Draft (ED) state, it has already been implemented in Firefox 52 and Safari 10.1, so it seemed like a good candidate to be added to Chromium too.

    For that you need to send an intent mail to blink-dev. This seemed like something small and simple enough and, after investigating a little bit about the feature, I decided to send the mail: Intent to Implement and Ship: CSS Selectors Level 4: :focus-within pseudo-class.

    But here the first problems arose…

    Issues on the spec

    At first sight you might think this is a very simple feature, but the Web Platform is complex and has many things interacting with each other.

    In this case, Rune Lillesveen promptly detected an issue in the spec text, related to the usage of this selector (and also :active and :hover) with Shadow DOM. The old text of the spec said:

    An element also matches :focus-within if one of its shadow-including descendants matches :focus.

    It seemed the spec was ready regarding Shadow DOM, but it was not right. This can be quite tricky to understand, but if you’re interested take a look at the following example:

    <div id="shadowHost">
      <input />
    </div>
    <script>
      shadowHost.attachShadow({ mode: "open"}).innerHTML =
        "<style>" +
        "  #shadowDiv:focus-within { border: thick solid green; }" +
        "</style>" +
        "<div id='shadowDiv'>" +
        "  <slot></slot>" +
        "</div>";
    </script>

    Just in case you don’t understand this example, the final result is that the input element gets inserted into the <slot> tag (this is just a quick and dirty explanation about this particular Shadow DOM example).

    The flat tree for this example would be something like this:

    <div id="shadowHost">
      #shadow-root
      <div id="shadowDiv">
        <slot>
          <input />
        </slot>
      </div>
    </div>

    The issue here is that when you focus the input, as it’s now inside the <slot> tag, you’d expect that the shadowDiv has a green border. However, the input is not a shadow-including descendant of the shadowDiv. The spec should talk about the descendants in the flat tree instead.

    The issue was reported to the CSS WG GitHub repository and fixed using the following prose:

    An element also matches :focus-within if one of its descendants in the flat tree (including non-element nodes, such as text nodes) matches the conditions for matching :focus.

    Implementing :focus-within

    Once the spec issue got resolved, the intent was approved, so I had the green light to move forward with the implementation.

    The patch to support it was mostly boilerplate code required to add a new selector in Blink. Most of it does something very similar to what :focus already does, but then we have the interesting part: a loop through the ancestors of the element using the flat tree:

    // Walk up the ancestor chain in the flat tree, updating the
    // :focus-within flag on each node and notifying it of the change.
    for (ContainerNode* node = this; node;
         node = FlatTreeTraversal::Parent(*node)) {
      node->SetHasFocusWithin(received);
      node->FocusWithinStateChanged();
    }

    What about tests?

    Of course you need tests for any change in Blink; in this case I was lucky enough, as the W3C Web Platform Tests (WPT) repository already had a few tests for this new selector.

    I imported these tests (not without some unrelated issues) into Blink and verified that my patch passed them (including Mozilla tests that were already upstreamed). On top of that, I checked the tests in WebKit repository, as they have already implemented the feature and upstreamed one of them that was checking some nice combinations. And finally, I also wrote a few more tests to cover more situations (like the spec issue described above).

    Focus and display:none

    During the review, Rune found another controversial topic. The question is what happens to a focused element when it’s marked as display: none. At first glance, you would think that the element should lose focus, and you’d be right (the HTML spec has a rule specifically covering this case).

    But here we have to deal with an interoperability issue, because the only engine currently following this rule is Blink. There are bug reports in the rest of the browsers, and they seem to acknowledge the issue but there is no activity to fix this at this point. If you are interested in more details, all of them are linked from Chromium bug #491828.

    If you’re using the :focus selector to change, for example, the background of an input, it’s not very important what happens when that input gets display: none and disappears. You don’t care about the background of something that you’re not seeing anymore. However, with :focus-within this issue is more noticeable. Imagine that you’re changing the background of a form when any of its inputs is focused. If the focused input is marked with display: none, you won’t have anything focused in the form, so its background should change back, but that only happens in Chromium right now.

    Common ancestor strategy

    The initial patch supporting :focus-within landed in time for Chrome 59, but it was implemented behind an experimental flag. The main reason was that it still needed some extra work before being ready to be enabled by default.

    One of those things was related to style recalculations: the initial implementation was causing more recalculations than required.

    Let’s use a new example:

    <style>
      *:focus-within {
        background-color: green;
      }
    </style>
    <form>
      <ul>
        <li id="li1"><input id="input1" /></li>
        <li id="li2"><input id="input2" /></li>
      </ul>
    </form>

    What happens when you move the focus from input1 to input2?

    Let’s see this step by step with the initial patch:

    1. Initially input1 is focused, so this element and all its ancestors have the :focus-within flag (all of them will have a green background); that includes input1, li1, <ul> and <form> (actually even <body> and <html>, but let’s ignore that for this explanation).
    2. Then, when we move to input2, the first thing that happens is that the previously focused element, in this case input1, loses the focus. At that point we go through the ancestor chain removing the :focus-within flag from input1, li1, <ul> and <form>.
    3. Now input2 is actually focused, and we go again through the ancestor chain, adding the flag to input2, li2, <ul> and <form>.

    As you can see, we’re removing and re-adding the flag on the <form> and <ul> elements when it’s not actually needed, as they end up in the same state.

    What the new version changes is that in step (2) it looks for the common ancestor between the element losing the focus and the one gaining it. In this case, the common ancestor between input1 and input2 would be the <ul>. So when walking the ancestor chain to add/remove the :focus-within flag, it stops at the common ancestor and leaves it (and all its ancestors) unmodified. This way we save style recalculations.

    Now in step (2) only input1 and li1 get the flag removed, and in step (3) only input2 and li2 get it added. The other elements, <ul> and <form>, remain untouched.
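    To make the strategy concrete, here is a toy sketch of the common-ancestor walk, in Python for brevity. This is my own illustration of the idea, not Blink’s actual code:

    class Node:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent
            self.has_focus_within = False

    def ancestor_chain(node):
        """The node itself plus all its ancestors, up to the root."""
        chain = []
        while node is not None:
            chain.append(node)
            node = node.parent
        return chain

    def move_focus_within(old_focused, new_focused):
        new_chain = ancestor_chain(new_focused)
        # First node in the old chain that also contains the new element.
        common = next(n for n in ancestor_chain(old_focused) if n in new_chain)
        # Clear the flag on the old chain, stopping at the common ancestor...
        for node in ancestor_chain(old_focused):
            if node is common:
                break
            node.has_focus_within = False
        # ...and set it on the new chain, stopping at the same point.
        for node in new_chain:
            if node is common:
                break
            node.has_focus_within = True

    # The example above: form > ul > (li1 > input1, li2 > input2).
    form = Node("form")
    ul = Node("ul", form)
    li1, li2 = Node("li1", ul), Node("li2", ul)
    input1, input2 = Node("input1", li1), Node("input2", li2)
    for n in ancestor_chain(input1):  # input1 starts focused
        n.has_focus_within = True
    move_focus_within(input1, input2)  # ul and form are never touched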

    And even more things…

    Taking advantage of this work on Chromium, I realized that WebKit was not following the spec in the flat tree case. So I imported the WPT tests into WebKit and made a one-liner patch to use the flat tree in WebKit too.

    Adding a new selector might seem like a simple task, but let me show you some numbers about the commits on the different repos related to all this work:

    And a few more might come as I’m still doing a few modifications on the tests so we can use them in both Blink and WebKit without issues.

    Use cases

    Now everything has landed and :focus-within will be available by default starting in Chrome 60. So it’s time to start using it.

    I’ve created a simple demo about what you can do with it, but probably you can think of much cooler stuff.

    :focus-within demo

    This new selector has an important impact on making the Web more accessible, especially for keyboard users. For example, if you only use :hover you’re leaving out a chunk of your user base: the ones using keyboard navigation. Now you can easily combine it with :focus-within, avoiding this kind of problem.

    Again, I’ve crafted a typical menu using :hover and :focus-within; take a look at how keyboard navigation works.

    Use keyboard navigation on a :focus-within menu

    Note that there’s a Firefox bug preventing this last example from working there.

    Thanks!

    As usual I’ll finish the post with the acknowledgements section. The development of this new pseudo-class has been done by Igalia sponsored by Bloomberg as part of our ongoing collaboration.

    Igalia and Bloomberg working together to build a better web

    On top of that, I have to thank Florian Rivoal for helping with the test reviews on WPT, and especially Rune Lillesveen for all his work and help during the whole process.

    May 02, 2017 10:00 PM

    April 28, 2017

    Frédéric Wang

    MathZilla collection ported to WebExtensions

    MathZilla is a collection of MathML-related add-ons for Mozilla applications. It provides nice features such as forcing native MathML rendering (e.g. on Wikipedia), using Web fonts to render MathML or providing a context menu item to copy math formulas into the clipboard.

    Initially written as a single XUL overlay extension (with even binary code for the LaTeX-to-MathML converter), it grew into a collection of restartless add-ons using bootstrapped or SDK-based extensions, following the evolution of Mozilla’s recommendations. Also, SDK-based extensions were first generated using a Python program called cfx, before Mozilla recommended switching to a JS-based replacement called jpm.

    Mozilla announced some time ago that they would transition to the WebExtensions format. On the one hand this sounds bad, because developers have to re-write their legacy add-ons again and actually make sure that the transition is even possible and doesn’t break anything. On the other hand, it is good for long-term interoperability, since e.g. Chromium browsers or Microsoft Edge support that format. My colleague Michael Catanzaro also mentioned in a recent blog post that WebExtensions are being considered for Epiphany too. It is not clear what Mozilla’s plan is for Thunderbird or SeaMonkey, but hopefully they will use that format too (in the past it was suggested that I make the MathZilla add-ons compatible with SeaMonkey).

    Recently, Mozilla announced their plans for Firefox 57, which is basically to allow only add-ons written as WebExtensions. This means I had to re-write the MathZilla add-ons again, or they would stop working at the end of the year. In general, I believe the features have been preserved, although there might be some small behavior changes or minor bugs due to the WebExtensions format. Please check the GitHub bug trackers and release notes for known issues, and report any other problems you find. Finally, I reorganized the git repositories and add-on names a bit. Here is the updated list (some add-ons are still being reviewed by Mozilla):

    • MathML Fonts (~2300 users) - Provide MathML fonts as Web fonts, which is useful when they can not be installed (e.g. Firefox for Android).
    • Native MathML (~1400 users) - Force MathJax/KaTeX/MediaWiki to use native MathML rendering.
    • MathML Copy (~500 users) - Add context menu items to copy a MathML formula or other annotations attached to it (e.g. LaTeX) into the clipboard.
    • TeXZilla (~500 users) - Add-on giving access to TeXZilla, a Unicode TeX-to-MathML converter.
    • MathML Font Settings (~300 users) - Add context menu items to configure MathML font settings. Note that in recent Mozilla versions the advanced font preferences menu allows configuring “Fonts for Mathematics”.
    • Presentation MathML Polyfill (~200 users) - Add support for some advanced presentation MathML features (currently using David Carlisle’s “mml3ff” XSLT stylesheet).
    • Content MathML Polyfill (~200 users) - Add support for some content MathML features (currently using David Carlisle’s “ctop” XSLT stylesheet).
    • MathML Zoom (~100 users) - Allow zooming of mathematical formulas.
    • MathML View Source (experimental) - This is a re-writing of Mozilla’s ‘view MathML source’ feature with better syntax highlighting and serialization. The idea originated from this thread.
    • Image To MathML (experimental) - Try and convert images of mathematical formulas into MathML. It has not been ported to WebExtensions yet and I do not plan to do it in the short term.

    As a conclusion, I’d like to thank all the MathZilla users for their kind comments, bug reporting and financial support. The next step will probably be to ensure addons work in more browsers but that will be for another time ;-)

    April 28, 2017 10:00 PM

    April 26, 2017

    Manuel Rego

    10 years at Igalia

    Monday 9th April 2007… that was my first day working at Igalia, a really important day in my life. 😊

    How I met Igalia

    Just after finishing my Computer Science degree in Ourense, I had the chance to start a 6-month internship at PSA Peugeot Citroën in Vigo. That was the first time I heard about Igalia, and it was like a dream. First, it was a free software company based in Galicia; I was a free software lover and had been using it extensively since my first years at University (despite being a rare exception there, where most teachers still used proprietary software; hopefully things have improved now). Another unbelievable point was that it had a flat structure and you could become co-owner of the company a few years after you entered. During that internship Igalia posted some job offers, so I decided to apply, and I was happily selected to join the company. 😆

    Joining Igalia was an awesome experience; apart from the technical work (Igalia has contributions to lots of free software projects that you use every day), the people in the company were really kind and helpful. From the first day my mentor Loren, who has eventually become one of my best friends, explained to me everything I needed to know about the company. As time passed I evolved through the 3 stages: employee, assembly member and partner/co-owner of the company. It’s amazing how soon you can start to contribute to the company’s decisions, and how you feel like the company is yours from the first days. I’m extremely grateful to the people who let me join the company at that time and gave me the opportunity to become part of this wonderful family. 😍

    Some highlights about my work in Igalia

    During my first years I was working with the TYPO3 CMS, contributing to some extensions and also some patches to the main project itself. I even had the opportunity to attend my first international conference, T3CON08 in Berlin. The next step was a project called LibrePlan, an open source web planning tool, again working on the Web as the main technology.

    By the end of 2012, Igalia had gained a relevant position within the WebKit community. Trying to take advantage of all my previous experience around the Web, I joined the Igalia Web Platform team, where I started to contribute to WebKit initially and Chromium/Blink later. Like any newcomer, I started my contributions with some small patches here and there, but as time passed I got more and more involved in the implementation of CSS standards, which allowed me to be granted reviewer/owner status in these projects.

    Due to my work around CSS, and particularly CSS Grid Layout, I started to participate in the W3C discussions, especially inside the CSS WG, where I didn’t miss the chance to join, as an external observer, their face-to-face meeting at the last TPAC. On top of that, I’ve attended more and more conferences and have been lucky enough to be selected to speak at some of them, like BlinkOn 2, CSSConf US 2015, HTML5DevConf 2015 and BlinkOn 6. Also, lately I’m part of the organization of the Web Engines Hackfest. All this stuff has been really exciting; I’m loving it!

    Closing note

    Igalia is an incredible company, and I cannot think of a better place to work. Igalia will be celebrating its 16th anniversary this year; my first 10 years here have been wonderful, and in the years to come I just hope for the best. I’ve met lots of nice people, both in Igalia and in the projects I’ve been involved in. Thank you all!

    Looking back, it’s clear that the Web has had a huge impact on my career: I’ve been working for 10 years on different things, but all very closely related to the Web. And I don’t have plans to move away from it any time soon.

    Let’s keep rocking in the free world. 😎

    April 26, 2017 10:00 PM

    April 20, 2017

    Asumu Takikawa

    Upstreaming Snabbwall

    As you may have seen from my previous blog posts (one and two), at Igalia we have been working on a layer-7 firewall app called Snabbwall.

    This project is now coming to a close, as we’ve just completed the sixth and final milestone.

    The final milestone for the project was upstreaming the source code to the main Snabb project, which was completed about a month ago in March. The new Snabb release 2017.04 “Dragon” that just came out now includes Snabbwall.

    Now that we’re wrapping up, I’d like to thank the NLNet Foundation again for sponsoring this project. Thanks also to other developers who were involved including Adrián Pérez (the lead developer who wrote most of the code) and Diego Pino. Thanks to Luke Gorrie and Katerina Barone-Adesi for merging the code upstream.

    Just in case you’re curious, I’ll go over the status of the project now that it has been merged upstream. The main project repository now lives in a branch at Igalia/snabb. The branch is set to “protected” mode so that your pulls will always be fast-forwarded.

    The commits in the development repo are current with the 2017.04 Snabb release. Any future maintenance that we do will continue in our development branch.

    We will periodically send pull requests to the next branch at snabbco/snabb as needed from the development branch.

    The upstream Snabb project follows a development model in which each maintainer of a subsystem in the main Snabb tree has their own upstream branch (e.g., documentation or luajit), which eventually merges into next. Releases are made from next every so often (typically monthly). You can check out all the branches that are maintained here, including Snabbwall itself.

    Now that the final milestone is complete, I’ll be working on other networking projects at Igalia, but do ping me if you end up using Snabbwall or would like to contribute to it.

    by Asumu Takikawa at April 20, 2017 03:00 PM

    April 19, 2017

    Samuel Iglesias

    ARB_gpu_shader_fp64 support on IvyBridge finally landed!

    We, at Igalia, have been involved in enabling the ARB_gpu_shader_fp64 extension on different Intel generations: first Broadwell and later, then Haswell. Now IvyBridge support is finished and has landed in Mesa’s master branch.

    This feature was the last one needed to expose OpenGL 4.0 on Intel IvyBridge with the open-source Mesa driver. This is a big achievement for an old hardware generation (IvyBridge was released in 2012), as it allows users to run OpenGL 4.0 games/apps on GNU/Linux without overriding the supported version with a Mesa-specific environment variable.

    More good news… ARB_vertex_attrib_64bit support has landed too, meaning that we are exposing OpenGL 4.2 on Intel IvyBridge!

    Technical details

    Diving a little bit into the technical details (skip this if you are not interested in those)…

    This work stands on the shoulders of the Intel Haswell support for ARB_gpu_shader_fp64. The latter introduced support for double floating-point (DF) data types on both the scalar and vec4 backends, which is, in general, very similar to what IvyBridge needs. If you are interested in the technical details of adding ARB_gpu_shader_fp64 to Intel GPUs, see Iago’s talk at the last XDC (slides here).

    IvyBridge was the first Intel generation that supported double floating-point data types natively. The most important difference between IvyBridge and Haswell is that both the execution size and the regioning parameters (stride and width) are expressed in terms of 32 bits, so we need to double both the regioning parameters and the execution size when emitting DF instructions.

    But this is not the only annoyance; there are others quite relevant, like:

    • We emit scalar DF instructions with a maximum execution size of 4 (doubled later to 8) to avoid hitting the gen7 instruction decompression bug (also present in Haswell) that makes the hardware read 2 consecutive GRFs regardless of the vertical stride. This is especially annoying when reading DF scalars, because the stride is zero (we just want to read data from one GRF) and this bug would make us read the next GRF too; furthermore, the hardware applies the same channel enable signals to both halves of the compressed instruction, which is just wrong under non-uniform control flow if force_writemask_all is disabled. There is also a physical limitation related to this when using the Align16 access mode: SIMD8 is not allowed for DF operations. Also, in order to make DF instructions work under non-uniform control flow, we use NibCtrl to choose the proper flags of the execution mask.

    • Conversions from 32-bit data types to double (and vice versa) are quite special. Each 32-bit source element should be 64-bit aligned, so we need to apply a stride to the original data in order to keep this alignment. This is because the FPU internals cannot do the conversion if the data is not aligned to the size of the bigger type. A similar thing happens when converting doubles to 32-bit data types: the output elements need to be 64-bit aligned too.

    • When splitting a DF instruction into two (or more) instructions with an execution size of at most 4, sometimes this is not trivial to do, and we need temporary registers to save the intermediate results before merging them into the real destination.

    Because of all this, we needed to improve the d2x lowering pass (now called lower_conversions), which fixes the aforementioned conversions from double floating-point data to 32-bit data types, add some specific fixes in the generator, and add code in the validator to detect invalid cases, among other things.

    In summary, although IvyBridge is very similar to Haswell, its 64-bit floating-point support is quite special and needs specific code.

    Acknowledgements

    I would like to thank Matt Turner and Francisco Jerez from Intel for their insightful reviews, sometimes spotting problems that we did not foresee, and for their contributions to the patch series. Also, I would like to thank Juan for his contributions to make this support happen, and Igalia for allowing me to work on this amazing open-source project.

    Igalia

    April 19, 2017 10:00 PM

    April 04, 2017

    Manuel Rego

    Announcing a New Edition of the Web Engines Hackfest

    Another year, another Web Engines Hackfest. Following the tradition that started back in 2009, Igalia is arranging a new edition of the Web Engines Hackfest that will happen in A Coruña from Monday, 2nd October, to Wednesday, 4th October.

    The hackfest is a gathering of participants from the different parts of the open web platform community, working on projects like Chromium/Blink, WebKit, Gecko, Servo, V8, JSC, SpiderMonkey, Chakra, etc. The main focus of the event is to increase collaboration between the different browser implementors by working together for a few days. On top of that, we arrange a few talks about interesting topics the hackfest attendees are working on, and also breakout sessions for in-depth discussions.

    Web Engines Hackfest 2016 Main Room

    Last year almost 40 hackers joined the event, the biggest number of attendees ever. Previous attendees might have already received an invitation, but if not, just send us a request if you want to attend this year.

    If you don’t want to miss any update, remember to follow @webhackfest on Twitter. See you in October!

    April 04, 2017 10:00 PM

    March 24, 2017

    Michael Catanzaro

    A Web Browser for Awesome People (Epiphany 3.24)

    Are you using a sad web browser that integrates poorly with GNOME or elementary OS? Was your sad browser’s GNOME integration theme broken for most of the past year? Does that make you feel sad? Do you wish you were using an awesome web browser that feels right at home in your chosen desktop instead? If so, Epiphany 3.24 might be right for you. It will make you awesome. (Ask your doctor before switching to a new web browser. Results not guaranteed. May cause severe Internet addiction. Some content unsuitable for minors.)

    Epiphany was already awesome before, but it just keeps getting better. Let’s look at some of the most-noticeable new features in Epiphany 3.24.

    You Can Load Webpages!

    Yeah that’s a great start, right? But seriously: some people had trouble with this before, because it was not at all clear how to get to Epiphany’s address bar. If you were in the know, you knew all you had to do was click on the title box, then the address bar would appear. But if you weren’t in the know, you could be stuck. I made the executive decision that the title box would have to go unless we could find a way to solve the discoverability problem, and wound up following through on removing it. Now the address bar is always there at the top of the screen, just like in all those sad browsers. This is without a doubt our biggest user interface change:

    Screenshot showing address bar visible
    Discover GNOME 3! Discover the address bar!

    You Can Set a Homepage!

    A very small subset of users have complained that Epiphany did not allow setting a homepage, something we removed several years back since it felt pretty outdated. While I’m confident that not many people want this, there’s not really any good reason not to allow it — it’s not like it’s a huge amount of code to maintain or anything — so you can now set a homepage in the preferences dialog, thanks to some work by Carlos García Campos and myself. Retro! Carlos has even added a home icon to the header bar, which appears when you have a homepage set. I honestly still don’t understand why having a homepage is useful, but I hope this allows a wider audience to enjoy Epiphany.

    New Bookmarks Interface

    There is now a new star icon in the address bar for bookmarking pages, and another new icon for viewing bookmarks. Iulian Radu gutted our old bookmarks system as part of his Google Summer of Code project last year, replacing our old and seriously-broken bookmarks dialog with something much, much nicer. (He also successfully completed a major refactoring of non-bookmarks code as part of his project. Thanks Iulian!) Take a look:

    Manage Tons of Tabs

    One of our biggest complaints was that it’s hard to manage a large number of tabs. I spent a few hours throwing together the cheapest-possible solution, and the result is actually pretty decent:

    Firefox has an equivalent feature, but Chrome does not. Ours is not perfect, since unfortunately the menu is not scrollable, so it still fails if there is a sufficiently-huge number of tabs. (This is actually surprisingly-difficult to fix while keeping the menu a popover, so I’m considering switching it to a traditional non-popover menu as a workaround. Help welcome.) But it works great up until the point where the popover is too big to fit on your monitor.

    Note that the New Tab button has been moved to the right side of the header bar when there is only one tab open, so it has less distance to travel to appear in the tab bar when there are multiple open tabs.

    Improved Tracking Protection

    I modified our adblocker — which has been enabled by default for years — to subscribe to the EasyPrivacy filters provided by EasyList. You can disable it in preferences if you need to, but I haven’t noticed any problems caused by it, so it’s enabled by default, not just in incognito mode. The goal is to compete with Firefox’s Disconnect feature. How well does it work compared to Disconnect? I have no clue! But EasyPrivacy felt like the natural solution, since we already have an adblocker that supports EasyList filters.

    Disclaimer: tracking protection on the Web is probably a losing battle, and you absolutely must use the Tor Browser Bundle if you really need anonymity. (And no, configuring Epiphany to use Tor is not clever, it’s very dumb.) But EasyPrivacy will at least make life harder for trackers.

    Insecure Password Form Warning

    Recently, Firefox and Chrome have started displaying security warnings on webpages that contain password forms but do not use HTTPS. Now, we do too:

    I had a hard time selecting the text to use for the warning. I wanted to convey the near-certainty that the insecure communication is being intercepted, but I wound up using the word “cybercriminal” when it’s probably more likely that your password is being gobbled up by various governments. Feel free to suggest changes for 3.26 in the comments.

    New Search Engine Manager

    Cedric Le Moigne spent a huge amount of time gutting our smart bookmarks code — which allowed adding custom search engines to the address bar dropdown in a convoluted manner that involved creating a bookmark and manually adding %s into its URL — and replacing it with an actual real search engine manager that’s much nicer than trying to add a search engine via bookmarks. Even better, you no longer have to drop down to the command line in order to change the default search engine to something other than DuckDuckGo, Google, or Bing. Yay!

    New Icon

    Jakub Steiner and Lapo Calamandrei created a great new high-resolution app icon for Epiphany, which makes its debut in 3.24. Take a look.

    WebKitGTK+ 2.16

    WebKitGTK+ 2.16 improvements are not really an Epiphany 3.24 feature, since users of older versions of Epiphany can and must upgrade to WebKitGTK+ 2.16 as well, but it contains some big improvements that affect Epiphany. (For example, Žan Doberšek landed an important fix for JavaScript garbage collection that has resulted in massive memory reductions in long-running web processes.) But sometimes WebKit improvements are necessary for implementing new Epiphany features. That was true this cycle more than ever. For example:

    • Carlos García added a new ephemeral mode API to WebKitGTK+, and modified Epiphany to use it in order to make incognito mode much more stable and robust, avoiding corner cases where your browsing data could be leaked on disk.
    • Carlos García also added a new website data API to WebKitGTK+, and modified Epiphany to use it in the clear data dialog and cookies dialog. There are no user-visible changes in the cookies dialog, but the clear data dialog now exposes HTTP disk cache, HTML local storage, WebSQL, IndexedDB, and offline web application cache. In particular, local storage and the two databases can be thought of as “supercookies”: methods of storing arbitrary data on your computer for tracking purposes, which persist even when you clear your cookies. Unfortunately it’s still not possible to protect against this tracking, but at least you can view and delete it all now, which is not possible in Chrome or Firefox.
    • Sergio Villar Senin added new API to WebKitGTK+ to improve form detection, and modified Epiphany to use it so that it can now remember passwords on more websites. There’s still room for improvement here, but it’s a big step forward.
    • I added new API to WebKitGTK+ to improve how we handle giving websites permission to display notifications, and hooked it up in Epiphany. This fixes notification requests appearing inappropriately on websites like https://riot.im/app/.

    Notice the pattern? When there’s something we need to do in Epiphany that requires changes in WebKit, we make it happen. This is a lot more work, but it’s better for both Epiphany and WebKit in the long run. Read more about WebKitGTK+ 2.16 on Carlos García’s blog.

    Future Features

    Unfortunately, a couple of exciting Epiphany features we were working on did not make the cut for Epiphany 3.24. The first is Firefox Sync support. This was developed by Gabriel Ivașcu during his Google Summer of Code project last year, and it’s working fairly well, but there are still a few problems. First, our current Firefox Sync code is only able to sync bookmarks, but we really want it to sync much more before releasing the feature: history and open tabs at the least. Also, although it uses Mozilla’s sync server (please thank Mozilla for their quite liberal terms of service allowing this!), it’s not actually compatible with Firefox. You can sync your Epiphany bookmarks between different Epiphany browser instances using your Firefox account, which is great, but we expect users will be quite confused that these do not sync with their Firefox bookmarks, which are stored separately. Some things, like preferences, will never be possible to sync with Firefox, but we can surely share bookmarks. Gabriel is currently working to address these issues while participating in the Igalia Coding Experience program, and we’re hopeful that sync support will be ready for prime time in Epiphany 3.26.

    Also missing is HTTPS Everywhere support. It’s mostly working properly, thanks to lots of hard work from Daniel Brendle (grindhold) who created the libhttpseverywhere library we use, but it breaks a few websites and is not really robust yet, so we need more time to get this properly integrated into Epiphany. The goal is to make sure outdated HTTPS Everywhere rulesets do not break websites by falling back automatically to use of plain, insecure HTTP when a load fails. This will be much less secure than upstream HTTPS Everywhere, but websites that care about security ought to be redirecting users to HTTPS automatically (and also enabling HSTS). Our use of HTTPS Everywhere will just be to gain a quick layer of protection against passive attackers. Otherwise, we would not be able to enable it by default, since the HTTPS Everywhere rulesets are just not reliable enough. Expect HTTPS Everywhere to land for Epiphany 3.26.

    Help Out

    Are you a computer programmer? Found something less-than-perfect about Epiphany? We’re open for contributions, and would really appreciate it if you would try to fix that bug or add that feature instead of slinking back to using a less-awesome web browser. One frequently-requested feature is support for extensions. This is probably not going to happen anytime soon — we’d like to support WebExtensions, but that would be a huge effort — but if there’s some extension you miss from a sadder browser, ask if we’d allow building it into Epiphany as a regular feature. Replacements for popular extensions like NoScript and Greasemonkey would certainly be welcome.

    Not a computer programmer? You can still help by reporting bugs on GNOME Bugzilla. If you have a crash to report, learn how to generate a good-quality stack trace so that we can try to fix it. I’ve credited many programmers for their work on Epiphany 3.24 up above, but programming work only gets us so far if we don’t know about bugs. I want to give a shout-out here to Hussam Al-Tayeb, who regularly built the latest code over the course of the 3.24 development cycle and found lots of problems for us to fix. This release would be much less awesome if not for his testing.

    OK, I’m done typing stuff now. Onwards to 3.26!

    by Michael Catanzaro at March 24, 2017 01:18 AM

    March 20, 2017

    Carlos García Campos

    WebKitGTK+ 2.16

    The Igalia WebKit team is happy to announce WebKitGTK+ 2.16. This new release drastically improves memory consumption, adds new API required by applications, includes new debugging tools, and of course fixes a lot of bugs.

    Memory consumption

    After WebKitGTK+ 2.14 was released, several Epiphany users started to complain about the high memory usage of WebKitGTK+ when Epiphany had a lot of tabs open. As we already explained in a previous post, this was because of the switch to the threaded compositor, which made hardware acceleration always enabled. To fix this, we decided to make hardware acceleration optional again, enabled only when websites require it, but still using the threaded compositor. This is by far the biggest improvement in memory consumption, but not the only one. Even in accelerated compositing mode, we managed to reduce the memory required by GL contexts when using GLX, by using OpenGL version 3.2 (core profile) if available. On Mesa-based drivers, that means the software rasterizer fallback is never required, so the context doesn’t need to create the software rasterization part. And finally, an important bug was fixed in the JavaScript garbage collector timers that prevented garbage collection from happening in some cases.

    CSS Grid Layout

    Yes, the future is here and now, available by default in all WebKitGTK+-based browsers and web applications. This is the result of several years of great work by the Igalia web platform team in collaboration with Bloomberg. If you are interested, you have all the details in Manuel’s blog.

    New API

    The WebKitGTK+ API is quite complete now, but there are always new things required by our users.

    Hardware acceleration policy

    Hardware acceleration is now enabled on demand again: when a website requires accelerated compositing, hardware acceleration is enabled automatically. WebKitGTK+ has environment variables to change this behavior: WEBKIT_DISABLE_COMPOSITING_MODE to never enable hardware acceleration and WEBKIT_FORCE_COMPOSITING_MODE to always enable it. However, those variables were never meant to be used by applications, but only by developers to test the different code paths. The main problem with those variables is that they apply to all web views of the application. Not all WebKitGTK+ applications are web browsers, so it can happen that an application knows it will never need hardware acceleration for a particular web view (like, for example, the evolution composer), while other applications, especially in the embedded world, always want hardware acceleration enabled and don’t want to waste time and resources on the switch between modes. For those cases a new WebKitSetting, hardware-acceleration-policy, has been added. We encourage everybody to use this setting instead of the environment variables when upgrading to WebKitGTK+ 2.16.
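    As a minimal sketch, here is how an application could pin the policy for a single web view (the setter and enum values are the ones added in 2.16; the surrounding function is just example scaffolding):

    #include <webkit2/webkit2.h>

    /* A sketch: force a web view to never use accelerated compositing,
     * without touching the environment variables. */
    static void disable_acceleration(WebKitWebView *view)
    {
        WebKitSettings *settings = webkit_web_view_get_settings(view);
        webkit_settings_set_hardware_acceleration_policy(settings,
            WEBKIT_HARDWARE_ACCELERATION_POLICY_NEVER);
    }

    An embedded application that always wants compositing would use WEBKIT_HARDWARE_ACCELERATION_POLICY_ALWAYS instead, while WEBKIT_HARDWARE_ACCELERATION_POLICY_ON_DEMAND gives the default behavior.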

    Network proxy settings

    Since the switch to WebKit2, where the SoupSession is no longer available from the API, it hasn’t been possible to change the network proxy settings from the API. WebKitGTK+ has always used the default proxy resolver when creating the soup context, and that just works for most of our users. But there are some corner cases in which applications that don’t run under a GNOME environment want to provide their own proxy settings instead of using the proxy environment variables. For those cases, WebKitGTK+ 2.16 includes a new UI process API to configure all the proxy settings available in the GProxyResolver API.
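    For instance, a sketch of an application routing its traffic through a custom HTTP proxy with the new API (the proxy URI and ignore list are example values):

    #include <webkit2/webkit2.h>

    /* A sketch: use a custom proxy for everything except localhost. */
    static void set_custom_proxy(WebKitWebContext *context)
    {
        static const char * const ignore_hosts[] = { "localhost", NULL };
        WebKitNetworkProxySettings *proxy =
            webkit_network_proxy_settings_new("http://proxy.example.com:8080",
                                              ignore_hosts);
        webkit_web_context_set_network_proxy_settings(context,
            WEBKIT_NETWORK_PROXY_MODE_CUSTOM, proxy);
        webkit_network_proxy_settings_free(proxy);
    }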

    Private browsing

    WebKitGTK+ has always had a WebKitSetting to enable or disable the private browsing mode, but it has never worked really well. For that reason, applications like Epiphany have always implemented their own private browsing mode, just by using a different profile directory in tmp to write all persistent data. This approach has several issues; for example, if the UI process crashes, the profile directory is leaked in tmp with all the personal data there. WebKitGTK+ 2.16 adds a new API that allows creating ephemeral web views which never write any persistent data to disk. It’s possible to create ephemeral web views individually, or to create ephemeral web contexts where all web views associated with them are automatically ephemeral.
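    A minimal sketch of the web-context flavour of the new API:

    #include <webkit2/webkit2.h>

    /* A sketch: a web view attached to an ephemeral context never writes
     * browsing data (cookies, cache, local storage...) to disk. */
    static GtkWidget *create_private_view(void)
    {
        WebKitWebContext *context = webkit_web_context_new_ephemeral();
        GtkWidget *view = webkit_web_view_new_with_context(context);
        g_object_unref(context);
        return view;
    }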

    Website data

    WebKitWebsiteDataManager was added in 2.10 to configure the default paths where website data should be stored for a web context. In WebKitGTK+ 2.16 the API has been expanded to include methods to retrieve and remove the website data stored on the client side. Not only persistent data like the HTTP disk cache, cookies or databases, but also non-persistent data like the memory cache and session cookies. This API is already used by Epiphany to implement the new personal data dialog.
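    For example, here is a sketch that asynchronously fetches and prints every kind of website data currently stored, using the fetch API added in 2.16:

    #include <webkit2/webkit2.h>

    static void fetch_done(GObject *object, GAsyncResult *result, gpointer user_data)
    {
        WebKitWebsiteDataManager *manager = WEBKIT_WEBSITE_DATA_MANAGER(object);
        GList *items = webkit_website_data_manager_fetch_finish(manager, result, NULL);

        for (GList *l = items; l; l = l->next)
            g_print("%s\n", webkit_website_data_get_name((WebKitWebsiteData *)l->data));

        g_list_free_full(items, (GDestroyNotify)webkit_website_data_unref);
    }

    /* A sketch: list the name of every site with data stored locally. */
    static void list_website_data(WebKitWebContext *context)
    {
        WebKitWebsiteDataManager *manager =
            webkit_web_context_get_website_data_manager(context);
        webkit_website_data_manager_fetch(manager, WEBKIT_WEBSITE_DATA_ALL,
                                          NULL, fetch_done, NULL);
    }

    A matching webkit_website_data_manager_clear() call can be used to remove the same kinds of data.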

    Dynamically added forms

    Web browsers normally implement the remember-passwords functionality by searching the DOM tree for authentication form fields when the document-loaded signal is emitted. However, some websites add the authentication form fields dynamically after the document has been loaded, and in those cases web browsers couldn’t find any form fields to autocomplete. In WebKitGTK+ 2.16 the web extensions API includes a new signal to notify when new forms are added to the DOM. Applications can connect to it, instead of document-loaded, to start searching for authentication form fields.
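    A sketch of how a web extension might hook into the new signal (the callback body is just illustrative):

    #include <gmodule.h>
    #include <webkit2/webkit-web-extension.h>

    /* A sketch: get notified whenever form controls are associated with
     * the page, which may happen long after document-loaded. */
    static void form_controls_associated(WebKitWebPage *web_page,
                                         GPtrArray *elements,
                                         gpointer user_data)
    {
        g_print("%u new form control(s) in %s\n",
                elements->len, webkit_web_page_get_uri(web_page));
        /* Search 'elements' for authentication fields here. */
    }

    static void page_created(WebKitWebExtension *extension,
                             WebKitWebPage *web_page,
                             gpointer user_data)
    {
        g_signal_connect(web_page, "form-controls-associated",
                         G_CALLBACK(form_controls_associated), NULL);
    }

    G_MODULE_EXPORT void
    webkit_web_extension_initialize(WebKitWebExtension *extension)
    {
        g_signal_connect(extension, "page-created",
                         G_CALLBACK(page_created), NULL);
    }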

    Custom print settings

    The GTK+ print dialog allows the user to add a new tab embedding a custom widget, so that applications can include their own print settings UI. Evolution used to do this, but the functionality was lost with the switch to WebKit2. In WebKitGTK+ 2.16, an API similar to the GTK+ one has been added to recover that functionality in Evolution.

    Notification improvements

    Applications can now set the initial notification permissions on the web context to avoid having to ask the user every time. It’s also possible to get the tag identifier of a WebKitNotification.
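    For instance, a sketch that pre-authorizes a single origin (the URI is an example value):

    #include <webkit2/webkit2.h>

    /* A sketch: allow notifications from one origin up front, so the
     * user is never prompted for it. */
    static void setup_notification_permissions(WebKitWebContext *context)
    {
        WebKitSecurityOrigin *origin =
            webkit_security_origin_new_for_uri("https://example.com");
        GList *allowed = g_list_prepend(NULL, origin);

        webkit_web_context_initialize_notification_permissions(context,
                                                               allowed, NULL);

        g_list_free_full(allowed, (GDestroyNotify)webkit_security_origin_unref);
    }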

    Debugging tools

    Two new debugging tools are now available in WebKitGTK+ 2.16: the memory sampler and the resource usage overlay.

    Memory sampler

    This tool allows monitoring the memory consumption of the WebKit processes. It can be enabled by defining the environment variable WEBKIT_SAMPLE_MEMORY. When enabled, the UI process and all web processes automatically take samples of memory usage every second. For every sample, a detailed report of the memory used by the process is generated and written to a file in the temp directory.

    $ WEBKIT_SAMPLE_MEMORY=1 MiniBrowser 
    Started memory sampler for process MiniBrowser 32499; Sampler log file stored at: /tmp/MiniBrowser7ff2246e-406e-4798-bc83-6e525987aace
    Started memory sampler for process WebKitWebProces 32512; Sampler log file stored at: /tmp/WebKitWebProces93a10a0f-84bb-4e3c-b257-44528eb8f036
    

    The files contain a list of sample reports like this one:

    Timestamp                          1490004807
    Total Program Bytes                1960214528
    Resident Set Bytes                 84127744
    Resident Shared Bytes              68661248
    Text Bytes                         4096
    Library Bytes                      0
    Data + Stack Bytes                 87068672
    Dirty Bytes                        0
    Fast Malloc In Use                 86466560
    Fast Malloc Committed Memory       86466560
    JavaScript Heap In Use             0
    JavaScript Heap Committed Memory   49152
    JavaScript Stack Bytes             2472
    JavaScript JIT Bytes               8192
    Total Memory In Use                86477224
    Total Committed Memory             86526376
    System Total Bytes                 16729788416
    Available Bytes                    5788946432
    Shared Bytes                       1037447168
    Buffer Bytes                       844214272
    Total Swap Bytes                   1996484608
    Available Swap Bytes               1991532544
    

    Resource usage overlay

    The resource usage overlay is only available on Linux systems when WebKitGTK+ is built with ENABLE_DEVELOPER_MODE. It shows an overlay with information about the resources currently in use by the web process, like CPU usage, total memory consumption, JavaScript memory, and JavaScript garbage collector timers. The overlay can be shown/hidden by pressing CTRL+Shift+G.

    We plan to add more information to the overlay in the future like memory cache status.

    by carlos garcia campos at March 20, 2017 03:19 PM

    Enrique Ocaña

    Media Source Extensions upstreaming, from WPE to WebKitGTK+

    A lot of good things have happened to the Media Source Extensions support since my last post, almost a year ago.

    The most important piece of news is that the code upstreaming has kept going forward at a slow but steady pace. The amount of code Igalia had to port was pretty big. Calvaris (my favourite reviewer) and I considered that the regular review tools in WebKit bugzilla were not going to be enough for a good exhaustive review. Instead, we did a pre-review on GitHub using a pull request on my own repository. It was an interesting experience, because the change set was so large that it had to be (artificially) divided into smaller commits just to avoid hitting GitHub’s diff display limits.

    394 GitHub comments later, the patches were mature enough to be submitted to bugzilla as child bugs of Bug 157314 – [GStreamer][MSE] Complete backend rework. After some more comments in bugzilla, they were finally committed during the Web Engines Hackfest 2016.

    Some unforeseen regressions appeared in the layout tests, but after a couple more commits, all the mediasource WebKit tests were passing. There are also some other tests imported from W3C, but I’ve kept them skipped because webm support is needed for many of them. I’ll focus again on that set of tests in due time.

    Igalia is proud of having brought MSE support up to date in WebKitGTK+. Eventually, this will improve the browser video experience for a lot of users of Epiphany and other web browsers based on that library. Here’s how it enables the usage of YouTube TV at 1080p@30fps on desktop Linux:

    Our future roadmap includes bug fixing and webm/vp9+opus support. This support is important for users from countries enforcing patents on H.264; the current implementation can’t be included in distros such as Fedora for that reason.

    As mentioned before, part of this upstreaming work happened during Web Engines Hackfest 2016. I’d like to thank our sponsors for having made this hackfest possible, as well as Metrological for giving upstreaming the importance it deserves.

    Thank you for reading.

     

    by eocanha at March 20, 2017 11:55 AM

    Hyunjun Ko

    Libva-rust(libva binding to rust) in development stage

    Ever since the Rust language appeared in the world, I have felt strongly that this is a language I should learn.

    This is because:

    • Rust prevents common C/C++ bugs such as memory corruption and race conditions, which are very painful to fix whenever you encounter them in a large project.
    • Rust provides these guarantees without losing performance!

    I don’t think Rust aims at replacing C/C++, but it’s worth learning for C/C++ developers like me, at least. So I’ve been searching and thinking about what I could do with this new language. At the end of last year, I decided to implement libva bindings for Rust.

    Here are the advantages of doing this project:

    • I’m working on the gstreamer-vaapi project, which means I’m familiar with VA-API and the middleware using it.
    • Binding an existing project to another language makes me understand that project much better than before.
    • At the same time, I get to learn a new language at the level of practical development.
    • H/W acceleration is a critical feature, especially for laptops and other embedded systems, so this project could be a good option for those trying to use H/W acceleration for playback on Linux.

    Finally, I opened this internal project on GitHub, named libva-rust.
    There is one example: it creates a VASurface, puts raw data into the surface, and displays it on an X11 window (X11 only, for now).

    Let’s see the example code briefly.

    let va_disp = VADisplay::initialize(native_display as *mut VANativeDisplay).unwrap();
    
    let va_surface = VASurface::new(&va_disp, WIDTH, HEIGHT, ffi::VA_RT_FORMAT_YUV420, 1).unwrap();
    
    let va_config = VAConfig::new(&va_disp, ffi::VAProfileMPEG2Main, ffi::VAEntrypointVLD, 1).unwrap();
    
    let va_context = VAContext::new(&va_disp,
                                    &va_config,
                                    &va_surface,
                                    WIDTH as i32,
                                    HEIGHT as i32,
                                    0).unwrap();

    Initialization for VA-API.

    test_draw::image_generate(&va_disp, &va_image, &va_image_buf);
    
    va_image.put_image(&va_disp,
                       &va_surface,
                       0,
                       0,
                       WIDTH,
                       HEIGHT,
                       0,
                       0,
                       WIDTH,
                       HEIGHT);

    Draw raw data to the VaapiImage in test_draw.rs and put it into the created surface.

    va_surface.put_surface(&va_disp, win, 0, 0, WIDTH, HEIGHT, 0, 0, WIDTH, HEIGHT);

    Finally, display it by putting the surface onto the created X11 window.
    It’s simple, as you can see, but an important first step.

    My first goal is providing a general and easy set of “rusty” APIs, so that this could be integrated into other Rust multimedia projects like rust-media.

    Another potential goal is the implementation of VAAPI plugins for GStreamer, written in Rust. Recently, Sebastian has been working on this (https://github.com/sdroege/rsplugin), and I would really like to get involved in that project.

    There are tons of things to do at the moment.
    Here’s the to-do list for now:

    • Implement a vp8 decoder first: it simply looks easier than an h26x decoder. Is there any useful Rust h26x parser out there, by the way?
    • Manipulate raw data using Rust APIs like Bit/Byte Reader/Writer.
    • Implement general try-catch-style error handling in Rust.
    • Write test cases.
    • Support Wayland.

    Yes, it still has a long way to go, and I don’t have enough time to focus on this project. But I’ll manage to keep working on it.

    So feel free to use it; contributions are absolutely welcome, including issue reports, bug fixes, patches, etc.

    Thanks!

    March 20, 2017 03:15 AM

    March 17, 2017

    Víctor Jáquez

    GStreamer VAAPI 1.11.x (development branch)

    Greetings GstFolks!

    Last month the unstable release 1.11.2 of GStreamer hit the streets, and I would like to share with you all a quick heads-up on what we are working on in gstreamer-vaapi, since there is a lot of new stuff:

    1. GstVaapiDisplay inherits from GstObject

    GstVaapiDisplay is a wrapper for VADisplay. Before, it was a custom C structure shared along the pipeline through the GstContext mechanism. Now it is a GObject-based object, which can be queried, introspected and, perhaps later on, exposed in a separate library.

    2. Direct rendering and upload

    Direct rendering and upload are mechanisms based on using vaDeriveImage to upload a raw image into a VASurface, or to download a VASurface into a raw image, which is faster than exporting the VASurface to a VAImage.

    Nonetheless, we have found some issues with direct rendering on new Intel hardware (Skylake and above), and we are still assessing whether to keep it as the default.

    3. Improve the GstValidate pass rate

    GstValidate provides a battery of tests for GStreamer as a whole. Sadly, when using gstreamer-vaapi, the battery didn’t achieve the same pass rate as without it. Though we still have some issues with vaapisink that might need to be tackled in VA-API, the pass rate has increased a lot.

    4. Refactor the GstVaapiVideoMemory

    We have completely refactored the internals of the VAAPI video memory (related to the work done for direct download and upload). We have also added locks when mapping and unmapping, to avoid race conditions.

    5. Support dmabuf sharing with downstream

    gstreamer-vaapi already had support for sharing dmabuf-based buffers with upstream elements (e.g. cameras), but now it is also capable of sharing dmabuf-based buffers with downstream sinks capable of importing them (e.g. glimagesink under supported EGL).

    6. Support compilation with meson

    Meson is the new build machinery in GStreamer, along with autotools, and now it is supported by gstreamer-vaapi too.

    7. Headless rendering improvements

    There have been a couple of improvements in the DRM backend for vaapisink, for headless environments.

    8. Wayland backend improvements

    There have also been improvements in the Wayland backend for vaapisink and GstVaapiDisplay.

    9. Dynamically report the supported raw caps

    Now the elements query the VA backend at run-time to know which color formats it supports, so either the source or sink caps are negotiated correctly, avoiding possible error conditions (like negotiating an unsupported color space). This has been done for encoders, decoders and the post-processor.

    10. Encoder enhancements

    We have improved the encoders a lot, adding new features such as constant bitrate support for VP8, handling of stream metadata through tags, etc.

    And many, many more changes, improvements and fixes. But there is still a long road to the stable release (1.12) with many pending tasks and bugs to tackle.

    Thanks a bunch to Hyunjun Ko, Julien Isorce, Scott D Phillips, Stirling Westrup, etc. for all their work.

    Also, Intel Media and Audio For Linux was accepted into the Google Summer of Code this year! If you are willing to face this challenge, you can browse the list of ideas to work on, not only in gstreamer-vaapi, but in the driver, or other projects surrounding VAAPI.

    Finally, do not forget these dates: the 20th and 21st of May @ A Coruña (Spain), where the GStreamer Spring Hackfest is going to take place. Sign up!

    by vjaquez at March 17, 2017 04:53 PM

    March 15, 2017

    Manuel Rego

    CSS Grid Layout is Here to Stay

    It’s been a long journey but finally CSS Grid Layout is here! 🚀 In the past week, Chrome 57 and Firefox 52 were released, becoming the first browsers to ship CSS Grid Layout unprefixed (Explorer/Edge has been shipping an older, prefixed version of the spec since 2012). Not only that, but Safari will hopefully be shipping it very soon too.

    I’m probably biased after having worked on it for a few years, but I believe CSS Grid Layout is going to be a big step in the history of the Web. Web authors have been waiting for a solution like this since the early days of the Web, and now they can use a very powerful and flexible layout module supported natively by the browser, without the need of any external frameworks.

    Igalia has been playing a major role in the implementation of CSS Grid Layout in Chromium/Blink and Safari/WebKit since 2013 sponsored by Bloomberg. This is a blog post about that successful collaboration.

    A blast from the past

    Grids are not something new at all, since we can even find references to them in some of the initial discussions of the CSS creators. Next is an excerpt from a mail by Håkon Wium Lie in June 1995 to www-style:

    Grids! Let the style sheet carve up the canvas into golden rectangles, and use an expert system to lay out the elements!! Ok, drop the expert system and define a set of simple rules that we hardcode.. whoops! But grids do look nice!

    -h&kon

    Since that time the Web hasn’t stopped moving and there have been different solutions and approaches to try to solve the problem of having grid-based designs in HTML/CSS.

    At the beginning of the decade, Microsoft started to work on what would eventually become the initial CSS Grid Layout specification. This spec was based on the Internet Explorer 10 implementation and the experience gathered by Microsoft during its development. IE10 was released in 2012, shipping a prefixed version of that initial spec.

    Then Google started to add support to WebKit at the end of 2011. At that time, WebKit was the engine used by both Chromium and Safari; later in 2012 it would be forked to create Blink.

    Meanwhile, Mozilla had not started the Grid implementation in Firefox, as it conflicted with their XUL grid layout type.

    Igalia and Bloomberg collaboration

    Bloomberg uses Chromium, and they were looking forward to having a proper solution for their layout requirements. They had detected performance issues due to the limitations of the layout modules available on the Web at the time, and they saw CSS Grid Layout as the right way to fix those problems and cover their needs.

    Bloomberg decided to push CSS Grid Layout implementation as part of the collaboration with Igalia. My colleagues, Sergio Villar and Xan López, started to work on CSS Grid Layout around the summer of 2013. In 2014, Javi Fernández and I replaced Xan, joining the effort as well. We’ve been working on this for more than 3 years and counting.

    At the beginning, we were working together with some Google folks, but later Igalia took the lead role in the development of the specification. The spec has evolved and changed quite a lot since 2013, so we’ve had to deal with all these changes, always trying to keep our implementations up to date while continuing to add new features. As the codebases in Blink and WebKit still shared quite a lot after the fork, we were working on both implementations at the same time.

    Igalia and Bloomberg working together to build a better web

    The results of this collaboration have been really satisfactory, as CSS Grid Layout has now shipped in Chromium and is enabled by default in WebKit too (which will hopefully mean that it ships in the upcoming Safari 10.1 release as well).

    Thanks @jensimmons for the feedback regarding Safari 10.1.

    And now what?

    Update your browsers, be sure you grab a version with Grid Layout support and start to use CSS Grid Layout, play with it, experiment and so on. We’d love to get bug reports and feedback about it. It’s too late to change the current version of the spec, but ideas for a future version are already being recorded in the CSS Working Group GitHub repository.

    If you want to start with Grid Layout, there are plenty of resources available on the Internet:

    It’s possible to think that now that CSS Grid Layout has shipped, it’s all over. Nothing is further from the truth as there is still a lot of work to do:

    • An important step would be to complete the W3C Test Suite. Igalia has been contributing to it and it’s currently imported into Blink and WebKit, but it doesn’t cover the whole spec yet.
    • There are some missing features in the current implementations. For example, nobody supports subgrids yet, and web authors tell us they would love to have them available. Another example: in Blink and WebKit we are still finishing the support for baseline alignment.
    • When bugs and issues appear they will need to be fixed and some might even imply some minor modifications to the spec.
    • Performance optimizations should be done. CSS Grid Layout is a huge spec, so the biggest part of the effort so far has gone into the implementation. Now it’s time to improve performance for different use cases.
    • And as I explained earlier, people are starting to think about new features for a future version of the spec. Progress won’t stop now.

    Acknowledgements

    First of all, it’s important to highlight once again Bloomberg’s role in the development of CSS Grid Layout. Without their vision and support it probably would not have shipped so soon.

    But this is not an individual effort, but something much bigger. I’ll mention several people next, but I’m sure I’ll forget a lot of them, so please forgive me in advance.

    So big thanks to:

    • The Microsoft folks who started the spec.
    • The current spec editors: Elika J. Etemad (fantasai), Rossen Atanassov, and Tab Atkins Jr. Especially fantasai & Tab, who have been dealing with most of the issues we have reported.
    • The whole CSS Working Group for their work on this spec.
    • Our reviewers in both Blink and WebKit: Christian Biesinger, Darin Adler, Julien Chaffraix, and many others.
    • Other implementors: Daniel Holbert, Mats Palmgren, etc.
    • People spreading the word about CSS Grid Layout: Jen Simmons, Rachel Andrew, etc.
    • The many other people I’m missing in this list who helped to make CSS Grid Layout the newest layout module for the Web.

    Thanks to you all! 😻 And particularly to Bloomberg for letting Igalia be part of this amazing experience. We’re really happy to have walked this path together and we really hope to do more cool stuff in the future.

    Translations

    March 15, 2017 11:00 PM

    Andy Wingo

    guile 2.2 omg!!!

    Oh, good evening my hackfriends! I am just chuffed to share a thing with yall: tomorrow we release Guile 2.2.0. Yaaaay!

    I know in these days of version number inflation that this seems like a very incremental, point-release kind of a thing, but it's a big deal to me. This is a project I have been working on since soon after the release of Guile 2.0 some 6 years ago. It wasn't always clear that this project would work, but now it's here, going into production.

    In that time I have worked on JavaScriptCore and V8 and SpiderMonkey and so I got a feel for what a state-of-the-art programming language implementation looks like. Also in that time I ate and breathed optimizing compilers, and really hit the wall until finally paging in what Fluet and Weeks were saying so many years ago about continuation-passing style and scope, and eventually came through with a solution that was still CPS: CPS soup. At this point Guile's "middle-end" is, I think, totally respectable. The backend targets a quite good virtual machine.

    The virtual machine is still a bytecode interpreter for now; native code is a next step. Oddly my journey here has been precisely opposite, in a way, to An incremental approach to compiler construction; incremental, yes, but starting from the other end. But I am very happy with where things are. Guile remains very portable, bootstrappable from C, and the compiler is in a good shape to take us the rest of the way to register allocation and native code generation, and performance is pretty ok, even better than some natively-compiled Schemes.

    For a "scripting" language (what does that mean?), I also think that Guile is breaking nice ground by using ELF as its object file format. Very cute. As this seems to be a "Andy mentions things he's proud of" segment, I was also pleased with how we were able to completely remove the stack size restriction.

    high fives all around

    As is often the case with these things, I got the idea for removing the stack limit after talking with Sam Tobin-Hochstadt from Racket and the PLT group. I admire Racket and its makers very much and look forward to stealing from, er, working with them in the future.

    Of course the ideas for the contification and closure optimization passes are in debt to Matthew Fluet and Stephen Weeks for the former, and Andy Keep and Kent Dybvig for the latter. The intmap/intset representation of CPS soup itself is highly indebted to the late Phil Bagwell, to Rich Hickey, and to Clojure folk; persistent data structures were an amazing revelation to me.

    Guile's virtual machine itself was initially heavily inspired by JavaScriptCore's VM. Thanks to WebKit folks for writing so much about the early days of Squirrelfish! As far as the actual optimizations in the compiler itself, I was inspired a lot by V8's Crankshaft in a weird way -- it was my first touch with fixed-point flow analysis. As most of yall know, I didn't study CS, for better and for worse; for worse, because I didn't know a lot of this stuff, and for better, as I had the joy of learning it as I needed it. Since starting with flow analysis, Carl Offner's Notes on graph algorithms used in optimizing compilers was invaluable. I still open it up from time to time.

    While I'm high-fiving, large ups to two amazing support teams: firstly to my colleagues at Igalia for supporting me on this. Almost the whole time I've been at Igalia, I've been working on this, for about a day or two a week. Sometimes at work we get to take advantage of a Guile thing, but Igalia's Guile investment mainly pays out in the sense of keeping me happy, keeping me up to date with language implementation techniques, and attracting talent. At work we have a lot of language implementation people, in JS engines obviously but also in other niches like the networking group, and it helps to be able to transfer hackers from Scheme to these domains.

    I put in my own time too, of course; but my time isn't really my own either. My wife Kate has been really supportive and understanding of my not-infrequent impulses to just nerd out and hack a thing. She probably won't read this (though maybe?), but it's important to acknowledge that many of us hackers are only able to do our work because of the support that we get from our families.

    a digression on the nature of seeking and knowledge

    I am jealous of my colleagues in academia sometimes; of course it must be this way, that we are jealous of each other. Greener grass and all that. But when you go through a doctoral program, you know that you push the boundaries of human knowledge. You know because you are acutely aware of the state of recorded knowledge in your field, and you know that your work expands that record. If you stay in academia, you use your honed skills to continue chipping away at the unknown. The papers that this process reifies have a huge impact on the flow of knowledge in the world. As just one example, I've read all of Dybvig's papers, with delight and pleasure and avarice and jealousy, and learned loads from them. (Incidentally, I am given to understand that all of these are proper academic reactions :)

    But in my work on Guile I don't actually know that I've expanded knowledge in any way. I don't actually know that anything I did is new and suspect that nothing is. Maybe CPS soup? There have been some similar publications in the last couple years but you never know. Maybe some of the multicore Concurrent ML stuff I haven't written about yet. Really not sure. I am starting to see papers these days that are similar to what I do and I have the feeling that they have a bit more impact than my work because of their medium, and I wonder if I could be putting my work in a more useful form, or orienting it in a more newness-oriented way.

    I also don't know how important new knowledge is. Simply being able to practice language implementation at a state-of-the-art level is a valuable skill in itself, and releasing a quality, stable free-software language implementation is valuable to the world. So it's not like I'm negative on where I'm at, but I do feel wonderful talking with folks at academic conferences and wonder how to pull some more of that into my life.

    In the meantime, I feel like (my part of) Guile 2.2 is my master work in a way -- a savepoint in my hack career. It's fine work; see A Virtual Machine for Guile and Continuation-Passing Style for some high level documentation, or many of these bloggies for the nitties and the gritties. OKitties!

    getting the goods

    It's been a joy over the last two or three years to see the growth of Guix, a packaging system written in Guile and inspired by GNU stow and Nix. The laptop I'm writing this on runs GuixSD, and Guix is up to some 5000 packages at this point.

    I've always wondered what the right solution for packaging Guile and Guile modules was. At one point I thought that we would have a Guile-specific packaging system, but one with stow-like characteristics. We had problems with C extensions though: how do you build one? Where do you get the compilers? Where do you get the libraries?

    Guix solves this in a comprehensive way. From the four or five bootstrap binaries, Guix can download and build the world from source, for any of its supported architectures. The result is a farm of weirdly-named files in /gnu/store, but the transitive closure of a store item works on any distribution of that architecture.

    This state of affairs was clear from the Guix binary installation instructions that just have you extract a tarball over your current distro, regardless of what's there. The process of building this weird tarball was always a bit ad-hoc though, geared to Guix's installation needs.

    It turns out that we can use the same strategy to distribute reproducible binaries for any package that Guix includes. So if you download this tarball, and extract it as root in /, then it will extract some paths in /gnu/store and also add a /opt/guile-2.2.0. Run Guile as /opt/guile-2.2.0/bin/guile and you have Guile 2.2, before any of your friends! That pack was made using guix pack -C lzip -S /opt/guile-2.2.0=/ guile-next glibc-utf8-locales, at Guix git revision 80a725726d3b3a62c69c9f80d35a898dcea8ad90.

    (If you run that Guile, it will complain about not being able to install the locale. Guix, like Scheme, is generally a statically scoped system; but locales are dynamically scoped. That is to say, you have to set GUIX_LOCPATH=/opt/guile-2.2.0/lib/locale in the environment, for locales to work. See the GUIX_LOCPATH docs for the gnarlies.)

    Alternately of course you can install Guix and just guix package -i guile-next. Guix itself will migrate to 2.2 over the next week or so.

    Welp, that's all for this evening. I'll be relieved to push the release tag and announcements tomorrow. In the meantime, happy hacking, and yes: this blog is served by Guile 2.2! :)

    by Andy Wingo at March 15, 2017 10:56 PM

    March 14, 2017

    Diego Pino

    Fosdem 2017

    Fosdem is one of my favorite conferences. I guess this is true for many free software enthusiasts. It doesn’t matter how many editions it has been through, it still keeps the same spirit. Tons of people sharing, talking about their homebrew experiments, projects, ideas, and of course tons of people rushing around. That’s something unmatched in other technical events. Kudos to all the volunteers and people involved that make Fosdem possible every year.

    As for this year edition, I tried a new approach. Instead of switching rooms, I decided to stick to one of the tracks. Since I’m mostly interested in networking lately, I attended the “SDN/NFV devroom”. Here is my summary of the talks:

    “Opening network access in the Central Office” by Chris Price. Unfortunately I was late for this talk. According to the abstract it covered several SDN/NFV related technologies such as OpenDaylight, OPNFV and ONOS/CORD.

    “The emergence of open-source 4G/5G ecosystems” by Raymond Knopp. Really interesting talk and topic.

    Everyone is talking about 5G lately. However, it seems we’re still in the early stages. Unlike previous generations, 5G won’t be only a radio spectrum improvement, but an evolution in computing for wireless networks. For this reason, it’s no coincidence to see 5G connected to terms such as SDN & NFV.

    Commoditization of 3GPP radio systems will also become a reality, making radio technology more accessible to hobbyists. This year at Fosdem, as happened last year for the first time, there was a Software Defined Radio devroom.

    Raymond’s talk covers a wide range of topics related to 5G, SDN/NFV and open source. If you’re already familiar with SDN/NFV, I totally recommend his talk as an introduction to 5G.

    “switchdev: the Linux switching framework” by Bert Vermeulen. History and evolution of Linux kernel’s switchdev.

    A hardware switch maps MAC addresses to ports. To keep track of that mapping, a Forwarding Database, or FDB, is used.

    Since its early beginnings, the Linux kernel could emulate a switch by using a bridge, with the disadvantage of doing all the processing on the CPU: no chance to offload specific operations to specialized hardware. Eventually the DSA (Distributed Switch Architecture) subsystem came along, but it was mostly bound to Marvell switches (other vendors got supported later). The final step in this journey was switchdev, “an in-kernel driver model for switch devices which offload the forwarding (data) plane from the kernel”.

    Currently both DSA and switchdev subsystems live in the kernel.

    “Accelerating TCP with TLDK” by Ray Kinsella. TLDK is a TCP/IP stack in user-space for DPDK.

    If you have been following the news in the networking world lately, likely you’ve heard of DPDK. But in case you have never heard of it and still wonder what it means, keep reading.

    DPDK stands for Data-Plane Development Kit. It was a project started by Intel, although other network-card vendors joined later. The advent of affordable high-speed NICs made developers realize the bottlenecks in the Linux kernel’s forwarding path. Basically, Linux’s networking stack has a hard time dealing with 10G Ethernet using only one core. This is particularly true for small packets.

    This fact triggered two reactions:

    • One from the Linux kernel community to improve the kernel’s networking stack performance.
    • Another one from the network community to move away from the kernel and do packet processing in user-space.

    The latter is called a kernel bypass and there are several approaches to it. One of these approaches is to talk directly to the hardware from a user-space program. In other words, a user-space driver.

    And that’s mostly what DPDK is about: speeding up packet processing by providing user-space drivers for several high-performance NICs (from different vendors). However, a user-space driver alone is not sufficient to squeeze all the performance out of a high-speed NIC. It’s also necessary to apply other techniques. For this reason, DPDK also implements bulk packet processing, non-blocking APIs, cache optimizations, memory locality, etc.

    On the other hand, bypassing the kernel means, among other things, that there’s no TCP/IP stack. TLDK tries to solve that.
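    To make that concrete, the core of a DPDK application is typically a busy-poll loop over the NIC’s RX queues, using DPDK’s rte_eth_rx_burst API. A minimal sketch (all the port, queue and memory-pool setup is omitted, so this is not a complete program):

    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    int main(int argc, char **argv)
    {
        /* Initialize the environment abstraction layer. */
        if (rte_eal_init(argc, argv) < 0)
            return -1;

        /* ... port, RX queue and mbuf pool setup would go here ... */

        for (;;) {
            struct rte_mbuf *bufs[BURST_SIZE];
            /* Poll the RX ring: packets are pulled in bulk, no interrupts. */
            uint16_t nb_rx = rte_eth_rx_burst(0 /* port */, 0 /* queue */,
                                              bufs, BURST_SIZE);
            for (uint16_t i = 0; i < nb_rx; i++) {
                /* ... process bufs[i] ... */
                rte_pktmbuf_free(bufs[i]);
            }
        }
    }

    Notice there is no read() or poll() anywhere: the loop burns a whole core spinning on the device, which is exactly the trade-off that buys the performance.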

    “Writing a functional DPDK application from scratch” by Ferruh Yigit.

    It was an OK talk. It can serve as an introduction on how to write your first DPDK app. Unfortunately the slides are not available online. During the talk it was hard to follow the example code, since the font size was too small.

    “eBPF and XDP walkthrough and recent updates” by Daniel Borkmann.

    Very good speaker and one of the best talks of the day. The talk was a catch-up on the latest developments in eBPF and XDP. If you’ve never heard of eBPF or XDP, let me introduce these terms. If you’re already familiar with them, you can skip the next paragraphs completely.

    eBPF stands for Extended BPF. But then, what does BPF mean? BPF (Berkeley Packet Filter) is a bytecode modeled after the Motorola 6502 instruction set. Packet-filtering expressions used by tcpdump, such as “ip dst port 80”, get compiled to BPF bytecode programs, which are later executed by an interpreter. For instance:

    $ tcpdump -d "ip dst 192.168.0.1"
    (000) ldh      [12]
    (001) jeq      #0x800           jt 2    jf 5
    (002) ld       [30]
    (003) jeq      #0xc0a80001      jt 4    jf 5
    (004) ret      #262144
    (005) ret      #0

    The bytecode above is the result of compiling the expression “ip dst 192.168.0.1”.

    On a side note, at Igalia we developed a packet-filtering expression compiler called pflua. In pflua, instead of lowering expressions to BPF bytecode, they get lowered to a Lua function which is later run and optimized by LuaJIT.

    The Linux kernel has its own BPF interpreters (yes, there are actually two). One is the BPF interpreter and the other one is the eBPF interpreter, which understands BPF as well.

    In 2013 Alexei Starovoitov extended BPF and created eBPF. The Linux Kernel’s eBPF interpreter is more sophisticated than the BPF one. Its main features are:

    • Similar architecture to x86-64. eBPF uses 64-bit registers and increases the number of available registers from 2 (Accumulator and X register) to 10.
    • System calls. It’s possible to execute system calls from eBPF programs. In addition, there’s now a bpf system call which allows running eBPF programs from user-space.
    • Decoupling from the networking subsystem. The eBPF interpreter now lives at its own path, kernel/bpf. Unlike BPF, eBPF is used for more than packet filtering. It’s possible to attach an eBPF program to a tracepoint or to a kprobe. This opens up the door to eBPF for instrumentation and performance analysis. It’s also possible to attach eBPF programs to sockets, so packets that do not match the filter are discarded earlier, improving performance.
    • Maps. They allow eBPF programs to remember values from previous calls; in other words, eBPF can be stateful. Map data can be queried from user-space, which makes maps a good mechanism for collecting statistics (see the sketch after this list).
    • Tail-calls. eBPF programs are limited to 4096 instructions per program, but the tail-call feature allows an eBPF program to control the next eBPF program to execute.
    • Helper functions. Such as packet rewriting, checksum calculation or packet cloning. Unlike user-space programming, these functions get executed inside the kernel.
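    To make the maps idea concrete, here is a minimal sketch of a socket-filter program that counts packets in a one-slot array map. All the names are made up, and the SEC()/bpf_map_def conventions are the ones used by the kernel samples and libbpf (header locations have moved around between versions):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>   /* SEC(), bpf_map_lookup_elem(), bpf_map_def */

    /* A single shared slot, readable from user-space via the bpf syscall. */
    struct bpf_map_def SEC("maps") pkt_count = {
        .type        = BPF_MAP_TYPE_ARRAY,
        .key_size    = sizeof(__u32),
        .value_size  = sizeof(__u64),
        .max_entries = 1,
    };

    SEC("socket")
    int count_packets(struct __sk_buff *skb)
    {
        __u32 key = 0;
        __u64 *value = bpf_map_lookup_elem(&pkt_count, &key);

        if (value)
            __sync_fetch_and_add(value, 1);  /* state persists across calls */
        return skb->len;                     /* accept the whole packet */
    }

    char _license[] SEC("license") = "GPL";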

    Note: actually, they’re not interpreters but JIT compilers, as what they do is translate eBPF bytecode programs into native assembly code which is later run by the kernel. Before compiling the code, several checks are performed. For instance, eBPF programs cannot contain loops (an unintended infinite loop could hang the kernel).

    Related to eBPF, there is XDP. XDP (eXpress Data Path) is a kernel subsystem which runs eBPF programs at the earliest place possible (as soon as the packet is read from the RX ring buffer). The execution of an eBPF program can return 4 possible values: DROP, PASS, TX or ABORT. If we manage to discard a packet before it hits the networking stack, that results in a performance gain. And although eBPF programs are meant to be simple, there’s a fairly big amount of things that can be expressed as eBPF programs (routing decisions, packet modifications, etc).
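    The canonical hello-world of XDP is a program that drops every packet it sees. A minimal sketch in restricted C (the names are mine; in the kernel headers the verdicts are spelled XDP_ABORTED, XDP_DROP, XDP_PASS and XDP_TX):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int xdp_drop_all(struct xdp_md *ctx)
    {
        /* Decide the packet's fate before the kernel stack ever sees it. */
        return XDP_DROP;
    }

    char _license[] SEC("license") = "GPL";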

    Recently there was a very interesting discussion in the Linux kernel mailing list about the real value of XDP. The discussion is greatly summarized in this LWN article: Debating the value of XDP. After reading all the opinions, I mostly share Stephen Hemminger’s point of view. The networking world is complex. I think XDP has its space, but I honestly cannot imagine writing a network function as complex as the lwAFTR function as an eBPF program. User-space networking is a reality that’s hard to deny: it solves real problems and it’s getting more and more common every day.

    “Cilium - BPF & XDP for containers” by Thomas Graf.

    Another great talk. Thomas is a seasoned Linux kernel networking hacker with more than 10 years of experience. In addition, he knows how to deliver a talk, which greatly helps to follow the topics under discussion.

    Thomas’s talk focused on the Cilium project. Cilium is a system for easing Linux container networking. The project leans heavily on eBPF and XDP. It was helpful that Daniel’s talk was scheduled right before this one, so all those concepts were already introduced.

    The Cilium project provides fast in-kernel networking and security policy enforcement for containers. It does so by orchestrating eBPF programs for the containers. The programs are directly executed on XDP, instead of being attached to a connection proxy. Programs can be modified, recompiled and distributed again to the containers without dropping connections. Containers only care about the traffic that matters to them, and since traffic is filtered at the XDP level this results in a performance gain. I forgot to mention that since XDP accesses the DMA buffer directly, it requires driver support in the NIC. At this moment only Mellanox cards support XDP, although Intel support is coming.

    Cilium provides a catalogue of network functions. It features functions such as L3/L4 load balancing, NAT46, connection tracking, port mapping, statistics, etc. Another interesting thing it does is communication between containers via labels, which is implemented using IPv6.

    “Stateful packet processing with eBPF” by Quentin Monnet.

    The background of this talk was an R&D project called Beba. First, Monnet introduced the OpenState project. OpenState is a stateful data-plane API for SDN controllers. Two network functions implemented using OpenState, port knocking and token bucket, were discussed, and Monnet showed how they could be implemented using eBPF.

    “Getting started with OpenDaylight” by Charles Eckel & Giles Heron.

    OpenFlow is one of the basic building blocks of SDN. It standardizes a protocol by which an entity, called an SDN controller, can manage several remote switches. These switches can be either hardware appliances or software switches. OpenDaylight is an open-source implementation of an SDN controller whose goal is to grow the adoption of SDN. OpenDaylight is hosted by the Linux Foundation.

    “Open-Source BGP networking with OpenDaylight” by Giles Heron. Follow-up on the previous talk with a practical focus.

    “FastDataStacks” by Tomas Cechvala. FastDataStacks is a stack composed of OpenStack, OpenDaylight, FD.io and OPNFV.

    “PNDA.io” by Jeremie Garnier.

    PNDA.io is a platform for network data analytics. It brings together several open-source technologies for data analysis (Yarn, HDFS, Spark, Zookeeper, etc.) and combines them to streamline the process of developing data-processing applications. Users can focus on their data analysis and not on developing a pipeline.

    “When configuration management meet SDN” by Michael Scherer. Ansible + ssh as an orchestration tool.

    “What do you mean ’SDN’ on traditional routers?” by Peter Van Eynde.

    Really fun and interesting talk by one of the people responsible for the network infrastructure at Fosdem. If you’re curious about how Fosdem’s network is deployed, you should watch this talk. Peter’s talk focused mostly on network monitoring. It covered topics such as SNMP, NetFlow and YANG/Netconf.

    That was all for day one.

    On Sunday I planned to attend several mixed talks, covering a wide range of topics. In the morning I attended the “Small languages panel”. After the workshop I said hello to Justin Cormack and thanked him for ljsyscall. After chatting for a bit, I headed towards the “Open Game Development devroom”.

    As it was crowded everywhere and switching rooms was tough, I decided in the end to stick to this track for the rest of the day. Some of the talks I enjoyed the most were:

    And that was all for Fosdem this year.

    Besides the talks, I enjoyed hanging out with other Igalians, meeting old friends, travelling with the little local community from Vigo and of course meeting new people. I found pleasure in walking the streets of Brussels and enjoying the beer, the fries, the parks, the buildings and all the things that make Brussels a charming place I always like to come back to.

    ULB’s Janson hall
    Francisco Ferrer monument at ULB Campus

    March 14, 2017 12:00 PM

    March 08, 2017

    Juan A. Suárez

    Grilo, Travis CI and Containers

    Good news! Finally, we are using containers in Travis CI for Grilo! It’s something I had been trying to do for a while, and we’ve finally achieved it. I must say that a post Bassi wrote was the trigger for getting into this. So all my kudos to him!

    In this post I’ll explain the history behind using Travis CI for Grilo continuous integration.

    The origin

    It all started when, while exploring how GitHub integrates with other services one day, I discovered Travis CI. As you may know, Travis is a continuous integration service that checks every commit from a project in GitHub, and for each one it starts a testing process. Roughly, it starts a “virtual machine”1 running Ubuntu2, clones the repository at the commit under test, and runs a set of commands defined in the .travis.yml file, located in the same GitHub repository. That file contains, besides the steps to execute the tests, instructions on how to build the project, as well as which dependencies are required.

    Note that before Travis, instead of a continuous integration system in Grilo we had a ‘discontinuous’ one: we ran the checks manually, from time to time. So a commit could introduce a bug, and we wouldn’t realize it until we ran the next check, which could happen way later. Thus, when I found Travis, I thought it would be a good idea to use it.

    Setting up .travis.yml for Grilo was quite easy: in the before_install section we just use apt-get to install all the requirements: libglib2.0-dev, libxml2-dev, and so on. Then, in the script section, we run autogen.sh and make. If nothing fails, we consider the test successful. We do not run any specific tests because we don’t have any in Grilo.

    For the plugins, the same steps: install dependencies, configure and build the plugins. In this case we also run make check, so the tests always run. Again, if nothing fails Travis gives us a green light; otherwise, a red one. The status is shown on the main web page. Also, if a test fails, an email is sent to the commit author.

    Now, this has a small problem when testing the plugins: they require Grilo, and we were relying on the package provided by Ubuntu (it is listed in the dependencies). But what happens if the current commit uses a feature that was added in Grilo upstream but not released yet? One option could be cloning Grilo core, building and installing it before the plugins, and then compiling the plugins against this version. This means that for each commit in the plugins we would need to build two projects, adding a lot of complexity to the Travis file. So we decided to go with a different approach: just create a Grilo package with the required unreleased Grilo core version (only for testing), and put it in a PPA. Then we can add that PPA in our .travis.yml file and use that version instead.

    A similar problem happens with Grilo itself: sometimes we require a specific version of a package that is not available in the Ubuntu version used by Travis (Ubuntu 12.04). So we need to backport it from a more recent Ubuntu version and add it to the same PPA.

    Summing up, our .travis.yml files just add the PPA, install the required dependencies, and build and test the project, roughly as sketched below. You can take a look at the core and plugins files.
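    Something along these lines (the PPA and package names here are illustrative, not the exact ones we use):

    language: c
    before_install:
      - sudo add-apt-repository -y ppa:grilo-team/backports
      - sudo apt-get update -qq
      - sudo apt-get install -y libglib2.0-dev libxml2-dev libgrilo-0.3-dev
    script:
      - ./autogen.sh
      - make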

    Travis and the Peter Pan syndrome

    Time passed; we were adding more features, new plugins, fixing problems, adding new requirements or bumping up the required versions… but Travis kept using Ubuntu 12.04. My first thought was “OK, maybe Travis wants to rely only on LTS releases”. So we needed to wait until the next LTS was released, and meanwhile backport everything we needed. No need to say that doing this became more and more complicated as time passed. Sometimes backporting a single dependency requires backporting a lot of other dependencies, which can end up in a bloody nightmare. “Only for a while, until the new LTS is released”, I repeated to myself.

    And good news! Ubuntu 14.04, the new LTS, was released. But you know what? Travis was not updated, and still used the old LTS! What the hell!

    Moreover, two years after that release, Ubuntu 16.04 LTS also came out, and Travis still used 12.04!

    At that point, backporting had become so complex that I basically gave up. And continuous integration was essentially broken.

    Travis and the containers.

    And we stayed in this broken state until I read that Travis was adding support for containers. “This is what we need”. But the truth is that even though I knew it would fix all the problems, I wasn’t very sure how to use the new feature. I tried several approaches, but I wasn’t happy with any of them.

    Until Emmanuele Bassi published a post about using Meson in Epoxy. That post included an explanation of how to use Docker containers in Travis, which cleared up all the doubts I had and allowed me to finally move to containers. So again, thank you, Emmanuele!

    What’s the idea? First, we created a Docker container that has all the requirements to build Grilo and the plugins preinstalled. We tagged this image as base.

    When Travis is going to test Grilo, we instruct it to build a new container, based on base, that builds and installs Grilo. If everything goes fine, then our continuous integration is successful and Travis gives a green light; otherwise it gives a red light. Exactly like in the old approach.

    But we don’t stop there. If everything goes fine, we push the new container to the Docker registry, tagging it as core. Why? Because this is the image we will use for building the plugins.

    And in the case of the plugins we do exactly the same as for the core, but this time, instead of relying on the base image, we rely on the core one (see the sketch below). This way we always use a version with an up-to-date Grilo, so we don’t need to package it when introducing new features. Only if either Grilo or the plugins require a new dependency do we need to build and push a new base image. That’s all.
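    In outline, the Travis side looks something like this (image names are illustrative; the Dockerfile it builds essentially starts FROM the previous image and runs autogen.sh and make):

    sudo: required
    services:
      - docker
    script:
      - docker build -t grilofw/core .
    after_success:
      - docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"
      - docker push grilofw/core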

    Also, as a plus, instead of discarding the container with the plugins, we push it to Docker tagged as latest. So anyone can just pull it with Docker to get a container for running and testing Grilo and all the plugins.

    If you’re interested, you can take a look at the core and plugins files to check what it looks like.

    Oh! Last but not least: this also helped us to test building both with Autotools and with Meson, both of which are supported in Grilo. Which is really awesome.

    Summing up, moving to containers provides a lot of flexibility and makes things quite a bit easier.

    Please, leave any comment or question either on Facebook or Google+.

    1. Let’s call it virtual machine, container, whatever. In this context it doesn’t matter.

    2. Ubuntu 12.04 LTS, to be exact.

    March 08, 2017 11:00 PM

    March 06, 2017

    Andy Wingo

    it's probably spam

    Greetings, peoples. As you probably know, these words are served to you by Tekuti, a blog engine written in Scheme that uses Git as its database.

    Part of the reason I wrote this blog software was that from the time when I was using Wordpress, I actually appreciated the comments that I would get. Sometimes nice folks visit this blog and comment with information that I find really interesting, and I thought it would be a shame if I had to disable those entirely.

    But allowing users to add things to your site is tricky. There are all kinds of potential security vulnerabilities. I thought about the ones that were important to me, back in 2008 when I wrote Tekuti, and I thought I did a pretty OK job on preventing XSS and designing-out code execution possibilities. When it came to bogus comments though, things worked well enough for the time. Tekuti uses Git as a log-structured database, and so to delete a comment, you just revert the change that added the comment. I added a little security question ("what's your favorite number?"; any number worked) to prevent wordpress spammers from hitting me, and I was good to go.

    Sadly, what was good enough in 2008 isn't good enough in 2017. In 2017 alone, some 2000 bogus comments made it through. So I took comments offline and painstakingly went through and separated the wheat from the chaff while pondering what to do next.

    an aside

    I really wondered why spammers bothered though. I mean, I added the rel="external nofollow" attribute on links, which should prevent search engines from granting relevancy to the spammer's links, so what gives? Could be that all the advice from the mid-2000s regarding nofollow is bogus. But it was definitely the case that while I was adding the attribute to commenter's home page links, I wasn't adding it to links in the comment. Doh! With this fixed, perhaps I will just have to deal with the spammers I have and not even more spammers in the future.

    i digress

    I started by simply changing my security question to require a number in a certain range. No dice; bogus comments still got through. I changed the range; could it be the numbers they were using were already in range? Again the bogosity continued undaunted.

    So I decided to break down and write a bogus comment filter. Luckily, Git gives me a handy corpus of legit and bogus comments: all the comments that remain live are legit, and all that were ever added but are no longer live are bogus. I wrote a simple tokenizer across the comments, extracted feature counts, and fed that into a naive Bayesian classifier. I finally turned it on this morning; fingers crossed!

    My trials at home show that if you train the classifier on half the data set (around 5300 bogus comments and 1900 legit comments) and then run it against the other half, I get about 6% false negatives and 1% false positives. The feature extractor interns sequences of 1, 2, and 3 tokens, and doesn't have a lower limit for number of features extracted -- a feature seen only once in bogus comments and never in legit comments is a fairly strong bogosity signal; as you have to make up the denominator in that case, I set it to indicate that such a feature is 99.9% bogus. A corresponding single feature in the legit set without appearance in the bogus set is 99% legit.

    Of course with this strong of a bias towards precise features of the training set, if you run the classifier against its own training set, it produces no false positives and only 0.3% false negatives, some of which were simply reverted duplicate comments.

    It wasn't straightforward to get these results out of a Bayesian classifier. The "smoothing" factor that you add to both numerator and denominator was tricky, as I mentioned above. Getting a useful tokenization was tricky. And the final trick was even trickier: limiting the significant-feature count when determining bogosity. I hate to cite Paul Graham but I have to do so here -- choosing the N most significant features in the document made the classification much less sensitive to the varying lengths of legit and bogus comments, and less sensitive to inclusions of verbatim texts from other comments.
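    For reference, the combination rule Graham popularized -- which is where those N most significant features get plugged in -- computes the overall bogosity from the per-feature probabilities p_i as

        P(bogus | f_1, ..., f_N) = (p_1 p_2 ... p_N) / (p_1 p_2 ... p_N + (1 - p_1)(1 - p_2) ... (1 - p_N))

    and the smoothing factor enters in the per-feature estimates, something like p_i = (b_i + s) / (b_i + l_i + 2s) for counts b_i and l_i of the feature in the bogus and legit corpora. That's the textbook shape of it, anyway; the exact form in Tekuti may differ.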

    We'll see I guess. If your comment gets caught by my filters, let me know -- over email or Twitter I guess, since you might not be able to comment! I hope to be able to keep comments open; I've learned a lot from yall over the years.

    by Andy Wingo at March 06, 2017 02:16 PM

    February 27, 2017

    Jacobo Aragunde

    GENIVI-fying Chromium, part 2

    In the previous blog post, we introduced the work to port Chromium to the GENIVI Development Platform (GDP). We have continued working to improve the integration, and make everything easier to build.

    First of all, we are now using the latest code from the Ozone-Wayland project, which builds on top of Chromium 53 instead of Chromium 48. We have rebased the meta-browser recipes for the newer version and contributed the patch to the upstream project, together with other patches to clean up the build process and to fix issues on certain platforms.

    Some issues detected in the earlier steps of the integration have been addressed. The aspect ratio of the browser window has been modified to fit the GDP demonstration HMI. A performance degradation when playing video had also been detected; the problem was not actually in Chromium: PulseAudio was taking all the CPU away from the browser processes when using the default null sink. We fixed it by setting ALSA as the default sink with the command pacmd "set-default-sink AlsaPrimary" (do it in /etc/pulse/default.pa to make the change persistent). We are obviously bypassing the GENIVI Audio Manager here; it should be integrated at a later point.

    We are in the process of merging our patches into the GENIVI platform, to make the Chromium browser part of the default build. You currently have to use our fork of meta-genivi-dev, while the meta-browser layer has already been added as a submodule, so it’s not necessary to add it explicitly.

    Finally, we have been testing how Chromium and Ozone-Wayland behave in multi-seat environments using the Wayland IVI Extension and the IVI Layer Manager libraries to have full control of screens, layers, surfaces and focus. We have extracted some conclusions that will allow us to make Chromium behave as expected in this scenario.

    Chromium on a multi-seat environment

    All the work we have done is publicly available already. You may try it by:

    • Setting up GDP master for your board. Make sure you are using the latest master to get the meta-browser layer automatically.
    • While review is ongoing, you may add our fork as a new remote for the meta-genivi-dev submodule and switch to the chromium-integration branch.
    • Finally, just bitbake your image, the Chromium browser has been made part of the default image in one of the meta-genivi-dev patches.

    A warning about platforms: please note we are currently using a Minnowboard as a test platform. There is a known issue on Raspberry Pi that we hope will be fixed soon. Regarding R-Car Gen. 2 boards, we think it should work; we have run Chromium there before, but not recently.

    This work is performed by Igalia and sponsored by GENIVI through the Challenge Grant Program. Thank you!

    GENIVI logo

    by Jacobo Aragunde Pérez at February 27, 2017 01:16 PM

    February 24, 2017

    Andy Wingo

    encyclopedia snabb and the case of the foreign drivers

    Peoples of the blogosphere, welcome back to the solipsism! Happy 2017 and all that. Today's missive is about Snabb (formerly Snabb Switch), a high-speed networking project we've been working on at work for some years now.

    What's Snabb all about you say? Good question and I have a nice answer for you in video and third-party textual form! This year I managed to make it to linux.conf.au in lovely Tasmania. Tasmania is amazing, with wild wombats and pademelons and devils and wallabies and all kinds of things, and they let me talk about Snabb.

    You can check that video on the youtube if the link above doesn't work; slides here.

    Jonathan Corbet from LWN wrote up the talk in an article here, which besides being flattering is a real windfall as I don't have to write it up myself :)

    In that talk I mentioned that Snabb uses its own drivers. We were recently approached by a customer with a simple and honest question: does this really make sense? Is it really a win? Why wouldn't we just use the work that the NIC vendors have already put into their drivers for the Data Plane Development Kit (DPDK)? After all, part of the attraction of a switch to open source is that you will be able to take advantage of the work that others have produced.

    Our answer is that while it is indeed possible to use drivers from DPDK, there are costs and benefits on both sides and we think that when we weigh it all up, it makes both technical and economic sense for Snabb to have its own driver implementations. It might sound counterintuitive on the face of things, so I wrote this long article to discuss some perhaps under-appreciated points about the tradeoff.

    Technically speaking there are generally two ways you can imagine incorporating DPDK drivers into Snabb:

    1. Bundle a snapshot of the DPDK into Snabb itself.

    2. Somehow make it so that Snabb could (perhaps optionally) compile against a built DPDK SDK.

    As part of a software-producing organization that ships solutions based on Snabb, I need to be able to ship a "known thing" to customers. When we ship the lwAFTR, we ship it in source and in binary form. For both of those deliverables, we need to know exactly what code we are shipping. We achieve that by having a minimal set of dependencies in Snabb -- only LuaJIT and three Lua libraries (DynASM, ljsyscall, and pflua) -- and we include those dependencies directly in the source tree. This requirement of ours rules out (2), so the option under consideration is only (1): importing the DPDK (or some part of it) directly into Snabb.

    So let's start by looking at Snabb and the DPDK from the top down, comparing some metrics, seeing how we could make this combination.

                                           Snabb    DPDK
    Code lines                             61K      583K
    Contributors (all-time)                60       370
    Contributors (since Jan 2016)          32       240
    Non-merge commits (since Jan 2016)     1.4K     3.2K

    These numbers aren't directly comparable, of course; in Snabb our unit of code change is the merge rather than the commit, and in Snabb we include a number of production-ready applications like the lwAFTR and the NFV, but they are fine enough numbers to start with. What seems clear is that the DPDK project is significantly larger than Snabb, so adding it to Snabb would fundamentally change the nature of the Snabb project.

    So depending on the DPDK makes it so that suddenly Snabb jumps from being a project that compiles in a minute to being a much more heavy-weight thing. That could be OK if the benefits were high enough and if there weren't other costs, but there are indeed other costs to including the DPDK:

    • Data-plane control. Right now when I ship a product, I can be responsible for the whole data plane: everything that happens on the CPU when packets are being processed. This includes the driver, naturally; it's part of Snabb and if I need to change it or if I need to understand it in some deep way, I can do that. But if I switch to third-party drivers, this is now out of my domain; there's a wall between me and something that's running on my CPU. And if there is a performance problem, I now have someone to blame that's not myself! From the customer perspective this is terrible, as you want the responsibility for software to rest in one entity.

    • Impedance-matching development costs. Snabb is written in Lua; the DPDK is written in C. I will have to build a bridge, and keep it up to date as both Snabb and the DPDK evolve. This impedance-matching layer is also another source of bugs; either we make a local impedance matcher in C or we bind everything using LuaJIT's FFI. In the former case, it's a lot of duplicate code, and in the latter we lose compile-time type checking, which is a no-go given that the DPDK can and does change API and ABI.

    • Communication costs. The DPDK development list had 3K messages in January. Keeping up with DPDK development would become necessary, as the DPDK is now in your dataplane, but it costs significant amounts of time.

    • Costs relating to mismatched goals. Snabb tries to win development and run-time speed by searching for simple solutions. The DPDK tries to be a showcase for NIC features from vendors, placing less of a priority on simplicity. This is a very real cost in the form of the way network packets are represented in the DPDK, with support for such features as scatter/gather and indirect buffers. In Snabb we were able to do away with this complexity by having simple linear buffers, and our speed did not suffer; adding the DPDK again would either force us to marshal and unmarshal these buffers into and out of the DPDK's format, or otherwise to reintroduce this particular complexity into Snabb.

    • Abstraction costs. A network function written against the DPDK typically uses at least three abstraction layers: the "EAL" environment abstraction layer, the "PMD" poll-mode driver layer, and often an internal hardware abstraction layer from the network card vendor. (And some of those abstraction layers are actually external dependencies of the DPDK, as with Mellanox's ConnectX-4 drivers!) Any discrepancy between the goals and/or implementation of these layers and the goals of a Snabb network function is a cost in developer time and in run-time. Note that those low-level HAL facilities aren't considered acceptable in upstream Linux kernels, for all of these reasons!

    • Stay-on-the-train costs. The DPDK is big and sometimes its abstractions change. As a minor player just riding the DPDK train, we would have to invest a continuous amount of effort into just staying aboard.

    • Fork costs. The Snabb project has a number of contributors but is really run by Luke Gorrie. Because Snabb is so small and understandable, if Luke decided to stop working on Snabb or take it in a radically different direction, I would feel comfortable continuing to maintain (a fork of) Snabb for as long as is necessary. If the DPDK changed goals for whatever reason, I don't think I would want to continue to maintain a stale fork.

    • Overkill costs. Drivers written against the DPDK have many considerations that simply aren't relevant in a Snabb world: kernel drivers (KNI), special NIC features that we don't use in Snabb (RDMA, offload), non-x86 architectures with different barrier semantics, threads, complicated buffer layouts (chained and indirect), interaction with specific kernel modules (uio-pci-generic / igb-uio / ...), and so on. We don't need all of that, but we would have to bring it along for the ride, and any changes we might want to make would have to take these use cases into account so that other users won't get mad.

    So there are lots of costs if we were to try to hop on the DPDK train. But what about the benefits? The goal of relying on the DPDK would be that we "automatically" get drivers, and ultimately that a network function would be driver-agnostic. But this is not necessarily the case. Each driver has its own set of quirks and tuning parameters; in order for a software development team to be able to support a new platform, the team would need to validate the platform, discover the right tuning parameters, and modify the software to configure the platform for good performance. Sadly this is not a trivial amount of work.

    Furthermore, using a different vendor's driver isn't always easy. Consider Mellanox's DPDK ConnectX-4 / ConnectX-5 support: the "Quick Start" guide has you first install MLNX_OFED in order to build the DPDK drivers. What is this thing exactly? You go to download the tarball and it's 55 megabytes. What's in it? 30 other tarballs! If you build it somehow from source instead of using the vendor binaries, then what do you get? All that code, running as root, with kernel modules, and implementing systemd/sysvinit services!!! And this is just step one!!!! Worse yet, this enormous amount of code powering a DPDK driver is mostly driver-specific; what we hear from colleagues whose organizations decided to bet on the DPDK is that you don't get to amortize much knowledge or validation when you switch between an Intel and a Mellanox card.

    In the end when we ship a solution, it's going to be tested against a specific NIC or set of NICs. Each NIC will add to the validation effort. So if we were to rely on the DPDK's drivers, we would have paid all the costs but we wouldn't save very much in the end.

    There is another way. Instead of relying on so much third-party code that it is impossible for any one person to grasp the entirety of a network function, much less be responsible for it, we can build systems small enough to understand. In Snabb we just read the data sheet and write a driver. (Of course we also benefit by looking at DPDK and other open source drivers as well to see how they structure things.) By only including what is needed, Snabb drivers are typically only a thousand or two thousand lines of Lua. With a driver of that size, it's possible for even a small ISV or in-house developer to "own" the entire data plane of whatever network function you need.

    Of course Snabb drivers have costs too. What are they? Are customers going to be stuck forever paying for drivers for every new card that comes out? It's a very good question and one that I know is in the minds of many.

    Obviously I don't have the whole answer, as my role in this market is a software developer, not an end user. But having talked with other people in the Snabb community, I see it like this: Snabb is still in relatively early days. What we need are about three good drivers. One of them should be for a standard workhorse commodity 10Gbps NIC, which we have in the Intel 82599 driver. That chipset has been out for a while so we probably need to update it to the current commodities being sold. Additionally we need a couple cards that are going to compete in the 100Gbps space. We have the Mellanox ConnectX-4 and presumably ConnectX-5 drivers on the way, but there's room for another one. We've found that it's hard to actually get good performance out of 100Gbps cards, so this is a space in which NIC vendors can differentiate their offerings.

    We budget somewhere between 3 and 9 months of developer time to create a completely new Snabb driver. Of course it usually takes less time to develop Snabb support for a NIC that is only incrementally different from others in the same family that already have drivers.

    We see this driver development work to be similar to the work needed to validate a new NIC for a network function, with the additional advantage that it gives us up-front knowledge instead of the best-effort testing later in the game that we would get with the DPDK. When you add all the additional costs of riding the DPDK train, we expect that the cost of Snabb-native drivers competes favorably against the cost of relying on third-party DPDK drivers.

    In the beginning it's natural that early adopters of Snabb make investments in this base set of Snabb network drivers, as they would to validate a network function on a new platform. However over time as Snabb applications start to be deployed over more ports in the field, network vendors will also see that it's in their interests to have solid Snabb drivers, just as they now see with the Linux kernel and with the DPDK, and given that the investment is relatively low compared to their already existing efforts in Linux and the DPDK, it is quite feasible that we will see the NIC vendors of the world start to value Snabb for the performance that it can squeeze out of their cards.

    So in summary, in Snabb we are convinced that writing minimal drivers that are adapted to our needs is an overall win compared to relying on third-party code. It lets us ship solutions that we can feel responsible for: both for their operational characteristics as well as their maintainability over time. Still, we are happy to learn and share with our colleagues all across the open source high-performance networking space, from the DPDK to VPP and beyond.

    by Andy Wingo at February 24, 2017 05:37 PM

    February 21, 2017

    Asumu Takikawa

    Optimizing Snabbwall

    In my previous blog post, I wrote about the work I’ve been doing on Snabbwall, sponsored by the NLNet Foundation. The next milestone in the project was to write some user documentation (this is now done) and to do some benchmarking.

    After some initial benchmarking, I found that Snabbwall wasn’t performing as well as it could. One of the impressive things about Snabb is that well-engineered apps can achieve line-rate performance on 10gbps NICs. That means that the LuaJIT program is processing packets at 10gbps, which means that if your packets are about 40 bytes (the minimum size of an IPv6 packet) then it has around 30 nanoseconds per packet (40 bytes × 8 = 320 bits, and 320 bits at 10 Gbps takes 32 ns).

    On the other hand, Snabbwall was clocking about 1gbps or less. This was based on measurements from a simple benchmarking script that uses the packetblaster program to fire a ton of packets at a NIC connected to an instance of Snabbwall. The benchmark output looked like this:

    Firewall on 02:00.1 (CPU 3). Packetblaster on 82:00.1.
    BENCH (BITTORRENT.pcap, 1 iters, 10 secs)
    bytes: 1,396,392,179 packets: 1,736,085 bps: 1,090,257,569.1429
    BENCH (rtmp_sample.cap, 1 iters, 10 secs)
    bytes: 490,248,129 packets: 1,510,824 bps: 381,488,072.1031
    

    (the bps numbers give the bits per second processed for the run)

    For Snabbwall, we hadn’t actually set a goal of processing packets at line-rate. And in any case, the performance of the system is limited by the processing speed of nDPI, which handles the actual deep-packet inspection work. But 1gbps is pretty far from line-rate, so I spent a few days on finding some low-hanging performance fruit.

    Profiling and exploring traces

    Most of the performance issues that I found were pinpointed by using the very helpful LuaJIT profiler and debugging tools. For debugging Snabb performance issues in particular, you can use an invocation like the following:

       ## dump verbose trace information
       $ ./snabb snsh -jv -p program.to.run -f <flags> args
    

    The -jv option provides verbose profiling output that shows an overview of the tracing process. In particular, it shows when the trace recorder has to abort.

    (see this page for details on LuaJIT’s control commands)

    In case you’re not familiar with how tracing JITs like LuaJIT work, the basic idea is that the VM will run in an interpreted mode by default, and record traces through the program as it executes instructions.

    (BTW, if you’re wondering what a trace is, it is a “a linear sequence of instructions with a single entry point and one or more exit points”)

    Once the VM finds a hot (i.e., frequently executed) trace that it is capable of compiling and is also worth compiling, the VM compiles the trace and runs the result.

    If the compiler can’t handle some aspect of the trace, however, it will abort and return to the interpreter. If this happens in hot code, you can get severely degraded performance.

    This was what was happening in Snabbwall. Here’s an excerpt from the trace info for a Snabbwall run:

    [TRACE  83 util.lua:27 -> 72]
    [TRACE --- util.lua:58 -- NYI: unsupported C type conversion at scanner.lua:202]
    [TRACE --- (78/1) scanner.lua:110 -- NYI: unsupported C type conversion at scanner.lua:202]
    [TRACE --- (78/1) scanner.lua:110 -- NYI: unsupported C type conversion at scanner.lua:202]
    [TRACE --- (78/1) scanner.lua:110 -- NYI: unsupported C type conversion at scanner.lua:202]
    [TRACE --- (78/1) scanner.lua:110 -- NYI: unsupported C type conversion at scanner.lua:202]
    [TRACE  84 (78/1) scanner.lua:110 -- fallback to interpreter]
    

    The source code documentation in the LuaJIT implementation explains what the notation means. What’s important for our purposes is that the lines without a trace number which have --- are showing trace aborts where the compiler gave up.

    As the comments note, trace aborts are not always a problem because the speed of the interpreter may be sufficient. Presumably more so if the code is not that warm.

    In our case, however, these trace aborts are happening in the middle of the packet scanning code in scanner.lua, which is part of the core loop of the firewall. That’s a bad sign.

    It turns out that the unsupported C type conversion error occurs in some cases when a cdata (the type for LuaJIT’s FFI objects) allocation is unsupported. You can see the code that’s throwing this error in the LuaJIT implementation here.

    The specific line in Snabbwall that is causing the trace to abort in cdata allocation is this one:

    key = flow_key_ipv4()
    

    which is allocating a new instance of an FFI data type. The call occurs in a function which is called repeatedly in the scanning loop, so it triggers the allocation issue each time. The data type it’s trying to allocate is this one:

    struct swall_flow_key_ipv4 {
       uint16_t vlan_id;
       uint8_t  __pad;
       uint8_t  ip_proto;
       uint8_t  lo_addr[4];
       uint8_t  hi_addr[4];
       uint16_t lo_port;
       uint16_t hi_port;
    } __attribute__((packed));
    

    Reading the LuaJIT internals a bit reveals that the issue is that an allocation of a struct which has an array field is unsupported in JIT-mode.

    To test this hypothesis, here’s a small Lua script that you can try out that just allocates a struct with a single array field:

    ffi = require("ffi")
    
    ffi.cdef[[
      struct foo {
        uint8_t a[4];
      };
    ]]
    
    for i=1, 1000 do
      local foo = ffi.new("struct foo")
    end
    

    Running this with the -jv option yields output like this:

    $ luajit -jv cdata-test.lua 
    [TRACE --- cdata-test.lua:9 -- NYI: unsupported C type conversion at cdata-test.lua:10]
    [TRACE --- cdata-test.lua:9 -- NYI: unsupported C type conversion at cdata-test.lua:10]
    [TRACE --- cdata-test.lua:9 -- NYI: unsupported C type conversion at cdata-test.lua:10]
    [TRACE --- cdata-test.lua:9 -- NYI: unsupported C type conversion at cdata-test.lua:10]
    [TRACE --- cdata-test.lua:9 -- NYI: unsupported C type conversion at cdata-test.lua:10]
    

    which is the same error we saw earlier from Snabbwall. For Snabbwall, we can work around this by allocating the swall_flow_key_ipv4 data structure just once in the module. On each loop iteration, we then re-write the fields of that single instance instead of allocating new ones.

    This might sound iffy, but as long as the lifetime of this flow key data structure is controlled, it should be OK. In particular, the documented API for Snabbwall doesn’t even expose this data structure, so we can ensure that an old reference is never read after the fields get overwritten.

    Using some dynasm

    Once I optimized the flow key allocation, I saw another trace abort in Snabbwall that was trickier to work around. The relevant trace info is this excerpt here:

    [TRACE  78 (71/3) scanner.lua:110 -> 72]
    [TRACE --- (77/1) util.lua:34 -- NYI: unsupported C function type at wrap.lua:64]
    [TRACE --- (77/1) util.lua:34 -- NYI: unsupported C function type at wrap.lua:64]
    [TRACE --- (77/1) util.lua:34 -- NYI: unsupported C function type at wrap.lua:64]
    [TRACE --- (77/1) util.lua:34 -- NYI: unsupported C function type at wrap.lua:64]
    [TRACE  79 link.lua:45 return]
    [TRACE  80 (77/1) util.lua:34 -- fallback to interpreter]
    

    For this case, it wasn’t necessary to go read the LuaJIT source code to figure out exactly what was going on (though I suspect the error comes from this line). The module wrap.lua in the nDPI FFI library uses two C functions with the following signatures:

    typedef struct { uint16_t master_protocol, protocol; } ndpi_protocol_t;
    
    ndpi_protocol_t ndpi_detection_process_packet (ndpi_detection_module_t *detection_module,
                                                   ndpi_flow_t *flow,
                                                   const uint8_t *packet,
                                                   unsigned short packetlen,
                                                   uint64_t current_tick,
                                                   ndpi_id_t *src,
                                                   ndpi_id_t *dst);
    
    ndpi_protocol_t ndpi_guess_undetected_protocol (ndpi_detection_module_t *detection_module,
                                                    uint8_t protocol,
                                                    uint32_t src_host, uint16_t src_port,
                                                    uint32_t dst_host, uint32_t dst_port);
    

    Note that both functions return a struct by value. If you read the FFI semantics page for LuaJIT closely, you’ll see that calls to “C functions with aggregates passed or returned by value” are described as having “suboptimal performance” because they’re not compiled.

    This is a little tricky to work around without writing some C code. At the C level, it’s easy to write a wrapper that returns the struct data by reference through a pointer argument, avoiding the struct return. Then wrap.lua can allocate its own protocol struct and pass it into the wrapper instead. That’s actually the first thing I did in order to test whether this approach improves the performance (spoiler: it did).
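    Such a wrapper is tiny; roughly this, reusing the declarations above (the wrapper name is made up):

    void process_packet_wrap (ndpi_detection_module_t *detection_module,
                              ndpi_flow_t *flow,
                              const uint8_t *packet,
                              unsigned short packetlen,
                              uint64_t current_tick,
                              ndpi_id_t *src,
                              ndpi_id_t *dst,
                              ndpi_protocol_t *result /* out parameter */)
    {
        /* The struct return never crosses the FFI boundary now. */
        *result = ndpi_detection_process_packet(detection_module, flow, packet,
                                                packetlen, current_tick, src, dst);
    }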

    But using a C wrapper complicates the build process for Snabbwall and introduces some issues with linking. It turns out that dynasm, which came up in a previous blog post, can help us out.

    Specifically, instead of using a C wrapper, we can just write what the C wrapper code would do in dynamically-generated assembly code. Generating the code once at run-time lets us avoid any build/linking issues and it’s just as fast.

    The downside is of course that it’s harder to write and debug. I’m not really a seasoned x64 assembly hacker, so it took me a while to grok the ABI docs in order to put it all together.

    Here’s the wrapper code for the ndpi_detection_process_packet function:

    local function gen(Dst)
       -- pass the first stack argument onto the original function
       | mov rax, [rsp+8]
       | push rax
    
       -- call the original function, do stack cleanup
       | mov64 rax, orig_f
       | call rax
       | add rsp, 8
    
       -- at this point, rax and rdx have struct
       -- fields in them, which we want to write into
       -- the struct pointer (2nd stack arg)
       | mov rcx, [rsp+16]
       | mov [rcx], rax
       | mov [rcx+4], rdx
    
       | ret
    end
    

    The code is basically just doing some function call plumbing following the x64 SystemV ABI.

    What’s going on is that the original C function has 7 arguments and our assembly wrapper is supposed to have 8 (an additional pointer that we’ll write through). On x64, six integer arguments are passed through registers so the remaining two get passed on the stack.

    That means we don’t need to modify any registers in this wrapper (since we will immediately call the original function), but we do need to re-push the first argument onto the stack to prepare for the call.

    We can then call the function, and then increment the stack pointer to clean up the stack (the ABI also requires the caller to clean up the stack).

    The rest of the code just writes the two returned struct fields from the registers through the pointer on the stack to the struct contained in Lua-land.

    Speedup

    With the changes I described in the post, the performance of Snabbwall on the benchmark improved quite a bit. Here are the numbers after implementing the three optimizations mentioned above:

    Firewall on 02:00.1 (CPU 3). Packetblaster on 82:00.1.
    BENCH (BITTORRENT.pcap, 1 iters, 10 secs)
    bytes: 5,827,504,913 packets: 7,247,814 bps: 4,539,941,765.8323
    BENCH (rtmp_sample.cap, 1 iters, 10 secs)
    bytes: 6,099,115,315 packets: 18,793,998 bps: 4,745,353,670.4701
    

    We’re still not at line-rate, but at this point the profiler attributes a large portion of the time (23% + 20%) to the two C functions that do the actual packet inspection work:

    24%  ipv4_addr_cmp
    23%  process_packet
    20%  guess_undetected_protocol
     9%  extract_packet_info
     4%  scan_packet
     3%  hash
    

    (there might be some optimization potential in ipv4_addr_cmp, which is in the Lua code)

    In doing this optimization work, I was happy to find that the LuaJIT performance tools were very helpful, though I do think there might be an opportunity to put a more solution-oriented interface on them. For example, an optimization coach for LuaJIT could be interesting and useful.

    by Asumu Takikawa at February 21, 2017 02:22 PM

    February 10, 2017

    Carlos García Campos

    Accelerated compositing in WebKitGTK+ 2.14.4

    The WebKitGTK+ 2.14 release was very exciting for us: it finally introduced the threaded compositor to drastically improve the accelerated compositing performance. However, the threaded compositor required accelerated compositing to be always enabled, even for non-accelerated content. Unfortunately, this caused different kinds of problems for several people, and proved that we are not ready to render everything with OpenGL yet. The most relevant problems reported were:

    • Memory usage increase: OpenGL contexts use a lot of memory, and since we have the compositor in the web process, we have at least one OpenGL context in every web process. The threaded compositor uses the coordinated graphics model, which also requires more memory than the simple mode we previously used. People who use a lot of tabs in Epiphany quickly noticed that a lot more memory was required.
    • Startup and resize slowness: The threaded compositor makes everything smooth and performs quite well, except at startup or when the view is resized. At startup we need to create the OpenGL context, which is quite slow by itself, but we also need to create the compositing thread, so things are expected to be slower. Resizing the viewport is the only threaded compositor task that needs to be done synchronously, to ensure that everything is in sync: the web view in the UI process, the OpenGL viewport and the backing store surface. This means we need to wait until the threaded compositor has updated to the new size.
    • Rendering issues: some people reported rendering artifacts or even nothing rendered at all. In most cases these were not issues in WebKit itself, but in the graphics driver or library. It’s quite difficult for a general-purpose web engine to support and deal with all possible GPUs, drivers and libraries. Chromium has a huge list of hardware exceptions to disable some OpenGL extensions or even hardware acceleration entirely.

    Because of these issues people started to use different workarounds. Some people, and even applications like Evolution, started to use the WEBKIT_DISABLE_COMPOSITING_MODE environment variable, which was never meant for users but for developers. Other people just started to build their own WebKitGTK+ with the threaded compositor disabled. We didn’t remove the build option because we anticipated that some people using old hardware might have problems. However, it’s a code path that is not tested at all and will surely be removed for 2.18.

    All these issues are not really specific to the threaded compositor, but to the fact that it forced accelerated compositing mode to be always enabled, using OpenGL unconditionally. It looked like a good idea: entering/leaving accelerated compositing mode was a source of bugs in the past, and all other WebKit ports force accelerated compositing mode too. Other ports use UI-side compositing, though, or target very specific hardware, so the memory problems and the driver issues are not a problem for them. The imposition to force accelerated compositing mode came from the switch to coordinated graphics because, as I said, other ports using coordinated graphics have accelerated compositing mode always enabled, so they didn’t care about the case of it being disabled.

    There are a lot of long-term things we can do to improve all these issues, like moving the compositor to the UI (or a dedicated GPU) process to have a single GL context, implementing tab suspension, etc., but we really wanted to fix, or at least improve, the situation for 2.14 users. Switching back to using accelerated compositing mode on demand is something we could do in the stable branch, and it improves things, at least to a level comparable to what we had before 2.14, but with the threaded compositor. Making it happen was a matter of fixing a lot of bugs, and the result is this 2.14.4 release. Of course, this will be the default in 2.16 too, where we have also added API to set a hardware acceleration policy.

    We recommend all 2.14 users upgrade to 2.14.4 and stop using the WEBKIT_DISABLE_COMPOSITING_MODE environment variable or building with the threaded compositor disabled. The new API in 2.16 will allow setting a policy for every web view, so if you still need to disable or force hardware acceleration, please use the API instead of WEBKIT_DISABLE_COMPOSITING_MODE and WEBKIT_FORCE_COMPOSITING_MODE.
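    For instance, forcing hardware acceleration for a particular web view would look something like this with the new API (a sketch based on the 2.16 WebKitSettings additions):

    #include <webkit2/webkit2.h>

    static void
    force_hardware_acceleration (WebKitWebView *web_view)
    {
        WebKitSettings *settings = webkit_web_view_get_settings (web_view);
        /* Other policies: ON_DEMAND (the on-demand behaviour described
         * above) and NEVER. */
        webkit_settings_set_hardware_acceleration_policy (settings,
            WEBKIT_HARDWARE_ACCELERATION_POLICY_ALWAYS);
    }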

    We really hope this new release and the upcoming 2.16 will work much better for everybody.

    by carlos garcia campos at February 10, 2017 05:18 PM

    February 08, 2017

    Alberto Garcia

    QEMU and the qcow2 metadata checks

    When choosing a disk image format for your virtual machine, one of the factors to take into consideration is its I/O performance. In this post I’ll talk a bit about the internals of qcow2 and about one of the aspects that can affect its performance under QEMU: its consistency checks.

    As you probably know, qcow2 is QEMU’s native file format. The first thing that I’d like to highlight is that this format is perfectly fine in most cases and its I/O performance is comparable to that of a raw file. When it isn’t, chances are that this is due to an insufficiently large L2 cache. In one of my previous blog posts I wrote about the qcow2 L2 cache and how to tune it, so if your virtual disk is too slow, you should go there first.

    I also recommend Max Reitz and Kevin Wolf’s “qcow2: why (not)?” talk from KVM Forum 2015, where they talk about a lot of internal details and show some performance tests.

    qcow2 clusters: data and metadata

    A qcow2 file is organized into units of constant size called clusters. The cluster size defaults to 64KB, but a different value can be set when creating a new image:

    qemu-img create -f qcow2 -o cluster_size=128K hd.qcow2 4G
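
    You can verify the cluster size of an existing image with qemu-img info, which prints a cluster_size field (in bytes) among the rest of the image details:

    qemu-img info hd.qcow2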

    Clusters can contain either data or metadata. A qcow2 file grows dynamically and only allocates space when it is actually needed, so apart from the header there’s no fixed location for any of the data and metadata clusters: they can appear mixed anywhere in the file.

    Here’s an example of what it looks like internally:

    In this example we can see the most important types of clusters that a qcow2 file can have:

    • Header: this one contains basic information such as the virtual size of the image, the version number, and pointers to where the rest of the metadata is located, among other things.
    • Data clusters: the data that the virtual machine sees.
    • L1 and L2 tables: a two-level structure that maps the virtual disk that the guest can see to the actual location of the data clusters in the qcow2 file.
    • Refcount table and blocks: a two-level structure with a reference count for each data cluster. Internal snapshots use this: a cluster with a reference count >= 2 means that it’s used by other snapshots, and therefore any modifications require a copy-on-write operation.

    Metadata overlap checks

    In order to detect corruption when writing to qcow2 images, QEMU (since v1.7) performs several sanity checks. They verify that QEMU does not try to overwrite sections of the file that are already being used for metadata. If this happens, the image is marked as corrupted and further access is prevented.

    Although in most cases these checks are innocuous, in certain scenarios they can have a negative impact on disk write performance. This depends a lot on the particular case, and I want to insist that in most scenarios it doesn’t have any effect. When it does, the general rule is that you’re more likely to notice it if the storage backend is very fast or if the qcow2 image is very large.

    In these cases, and if I/O performance is critical for you, you might want to consider tweaking the images a bit or disabling some of these checks, so let’s take a look at them. There are currently eight different checks. They’re named after the metadata sections that they check, and can be divided into the following categories:

    1. Checks that run in constant time. These are equally fast for all kinds of images and I don’t think they’re worth disabling.
      • main-header
      • active-l1
      • refcount-table
      • snapshot-table
    2. Checks that run in variable time but don’t need to read anything from disk.
      • refcount-block
      • active-l2
      • inactive-l1
    3. Checks that need to read data from disk. There is just one check here and it’s only needed if there are internal snapshots.
      • inactive-l2

    By default all checks are enabled except for the last one (inactive-l2), because it needs to read data from disk.

    Disabling the overlap checks

    The checks can be disabled or enabled individually from the command line using the following syntax:

    -drive file=hd.qcow2,overlap-check.inactive-l2=on
    -drive file=hd.qcow2,overlap-check.snapshot-table=off

    It’s also possible to select the group of checks that you want to enable using the following syntax:

    -drive file=hd.qcow2,overlap-check.template=none
    -drive file=hd.qcow2,overlap-check.template=constant
    -drive file=hd.qcow2,overlap-check.template=cached
    -drive file=hd.qcow2,overlap-check.template=all

    Here, none means that no checks are enabled, constant enables all checks from group 1, cached enables the checks from groups 1 and 2, and all enables all of them.
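
    The template and the individual settings can also be combined, with the individual flags overriding the template. For instance, something like the following should enable the constant-time checks plus refcount-block (this follows from the option syntax above, but I’d suggest verifying the combination against your QEMU version):

    -drive file=hd.qcow2,overlap-check.template=constant,overlap-check.refcount-block=on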

    As I explained in the previous section, if you’re worried about I/O performance then the checks that are probably worth evaluating are refcount-block, active-l2 and inactive-l1. I’m not counting inactive-l2 because it’s off by default. Let’s look at the other three:

    • inactive-l1: The cost of this check varies with the number of internal snapshots in the qcow2 image. However, its performance impact is likely to be negligible in all cases, so I don’t think it’s worth bothering with.
    • active-l2: This check depends on the virtual size of the image, and on the percentage that has already been allocated. It might have some impact if the image is very large (several hundred GBs or more). In that case, one way to deal with it is to create an image with a larger cluster size (see the example after this list). This also has the nice side effect of reducing the amount of memory needed for the L2 cache.
    • refcount-block: This check depends on the actual size of the qcow2 file and is independent of its virtual size. It is relatively expensive even for small images, so if you notice performance problems, chances are that they are due to this one. The good news is that we have been working on optimizing it, so if it’s slowing down your VMs the problem might go away completely in QEMU 2.9.
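
    As an example of the active-l2 mitigation mentioned above, this creates a large image with 2MB clusters, the largest cluster size that qcow2 allows (the 500G virtual size is just an illustrative value):

    qemu-img create -f qcow2 -o cluster_size=2M hd.qcow2 500G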

    Conclusion

    The qcow2 consistency checks are useful to detect data corruption, but they can affect write performance.

    If you’re unsure and you want to check it quickly, open an image with overlap-check.template=none and see for yourself, but remember again that this will only affect write operations. To obtain more reliable results you should also open the image with cache=none in order to perform direct I/O and bypass the page cache. I’ve seen performance increases of 50% and more, but whether you’ll see them depends a lot on your setup. In many cases you won’t notice any difference.
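
    Putting that advice together, a quick benchmark configuration could look like this, with hd.qcow2 standing in for your test image:

    -drive file=hd.qcow2,overlap-check.template=none,cache=none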

    I hope this post was useful to learn a bit more about the qcow2 format. There are other things that can help QEMU perform better, and I’ll probably come back to them in future posts, so stay tuned!

    Acknowledgments

    My work in QEMU is sponsored by Outscale and has been made possible by Igalia and the help of the rest of the QEMU development team.

    by berto at February 08, 2017 08:52 AM

    Michael Catanzaro

    An Update on WebKit Security Updates

    One year ago, I wrote a blog post about WebKit security updates that attracted a fair amount of attention at the time. For a full understanding of the situation, you really have to read the whole thing, but the most important point was that, while WebKitGTK+ — one of the two WebKit ports present in Linux distributions — was regularly releasing upstream security updates, most Linux distributions were ignoring the updates, leaving users vulnerable to various security bugs, mainly of the remote code execution variety. At the time of that blog post, only Arch Linux and Fedora were regularly releasing WebKitGTK+ updates, and Fedora had only very recently begun doing so comprehensively.

    Progress report!

    So how have things changed in the past year? The best way to see this is to look at the versions of WebKitGTK+ in currently-supported distributions. The latest version of WebKitGTK+ is 2.14.3, which fixes 13 known security issues present in 2.14.2. Do users of the most popular Linux operating systems have the fixes?

    • Fedora users are good. Both Fedora 24 and Fedora 25 have the latest version, 2.14.3.
    • If you use Arch, you know you always have the latest stuff.
    • Ubuntu users rejoice: 2.14.3 updates have been released to users of both Ubuntu 16.04 and 16.10. I’m very pleased that Ubuntu has decided to take my advice and make an exception to its usual stable release update policy to ensure its users have a secure version of WebKit. I can’t give Ubuntu an A grade here because the updates tend to lag behind upstream by several months, but slow updates are much better than no updates, so this is undoubtedly a huge improvement. (Anyway, it’s hardly a bad idea to be cautious when releasing a big update with high regression potential, as is unfortunately the case with even stable WebKit updates.) But if you use the still-supported Ubuntu 14.04 or 12.04, be aware that these versions of Ubuntu cannot ever update WebKit, as it would require a switch to WebKit2, a major API change.
    • Debian does not update WebKit as a matter of policy. The latest release, Debian 8.7, is still shipping WebKitGTK+ 2.6.2. I count 184 known vulnerabilities affecting it, though that’s an overcount as we did not exclude some Mac-specific security issues from the 2015 security advisories. (Shipping ancient WebKit is not just a security problem, but a user experience problem too. Actually attempting to browse the web with WebKitGTK+ 2.6.2 is quite painful due to bugs that were fixed years ago, so please don’t try to pretend it’s “stable.”) Note that a secure version of WebKitGTK+ is available for those in the know via the backports repository, but this does no good for users who trust Debian to provide them with security updates by default without requiring difficult configuration. Debian testing users also currently have the latest 2.14.3, but you will need to switch to Debian unstable to get security updates for the foreseeable future, as testing is about to freeze.
    • For openSUSE users, only Tumbleweed has the latest version of WebKit. The current stable release, Leap 42.2, ships with WebKitGTK+ 2.12.5, which is coincidentally affected by exactly 42 known vulnerabilities. (I swear I am not making this up.) The previous stable release, Leap 42.1, originally shipped with WebKitGTK+ 2.8.5 and was later updated to 2.10.7, but never past that. It is affected by 65 known vulnerabilities. (Note: I have to disclose that I told openSUSE I’d try to help out with that update, but never actually did. Sorry!) openSUSE has it a bit harder than other distros because it has decided to use SUSE Linux Enterprise as the source for its GCC package, meaning it’s stuck on GCC 4.8 for the foreseeable future, while WebKit requires GCC 4.9. Still, this is only a build-time requirement; it’s not as if it would be impossible to build with Clang instead, or a custom version of GCC. I would expect WebKit updates to be provided to both currently-supported Leap releases.
    • Gentoo has the latest version of WebKitGTK+, but only in testing. The latest version marked stable is 2.12.5, so this is a serious problem if you’re following Gentoo’s stable channel.
    • Mageia has been updating WebKit and released a couple security advisories for Mageia 5, but it seems to be stuck on 2.12.4, which is disappointing, especially since 2.12.5 is a fairly small update. The problem here does not seem to be lack of upstream release monitoring, but rather lack of manpower to prepare the updates, which is a typical problem for small distros.
    • The enterprise distros from Red Hat, Oracle, and SUSE do not provide any WebKit security updates. They suffer from the same problem as Ubuntu’s old LTS releases: the WebKit2 API change makes updating impossible. See my previous blog post if you want to learn more about that. (SUSE actually does have WebKitGTK+ 2.12.5 as well, but… yeah, 42.)

    So results are mixed. Some distros are clearly doing well, others are struggling, and Debian is Debian. Still, the situation on the whole seems to be much better than it was one year ago. Most importantly, Ubuntu’s decision to start updating WebKitGTK+ means the vast majority of Linux users are now receiving updates. Thanks Ubuntu!

    To arrive at the above vulnerability totals, I just counted up the CVEs listed in WebKitGTK+ Security Advisories, so please do double-check my counting if you want. The upstream security advisories themselves are worth mentioning, as we have only been releasing these for two years now, and the first year was pretty rough: we lost our original security contact at Apple shortly after releasing the first advisory, so there were only two advisories in all of 2015, and the second one was huge as a result. But 2016 seems to have gone decently well. WebKitGTK+ has normally been releasing most security fixes even before Apple does, though the actual advisories and a few remaining fixes normally lag behind Apple by roughly a month or so. Big thanks to my colleagues at Igalia who handle this work.

    Challenges ahead

    There are still some pretty big problems remaining!

    First of all, the distributions that still aren’t releasing regular WebKit updates should start doing so.

    Next, we have to do something about QtWebKit, the other big WebKit port for Linux, which stopped receiving security updates in 2013 after the Qt developers decided to abandon the project. The good news is that Konstantin Tokarev has been working on a QtWebKit fork based on WebKitGTK+ 2.12, which is almost (but not quite yet) ready for use in distributions. I hope we are able to switch to use his project as the new upstream for QtWebKit in Fedora 26, and I’d encourage other distros to follow along. WebKitGTK+ 2.12 does still suffer from those 42 vulnerabilities, but this will be a big improvement nevertheless and an important stepping stone for a subsequent release based on the latest version of WebKitGTK+. (Yes, QtWebKit will be a downstream of WebKitGTK+. No, it will not use GTK+. It will work out fine!)

    It’s also time to get rid of the old WebKitGTK+ 2.4 (“WebKit1”), which all distributions currently parallel-install alongside modern WebKitGTK+ (“WebKit2”). It’s very unfortunate that a large number of applications still depend on WebKitGTK+ 2.4 — I count 41 such packages in Fedora — but this old version of WebKit is affected by over 200 known vulnerabilities and really has to go sooner rather than later. We’ve agreed to remove WebKitGTK+ 2.4 and its dependencies from Fedora rawhide right after Fedora 26 is branched next month, so they will no longer be present in Fedora 27 (targeted for release in November). That’s bad for you if you use any of the affected applications, but fortunately most of the remaining unported applications are not very important or well-known; the most notable ones that are unlikely to be ported in time are GnuCash (which won’t make our deadline) and Empathy (which is ported in git master, but is not currently in a releasable state; help wanted!). I encourage other distributions to follow our lead here in setting a deadline for removal. The alternative is to leave WebKitGTK+ 2.4 around until no more applications are using it. Distros that opt for this approach should be prepared to be stuck with it for the next 10 years or so, as the remaining applications are realistically not likely to be ported so long as zombie WebKitGTK+ 2.4 remains available.

    These are surmountable problems, but they require action by downstream distributions. No doubt some distributions will be more successful than others, but hopefully many distributions will be able to fix these problems in 2017. We shall see!

    by Michael Catanzaro at February 08, 2017 06:32 AM

    On Epiphany Security Updates and Stable Branches

    One of the advantages of maintaining a web browser based on WebKit, like Epiphany, is that the vast majority of complexity is contained within WebKit. Epiphany itself doesn’t have any code for HTML parsing or rendering, multimedia playback, or JavaScript execution, or anything else that’s actually related to displaying web pages: all of the hard stuff is handled by WebKit. That means almost all of the security problems exist in WebKit’s code and not Epiphany’s code. While WebKit has been affected by over 200 CVEs in the past two years, and those issues do affect Epiphany, I believe nobody has reported a security issue in Epiphany’s code during that time. I’m sure a large part of that is simply because only the bad guys are looking, but the attack surface really is much, much smaller than that of WebKit. To my knowledge, the last time we fixed a security issue that affected a stable version of Epiphany was 2014.

    Well, that streak has unfortunately ended; you need to make sure to update to Epiphany 3.22.6, 3.20.7, or 3.18.11 as soon as possible (or Epiphany 3.23.5 if you’re testing our unstable series). If your distribution is not already preparing an update, insist that it do so. I’m not planning to discuss the embarrassing issue itself here — you can check the bug report if you’re interested — but rather why I made new releases on three different branches. That’s quite unlike how we handle WebKitGTK+ updates! Distributions must always update to the very latest version of WebKitGTK+, as it is not practical to backport dozens of WebKit security fixes to older versions of WebKit. This is rarely a problem, because WebKitGTK+ has a strict policy to dictate when it’s acceptable to require new versions of runtime dependencies, designed to ensure roughly three years of WebKit updates without the need to upgrade any of its dependencies. But new major versions of Epiphany are usually incompatible with older releases of system libraries like GTK+, so it’s not practical or expected for distributions to update to new major versions.

    My current working policy is to support three stable branches at once: the latest stable release (currently Epiphany 3.22), the previous stable release (currently Epiphany 3.20), and an LTS branch defined by whatever’s currently in Ubuntu LTS and elementary OS (currently Epiphany 3.18). It was nice of elementary OS to make Epiphany its default web browser, and I would hardly want to make it difficult for its users to receive updates.

    Three branches can be annoying at times, and it’s a lot more than is typical for a GNOME application, but a web browser is not a typical application. For better or for worse, the majority of our users are going to be stuck on Epiphany 3.18 for a long time, and it would be a shame to leave them completely without updates. That said, the 3.18 and 3.20 branches are very stable and only getting bugfixes and occasional releases for the most serious issues. In contrast, I try to backport all significant bugfixes to the 3.22 branch and do a new release every month or thereabouts.

    So that’s why I just released another update for Epiphany 3.18, which was originally released in September 2015. Compare this to the long-term support policies of Chrome (which supports only the latest version of the browser, and only for six weeks) or Firefox (which provides nine months of support for an ESR release), and I think we compare quite favorably. (A stable WebKit series like 2.14 is only supported for six months, but that’s comparable to Firefox.) Not bad?

    by Michael Catanzaro at February 08, 2017 05:56 AM