Planet Igalia

November 17, 2021

Fernando Jiménez

opentok-rs: easy WebRTC with Rust

OpenTok is Vonage’s (formerly TokBox’s) PaaS (Platform as a Service) that enables developers to easily build custom video experiences within any mobile, web, or desktop application, on top of a WebRTC stack.

One of the customer projects that I am working on at Igalia requires publishing streams to, and subscribing to streams from, OpenTok sessions. The main application of this project needs to run on a Linux box, and Vonage already provides a nice OpenTok C++ SDK for Linux. However, the entire application for this customer project is written in Rust, so together with my colleague Philippe Normand, we decided to write Rust bindings for the OpenTok C++ SDK.

opentok-rs contains the result of this work. There you can find the FFI bindings, mostly generated with bindgen, and the safe wrapper API.

We recently published a first version on crates.io.

There is really not much documentation yet, apart from the rustdoc published here, which is mostly a copy & paste of the C++ documentation. But there are a few examples that demonstrate how easily and quickly you can write your own custom video experiences.

Basic video chat application

With opentok-rs you can write a very basic video chat application like this one in only a few dozen lines of code.

If you are not familiar with the basic concepts of OpenTok, I recommend reading the official documentation at Vonage’s developer site.

In a nutshell, all OpenTok activity occurs within a session, which is somewhat like a “room” where clients interact with one another in real-time. Each participant in a session can publish streams to the session or subscribe to other participants’ streams.

To connect to an OpenTok session you need its identifier and a token. For testing purposes, you can obtain a session ID and a token from the project page in your Vonage Video API account. However, in a production application, you will need to dynamically obtain the session ID and token from a web service that uses one of the Vonage Video API server SDKs.

For a basic chat application you need to create a Publisher instance, to publish your video stream, and a Subscriber instance, likely in a different thread, to subscribe to the rest of the streams in the session. Each entity may connect to the session separately.

Publisher

The OpenTok SDK is heavily based on callbacks. Starting with the session, you need to provide a SessionCallbacks instance to the Session constructor. For the sake of simplicity, we only care about the on_connected and on_error callbacks in this case.

You also need to provide the session credentials. This is the Vonage API key, the session ID and its token.

let session_callbacks = SessionCallbacks::builder()
    .on_connected(move |session| {
        // At this point, we can start publishing
        session.publish(&*publisher.lock().unwrap())
    })
    .on_error(|_, error, _| {
        eprintln!("on_error {:?}", error);
    })
    .build();
let session = Session::new(
    &credentials.api_key,
    &credentials.session_id,
    session_callbacks,
)?;
session.connect(&credentials.token)?;

The Publisher constructor gets a PublisherCallbacks instance and optionally a VideoCapturer instance. If you do not provide a custom video capturer, the default one capturing audio and video from your local mic and webcam will be used.

let publisher_callbacks = PublisherCallbacks::builder()
    .on_stream_created(move |_, stream| {
        println!("Publishing stream with ID {}", stream.id());
    })
    .on_error(|_, error, _| {
        eprintln!("on_error {:?}", error);
    })
    .build();
let publisher = Arc::new(Mutex::new(Publisher::new(
    "publisher" /* Publisher name */,
    None /* Use WebRTC's video capturer */,
    publisher_callbacks,
)));

The basic video chat example demonstrates how to add a custom video capturer. In this case, it uses a GStreamer videotestsrc element to produce test video data. You can use whatever mechanism you prefer to produce video, though.

Subscriber

The subscriber part is somewhat similar. It needs to connect to the session, providing the credentials and the session callbacks. In this case, the callback that we care about the most is the on_stream_received callback. Within this callback, you can set the stream on your Subscriber instance and instruct the session to use it.

let session_callbacks = SessionCallbacks::builder()
    .on_stream_received(move |session, stream| {
        if subscriber.set_stream(stream).is_ok() {
            if let Err(e) = session.subscribe(&subscriber) {
                eprintln!("Could not subscribe to session {:?}", e);
            }
        }
    })
    .on_error(|_, error, _| {
        eprintln!("on_error {:?}", error);
    })
    .build();

The Subscriber gets the video frames through repeated calls to the on_render_frame callback.

let subscriber_callbacks = SubscriberCallbacks::builder()
    .on_render_frame(move |_, frame| {
        let width = frame.get_width().unwrap() as u32;
        let height = frame.get_height().unwrap() as u32;

        let get_plane_size = |format, width: u32, height: u32| match format {
            FramePlane::Y => width * height,
            FramePlane::U | FramePlane::V => {
                let pw = (width + 1) >> 1;
                let ph = (height + 1) >> 1;
                pw * ph
            }
            _ => unimplemented!(),
        };

        let offset = [
            0,
            get_plane_size(FramePlane::Y, width, height) as usize,
            get_plane_size(FramePlane::Y, width, height) as usize
                + get_plane_size(FramePlane::U, width, height) as usize,
        ];

        let stride = [
            frame.get_plane_stride(FramePlane::Y).unwrap(),
            frame.get_plane_stride(FramePlane::U).unwrap(),
            frame.get_plane_stride(FramePlane::V).unwrap(),
        ];
        renderer_
            .lock()
            .unwrap()
            .as_ref()
            .unwrap()
            .push_video_buffer(
                frame.get_buffer().unwrap(),
                frame.get_format().unwrap(),
                width,
                height,
                &offset,
                &stride,
            );
    })
    .on_error(|_, error, _| {
        eprintln!("on_error {:?}", error);
    })
    .build();

The snippet above uses a video renderer based on the GStreamer autovideosink element. But just like with the custom video capturer, you can use whatever you like to render your video frames.

Audio

The OpenTok SDK handles audio and video in different ways. While video streams are independently tied to each publisher and each subscriber in a session, audio is tied to a global audio device that is shared by all publishers and subscribers.

This design imposes two hard limitations:

  • There is no way to obtain the independent audio stream from each participant. OpenTok provides a single audio stream which is a mix of every participant’s audio, so there is no way to do things like speech-to-text, moderation or any kind of audio processing per participant, unless you create a somewhat complex workaround where you run each audio subscriber in its own dedicated process.

  • It is not possible to run two instances of the OpenTok SDK in the same process. A second instance of the OpenTok SDK overwrites the audio callbacks set from the previous instance.

Vonage claimed to be working on improving this design.

There is more

Everything in opentok-rs is meant to run on client applications, but as mentioned before, Vonage also provides server side OpenTok SDKs.

opentok-server-rs wraps a minimal subset of the OpenTok REST API. It lets developers securely create sessions and generate tokens for their OpenTok applications.

I started it only to be able to write automated tests for opentok-rs, so the functionality is limited and will hopefully be extended soon.

Acknowledgements

November 17, 2021 12:00 AM

November 11, 2021

Fernando Jiménez

gst-dots: live view of GStreamer pipelines

These days I spend a lot of time dealing with large dynamic GStreamer pipelines. More often than not, I find myself stuck in problems that take some careful analysis of the endless stream of debug logs that GStreamer produces. In these situations, taking a look at what the application's pipelines look like really helps me with the debugging process. To get this information, GStreamer has the capability of outputting graph files that describe the topology of your pipelines. The information that you get is really well presented, but the process of getting it can be a bit cumbersome when you have to do it over and over. The output files are .dot files that require programs like GraphViz to get a displayable version of the graph. Many GStreamer developers end up writing scripts or creating their own tools to ease this process. My version of this kind of tool is gst-dots, an extremely simple NodeJS server that watches for GStreamer .dot files in the path defined by the GST_DEBUG_DUMP_DOT_DIR environment variable, converts them into SVG images and displays them in a browser with live reload.
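To give a sense of what gst-dots automates, here is a minimal Node.js sketch of the watch-and-convert idea. This is not the actual gst-dots code, and the /tmp/gst-dots fallback path is just an assumption for the example; the real tool also serves the resulting SVGs to a browser with live reload.

const fs = require('fs');
const path = require('path');
const { execFile } = require('child_process');

// Watch the directory where GStreamer dumps its pipeline .dot files.
const dotDir = process.env.GST_DEBUG_DUMP_DOT_DIR || '/tmp/gst-dots';

fs.watch(dotDir, (eventType, filename) => {
  if (!filename || !filename.endsWith('.dot')) return;
  const src = path.join(dotDir, filename);
  const dest = src.replace(/\.dot$/, '.svg');
  // Convert the pipeline graph to an SVG image with GraphViz.
  execFile('dot', ['-Tsvg', src, '-o', dest], (err) => {
    if (err) console.error(`Failed to convert ${src}:`, err);
    else console.log(`Rendered ${dest}`);
  });
});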

This is how it looks in action.

November 11, 2021 12:00 AM

November 01, 2021

Qiuyi Zhang (Joyee)

Building V8 on an M1 MacBook

I’ve recently got an M1 MacBook and played around with it a bit. It seems many open source projects still haven’t added MacOS with ARM64

November 01, 2021 01:50 PM

My 2019

It’s that time of the year again! I did not manage to write a recap about my 2018, so I’ll include some reflection about that year in

November 01, 2021 01:50 PM

Uncaught exceptions in Node.js

In this post, I’ll jot down some notes that I took when refactoring the uncaught exception handling routines in Node.js. Hopefully it

November 01, 2021 01:50 PM

On deps/v8 in Node.js

I recently ran into a V8 test failure that only showed up in the V8 fork of Node.js but not in the upstream. Here I’ll write down my

November 01, 2021 01:50 PM

Tips and Tricks for Node.js Core Development and Debugging

I thought about writing some guides on this topic in the nodejs/node repo, but it’s easier to throw whatever tricks I personally use on

November 01, 2021 01:50 PM

My 2017

I decided to write a recap of my 2017 because looking back, it was a very important year to me.

November 01, 2021 01:50 PM

New Blog

I’ve been thinking about starting a new blog for a while now. So here it is.

Not sure if I am going to write about tech here.

November 01, 2021 01:50 PM

October 15, 2021

Fernando Jiménez

2021 WebKit Contributors Meeting talk - WPE Android

A couple of weeks ago I attended my first WebKit Contributors Meeting and I presented this talk about WPE WebKit for Android.

October 15, 2021 12:00 AM

September 30, 2021

Brian Kardell

Making the whole web better, one canvas at a time.


One can have an entire career on the web and never write a single canvas.getContext('2d'), so "Why should I care about this new OffscreenCanvas thing?" is a decent question for many. In this post, I'll tell you why I'm certain that it will matter to you, in real ways.

How relevant is canvas?

As a user, you know from lived experience that <video> on the web is pretty popular. It isn't remotely niche. However, many developers I talk to think that <canvas> is. The sentiment seems to be something like...

I can see how it is useful if you want to make a photo editor or something, but... It's not really a thing I've ever added to a site or think I experience much... It's kind of niche, right?

What's interesting though, is that in reality, <canvas>'s prevalence in the HTTPArchive isn't so far behind <video> (63rd and 70th most popular elements, respectively). It's considerably more widely used than many other standard HTML elements.

Amazing, right? I mean, how could that even be?!

The short answer is, it's just harder to recognize. A great example of this is maps. As a user, you recognize maps. You know they are common and popular. But what you perhaps don't recognize is that they are drawn on a canvas.

As a developer, there is a fair chance you have included a <canvas> somewhere without even realizing it. But again, since it is harder to recognize "ah, this is a canvas", we don't identify it the way we do video. Think about it: we include videos via an abstraction all the time - not by directly including a <video> but maybe through a custom element or an iframe. Still, as a user you clearly identify it, so in your mind, as a developer, you count it.

If canvas is niche, it is only so in the sense of who has to worry about those details. So let's talk about why you'll care, even if you don't directly use the API...

The trouble with canvas...

Unfortunately, <canvas> itself has a fundamental flaw. Let me show you...

Canvas (old)

This video was made by Andreas Hocevar using a common mapping library, on some fairly powerful hardware. You'll note how janky it gets - what you also can't tell from the video is that user interactions are temporarily interrupted on and off as rendering tries to keep up. The interface feels a little broken and frustrating.

For whom the bell tolls

As bad as the video above is, as is the case with all performance-related things, it's tempting to kind of shrug it off and think "Well, I don't know... it's pretty usable, still - and hardware will catch up".

For all of the various appeals that have been made over the years to get us to care more about performance ("What about the fact that the majority of people use hardware less powerful than yours?" or "What about the fact that you're losing potential customers and users?", etc.), we haven't moved that ball as meaningfully as we'd like. But I'd like to add one more to the list of things to think about here...

Ask not for whom the performance bell tolls, because increasingly: It tolls for you.

While we've been busy talking about phones and computers, something interesting happened: billions of new devices using embedded web rendering engines appeared. TVs, game consoles, GPS systems, audio systems, infotainment systems in cars, planes and trains, kiosks, point of sale, digital signage, refrigerators, cooking appliances, e-readers, etc. They're all using web engines.

Interestingly, if you own a high-end computer or phone, you're also more likely to encounter even more of these as a user.

Embedded systems are generally far less powerful than the general-purpose devices we usually talk about, even when they're brand new -- and their replacement rate is way slower.

So, while that moderately uncomfortable jank on your new iPhone still seems pretty bearable, it might translate to just a few (or even 1) FPS on your embedded device. Zoiks!

In other words, increasingly, that person that all of the other talks ask you to consider and empathize with... is you.

Enter: OffscreenCanvas

OffscreenCanvas is a solution to this. Its API surface is really small: it has a constructor, and a getContext('2d') method. Unlike the canvas element itself, however, it is neatly decoupled from the DOM. It can be used in a worker - in fact, OffscreenCanvas objects are transferable - you can pass them between windows and workers via postMessage. The existing DOM <canvas> API itself adds a .transferControlToOffscreen() method which (explicitly) gives you an OffscreenCanvas back, and that object is then in charge of painting into the element's rectangle.
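To make that a bit more concrete, here is a minimal sketch of handing a canvas off to a worker (the render-worker.js file name is made up for the example):

// main.js: hand rendering of an on-page canvas over to a worker.
const canvas = document.querySelector('canvas');
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker('render-worker.js');
// OffscreenCanvas is transferable, so it moves (rather than copies) to the worker.
worker.postMessage({ canvas: offscreen }, [offscreen]);

// render-worker.js: draw without ever touching the DOM or the main thread.
onmessage = (event) => {
  const ctx = event.data.canvas.getContext('2d');
  ctx.fillStyle = 'rebeccapurple';
  ctx.fillRect(0, 0, 100, 100);
};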

If you are one of the many people who don't program against canvases yourself, don't worry about the details... Instead, let me show you what that means. The practical upshot of simply decoupling this is pretty clear, even on good hardware, as you can see in this demo...

OffscreenCanvas based maps
Using OffscreenCanvas, user interactions are not blocked - the rendering is way more fluid and the interface is able to feel smooth and responsive.

A Unique Opportunity

Canvas is also pretty unique in the history of the web because it began as unusually low level. That has its pros and its cons - but one positive thing is that the fact that most people use it through an abstraction presents an interesting opportunity. We can radically improve things for pretty much all real users through the actions of a comparatively small group of people who directly write things against the actual canvas APIs. Your own work can realize this, in most cases, without any changes to your code. Potentially without you even knowing. Nice.

New super powers, same great taste

There's a knock-on effect here too that might be hard to notice at first. OffscreenCanvas doesn't create a whole new API to do its work - it's basically the same canvas context. And so are Houdini Custom Paint worklets. In fact, it's pretty hard not to see the relationship between painting on a canvas in a worker and painting on a canvas in a worklet - right? They are effectively the same idea. There is minimal new platform "stuff", but we gain whole new superpowers and a clearer architecture. To me, this seems great.

What's more, while breaking off control and decoupling from the main thread is a kind of easy win for performance and an interesting superpower on its own, we actually get more than that: in the case of Houdini we are suddenly able to tap into all of the rest of the CSS infrastructure and use this to brainstorm, explore, test and polyfill interesting new paint ideas before we talk about standardizing them. Amazing! That's really good for both standards and users.
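As a tiny illustration of that "same canvas, new home" point, a CSS Paint (Houdini) worklet looks a lot like ordinary canvas code. This is only a sketch - the file name and the 'checker' paint name are invented for the example:

// checker-worklet.js: runs on the worklet thread and paints with a
// canvas-like 2D context whenever CSS asks for the 'checker' image.
registerPaint('checker', class {
  paint(ctx, size) {
    for (let y = 0; y < size.height; y += 20) {
      for (let x = 0; x < size.width; x += 20) {
        if (((x + y) / 20) % 2 === 0) ctx.fillRect(x, y, 20, 20);
      }
    }
  }
});

// On the main page: load the worklet, then use it from CSS, e.g.
//   .box { background-image: paint(checker); }
CSS.paintWorklet.addModule('checker-worklet.js');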

Really interestingly though: In the case of OffscreenCanvas, we now suddenly have the ability to parallelize tasks and throw more hardware at highly parallelizable problems. Maps are also an example of that, but they aren't the only one.

My colleague Chris Lord recently gave a talk in which he gave a great demo visualizing an interactive and animated Mandelbrot set (below). If you're unfamiliar with why this is impressive: a fractal is a self-repeating geometric pattern, and they can be pretty intense to visualize. Even harder to make explorable in a UI. At 1080p resolution, and 250 iterations, that's about half a billion complex equations per rendered frame. Fortunately, they are also an example of a highly parallelizable problem, so they make for a nice demo of a thing that was just totally impossible with web technology yesterday, suddenly becoming possible with this new superpower.

OffscreenCanvas super powers!
A video of a talk from a recent WebKit Contributors meeting, showing impressive rendering. It should be time jumped, but on the chance that that fails, you can skip to about the 5 minute mark to see the demo.
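For a rough sense of what "throwing more hardware at it" can look like, here is one way (certainly not the only one, and not necessarily how the demo above is implemented) to fan a per-frame computation out across several workers, each filling in one horizontal band of the image; band-worker.js is a hypothetical file name:

// Main thread: split the canvas into bands, one per worker.
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
const workerCount = navigator.hardwareConcurrency || 4;
const bandHeight = Math.ceil(canvas.height / workerCount);

for (let i = 0; i < workerCount; i++) {
  const worker = new Worker('band-worker.js');
  worker.onmessage = ({ data }) => {
    // data.pixels is an ImageData computed entirely off the main thread.
    ctx.putImageData(data.pixels, 0, data.y);
  };
  worker.postMessage({ y: i * bandHeight, width: canvas.width, height: bandHeight });
}

// band-worker.js would compute its band - for example, iterating the
// Mandelbrot formula per pixel into a new ImageData(width, height) - and
// reply with postMessage({ y, pixels }).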

What other doors will this open, and what will we see come from it? It will be super exciting to see!

September 30, 2021 04:00 AM

September 29, 2021

Thibault Saunier

GStreamer: one repository to rule them all

For the last few years, the GStreamer community has been analysing and discussing the idea of merging all the modules into one single repository. Since all the official modules are released in sync and the code evolves simultaneously between those repositories, having the code split was a burden, and several core GStreamer developers believed that it was worth making the effort to consolidate them into a single repository. As announced a while back, this is now effective, and this post is about explaining the technical choices and implications of that change.

You can also check out our Monorepo FAQ for a list of questions and answers.

Technical details of the unification

Since we moved to meson as a build system a few years ago, we implemented gst-build, which leverages the meson subproject feature to build all GStreamer modules as one single project. This greatly enhanced the development experience of the GStreamer framework, but we considered that we could improve it even more by having all GStreamer code in a single repository that looks the same as gst-build.

This is what the new unified git repository looks like: essentially gst-build moved into the main gstreamer repository, except that all the code from the GStreamer modules located in the subprojects/ directory is checked in directly.

This new setup now lives in the main default branch of the gstreamer repository. The master branches of all the other module repositories are now retired and frozen; no new merge requests or code changes will be accepted there.

This is only the first step and we will consider reorganizing the repository in the future, but the goal is to minimize disruptions.

The technical process for merging the repositories looks like:

foreach GSTREAMER_MODULE
    git remote add GSTREAMER_MODULE.name GSTREAMER_MODULE.url
    git fetch GSTREAMER_MODULE.name
    git merge GSTREAMER_MODULE.name/master
    git mv list_all_files_from_merged_gstreamer_module() GSTREAMER_MODULE.shortname
    git commit -m "Moved all files from " + GSTREAMER_MODULE.name
endforeach

This allows us to keep the exact same history (and checksum of each commit) for all the old gstreamer modules in the new repository which guarantees that the code is still exactly the same as before.

Releases with the new setup

In the same spirit of avoiding disruption, releases will look exactly the same as before. In the new unique gstreamer repository we still have meson subprojects for each GStreamer module, and they will have their own release tarballs. In practice, this means that not much (nothing?) should change for distribution packagers and consumers of GStreamer tarballs.

What should I do with my pending MRs in old modules repositories?

Since we cannot create new merge requests in your name on GitLab, we wrote a move_mrs_to_monorepo script that you can run yourself. The script is located in the gstreamer repository and you can start moving all your pending MRs by simply calling it (scripts/move_mrs_to_monorepo.py) and following the instructions.


You can also check out our Monorepo FAQ for a list of questions and answers.

Thanks to everyone in the community for providing us with all the feedback and thanks to Xavier Claessens for co-leading the effort.

We are still working on making the transition as smooth as possible, and if you have any questions don't hesitate to come talk to us in #gstreamer on the OFTC IRC network.

Happy GStreamer hacking!

by thiblahute at September 29, 2021 09:34 PM

September 24, 2021

Samuel Iglesias

X.Org Developers Conference 2021

Last week we had our most loved annual conference: X.Org Developers Conference 2021. As a reminder, due to the COVID-19 situation in Europe (and its respective restrictions on travel and events), we kept it virtual again this year… which is a pity, as the planned venue was Gdańsk, a very beautiful city (see picture below if you don't believe me!) in Poland. Let's see if we can finally have an XDC there!

XDC 2021

This year we had a very strong program. There were talks covering all aspects of the open-source graphics stack: from the kernel (including an Outreachy talk about VKMS) and Mesa drivers of all kinds, to inputs, libraries, X.Org security and Wayland robustness… we had talks about testing drivers, debugging them, our infra at freedesktop.org, and even Vulkan specs (such as Vulkan Video and VK_EXT_multi_draw) and their support in the open-source graphics stack. Definitely a very complete program that is very interesting to all open-source developers working on this area. You can watch all the talks here or here, and the slides have already been uploaded to the program.

On behalf of the Call For Papers Committee, I would like to thank all speakers for their talks… this conference won’t make sense without you!

Big shout-out to the XDC 2021 organizers (Intel), represented by Radosław Szwichtenberg, Ryszard Knop and Maciej Ramotowski. They did an awesome job on having a very smooth conference. I can tell you that they promptly fixed any issue that happened, all of it behind the scenes, so that the attendees didn't even notice anything most of the time! That is what good conference organizers do!

XDC 2021 Organizers. Can I invite you to a drink at least? You really deserve it!

If you want to know more details about what this virtual conference entailed, just watch Ryszard’s talk at XDC (info, video) or you can reuse their materials for future conferences. That’s very useful info for future conference organizers!

Talking about our streaming platforms, the big novelty this year was the use of media.ccc.de as a privacy-friendly alternative to our traditional Youtube setup (last year we got feedback about this). Media.ccc.de is an open-source platform that respects your privacy and we hope it worked fine for all attendees. Our stats indicate that ~50% of our audience connected to it during the three days of the conference. That’s awesome!

Last but not least, we couldn't make this conference without our sponsors. We are very lucky to have on board Intel as our Platinum sponsor and organizer, our Gold sponsors (Google, NVIDIA, ARM, Microsoft and AMD), our Silver sponsors (Igalia, Collabora, The Linux Foundation), our Bronze sponsors (Gitlab and Khronos Group) and our Supporters (C3VOC). Big thank you from the X.Org community!

XDC 2021 Sponsors

Feedback

We would like to hear from you and learn about what worked and what needs to be improved for future editions of XDC! Share your experience with us!

We have sent an email asking for feedback to different mailing lists (for example this one). Don't hesitate to send an email to the X.Org Foundation board with all your feedback!

XDC 2022 announced!

X.Org Developers Conference 2022 has been announced! Jeremy White, from CodeWeavers, gave a lightning talk presenting next year's edition! Next year XDC will not be alone… WineConf 2022 is going to be organized by CodeWeavers as well, and co-located with XDC!

Save the dates! October 4-5-6, 2022 in Minneapolis, Minnesota, USA.

XDC 2022: Minneapolis, Minnesota, USA. Image from Wikipedia, license CC BY-SA 4.0.

XDC 2023 hosting proposals

Have you enjoyed XDC 2021? Do you think you can do it better? ;-) We are looking for organizers for XDC 2023 (most likely in Europe but we are open to other places).

We know this is a decision that takes time (triggering internal discussions, looking for volunteers, finding a budget and a venue suitable for the event, etc.). Therefore, we encourage potential interested parties to start the internal discussions now, so any questions they have can be answered before we open the call for proposals for XDC 2023 at some point next year. Please read what is required to organize this conference and feel free to contact me or the X.Org Foundation board for more info if needed.

Final acknowledgment

I would like to thank Igalia for all the support I got when I decided to run for re-election this year in the X.Org Foundation board and to allow me to participate in XDC organization during my work hours. It’s amazing that our Free Software and collaboration values are still present after 20 years rocking in the free world!

Igalia 20th anniversary

September 24, 2021 05:20 AM

Brian Kardell

Dad: A Personal Post


Last month, my dad passed away, very unexpectedly. That night, alone with my thoughts and unable to sleep, or do anything else, I wrote this post. I didn't write it for my blog, I wrote it for me. I needed to. I didn't post it then for a lot of reasons, not the least of which is that I don't generally share personal or vulnerable things here. I can understand if that's not why you're here. Today, I decided I would, as a kind of memorial... And immediately cried. So, please: Feel free to skip this one if it pops up in your feed and you're here for the tech. This isn't that post. This one isn't for you, it's for me, and my dad.

[Posted later] Today my dad passed away, unexpectedly. I am thinking a lot, and so sad. I need to put words on a page and get them out of my head.

My dad's obit photo. He was barely 63.

My Dad

When I was 5, my mother and my biological father, barely in their mid-20s, got a divorce. Even I could see that they weren't compatible. My mom, just finishing college, had a lot of friends, and they would occasionally help us out in many ways: from picking me up from school because my mom was held up, to helping us move into our first apartment - or sometimes, just inviting us over.

One of those people, who I saw more and more of was a young man named Jim Wyse.. Jimmy... My dad, who passed away today, unexpectedly.

Legally speaking, I guess, Jimmy became my "dad" in a ceremony when I was 7 - but that's bullshit, because the truth is, I can't even tell you when it became clear that this distinction was utterly meaningless to us. I was his son, and he was my dad. It wasn't because of biology or law or ceremony, but by virtue of all of the things that ultimately matter so much more... and by choice. I couldn't tell you when, because it is seamless in my mind.

From the very beginning he cared for me. He took me camping, and fishing. He taught me to shift gears while he worked the clutch. He played with me in the yard. We wrestled and "boxed". We swam and we boated. He took me to see the movies of my childhood: The Empire Strikes Back, Superman II and Rocky 3. He gave me my first tastes of coffee, beer and wine. He told me stories of his childhood. We laughed together. He taught me to build and fix things, or at least he included me, as if my "help" (often counter-productive) really mattered. What really mattered was something more than that. It's easy to see that, now.

Early photos of my dad and me, maybe even before he was technically my dad (I am in the black hat, with me is his nephew, my late cousin Jason who died a couple of years ago).

In fact, we spent what seems like, in retrospect, an impossible amount of time together. He cared when I was sad. He celebrated my victories. He taught me to be respectful and empathetic and generous and forgiving. He provided discipline too.

Jimmy came from a large family by today's standards, 4 brothers and a sister who all grew up and spent their entire lives in the same small 3 bedroom, 1 small bathroom house. It is generous, in fact, to call it 3 bedrooms. One of them, I believe, was converted out of the largest of two when my aunt was born. I worked on houses with my dad that had walk-in closets that are larger. They weren't wealthy by any stretch of the imagination, but they were close, and he still lived in that house with his parents when I met him. He was younger than my mom.

In this I got a whole new (big) family too. Cousins, aunts, uncles and grandparents with grandchildren who would become fixtures in my life. They were, of course, all actually biologically related, and yet this distinction seems to have been totally irrelevant to all of them from the beginning as well. We spent holidays and vacations together. In fact, while we lived near enough, we spent many weekends and evenings together too. Several of them lived with us and worked for him for a stint during difficult times in their own lives.

When I was 9 my sister Jennifer was born. It would be impossible to overstate how much I loved this new baby that came into our house. And it would be impossible not to see how much he did too. Perhaps it was the fact that some people began to congratulate him on his "first child" that caused me to hear him first address the issue. It may well be the first time, though it was certainly not the last, that I heard him express just how much he loved me and reassure me that I was every bit his child. It was genuine.

By the time my sister Sarah was born there was certainly nobody I met who doubted this. I was "Jimmy and Adele's kid" and most people referred to me as a Wyse.

My sisters are much younger than me. I don't tell them enough anymore, but I hope they know how much I love them, and how much he did. Because of our age differences, I probably have different memories than them. By the time they were probably old enough to remember much, I was already in my teenage years and spending less time at home. But I have so many wonderful memories of time we all spent together.

Somehow, it is amazing to me that the first time I heard anyone refer to us as "half-brother" or "half-sister" I was 40. Despite knowing this to be a biological truth in my mind, I was considerably taken aback just to hear it, and it still feels... wrong.

Tonight this memory dawned on me again as I realized it might be difficult for me to help with arrangements. He and my mother divorced long ago, so on paper we're as good as strangers, probably.

As I spoke to my sister on the phone, this realization fresh in my head, I began to worry that perhaps there was a difference. My heart broke again as I imagined the pain my sisters must feel - is it more than my own? Perhaps it is even insensitive not to acknowledge it? He was, after all, the man who held them in his arms at the hospital moments after their birth - they have known nothing else. I offered a struggled, "I know it probably isn't quite the same for us... I'm so sorry."

That this was the moment that finally prompted her to audible tears filled me with instant regret. "No one ever thought that. How could you say that? He definitely didn't see it that way." She's right, of course, and I know it. I'm saddened that I brought it up. He was my dad - and throughout my entire life he has always been there.

My teenage years were difficult. I was difficult. I didn't take school, or much of anything else, seriously. But through it all, he never gave up on me. By then he had started a small business as a general contractor, and he put me to work weekends and summers (and even the occasional school day when he was very shorthanded and it was clear I wasn't going to go to school anyway). He was a constant force who walked a thin line - both teaching me valuable skills that I might need with pride, and simultaneously constantly pushing me to please use my brain and not my back to make a living.

When I graduated high school, by some miracle, I went to work for him full time.

The following February we went to a job near Lake Erie to work on a roof. It was just about the last day anyone would want to do such a thing. It was windy, and biting cold - just above freezing. There was easily a foot of snow on the roof, and an inch of ice below it. By 9, a freezing rain had started whipping across us too.

Cold, soaked, and more uncomfortable than I have ever been, I realized that I couldn't imagine lasting the rest of the day. Did I really imagine doing this for another 40 years or more? It was then that I realized he was right, I should do something else. I wanted to leave right then, but the shame I'd feel walking off the job because I couldn't take it kept me going for another hour... But it couldn't last.

Around 10am, I quit.

Miles from home, I sat in his truck (still very cold, wet and without heat) for many hours pondering my future. I'm sure he took some shit from the rest of the crew about it. I spent months finding a college to accept me on a trial admission program.

I tell this story so that I can add that years later, after honors and success, he told me "That was the plan. I had to show you very clearly the choice in front of you. It was one of the happiest days of my life, when you realized you didn't have to do this.... But man it was cold. That was a shitty day.".

Throughout my life, he's always been teaching me - sometimes directly, sometimes indirectly by letting me fall flat and being there to pick me up and set me right.

He was the model in my young life that set the bar for what I wanted to be for my own children. I also watched him show kindness, patience and understanding to many people, over the years, in ways that remain unparalleled examples to me. He was my example and the man I tried to be in so many ways.

He was the warmest soul in my darkest hours. There were times in my life where he was the only one I could talk to. On more than one occasion, he consoled and supported me in ways no one else could. He sat with me and calmed me while I cried so hard I couldn't speak. I tried to be there for him in some of his difficult times too, and he had some rough ones. He wasn't perfect either, but who is?

The truth is, he was more to me than "dad" expresses. Much more.

7 years ago, during one of those difficult times in my life, after my own long relationship broke up, I purchased the home that he grew up in from my Aunt. It needed a lot of work at the time, and he came and did some of it. He replaced the roof and installed new windows in the front. We were planning on doing the back last fall, until the pandemic. A boom in new work after things began to turn around meant we'd put it off till this fall.

It's funny, and sad, how much we (or at least I) put off till tomorrow, and then miss the chance because there are no more tomorrows. I haven't physically seen him (or much of anyone, really) in a year.

A lot of our conversation since the pandemic has centered on the old place: Me asking him questions about how to do something, or sending him pictures of improvements or changes I'd made. He'd always reply encouragingly, celebrating my work and expressing happiness that this home remained in the family. "Your grandparents would be happy".

Tonight, as I went to make a call, I realized that I have an unread message on my phone from him from last week. He was replying to a photo I sent him of some new landscaping. It was a simple message. "Looks great!" Two words and an exclamation point. That's it. Nothing deep, but it made me cry. I missed it at the time, and these are his last words to me. Encouraging me.

Each night I fall asleep in the same room that he did until we met. My bedroom is his old bedroom that I used to go and play in and wrestle with my cousins. I think about all of this often - and how lucky I am that Jim was my dad and that he loved me. I loved him too - and I'm glad I can say that we both knew it. Tonight, won't be different in that respect - I'm sure I'll replay all of this in my mind... But.. It is quite a bit different, isn't it? He's gone now.

Photo memories

One of the things I spent a long time doing since is looking through old photos. Most of these are bad photos of photos, but they give some context to all of this and are some great memories for me... Even if they aren't all of him, he's in all of the memories.

I was in my mom and dad's wedding party, in fact. I am pretty sure he helped pick my suit. This is me (in the suit), outside the reception where a bunch of us helped decorate their car with paper flowers.
This is a photo of my dad on his honeymoon after he married my mom. He was 21. So young. Only 14 years older than me, in fact.
A photo of me and my sister Jennifer. I was 9 by the time she was born. My dad took this photo of us on vacation.
Me and my youngest sister, Sarah, when she was born. Also, taken by dad. He loved taking pictures of us (he got much better at it later).
This is me at my 6th grade graduation. My dad got right up there on the stage to take a photo. He was like that - always cheering me on, boisterously. I almost didn't go to my high school graduation. He talked me into it. I could hear him over the entire crowd when I walked up.
A photo of me and my two sisters, taken by my dad.
My dad and I were always horsing around. These memories of him are so firmly ingrained in my mind, and still how I see him, that just a few years ago I initiated similar horseplay in his pool (we had fun), before remembering that he was 60 and had a bad back, and just very quickly let him take me down. I say "let" only to say I physically stopped - but I'm not gonna lie, my dad was rugged as hell, even then.
In 2014, a photo of me, my dad and my two sisters (and my sister's husband) at his house for Christmas, after I moved back to Pittsburgh. He and my mom had been divorced for maybe a decade. He'd been remarried and divorced again since. He never stopped being my dad for a minute.

September 24, 2021 04:00 AM

September 20, 2021

Manuel Rego

Igalia 20th Anniversary

This is a brief post about an important event that is happening today.

Back in 2001 a group of 10 engineers from the University of A Coruña in Galicia (Spain) founded Igalia to run a cooperative business around the free software world. Today it’s its 20th anniversary so it’s time to celebrate! 🎂

Igalia 20th anniversary logo

In my particular case I joined the company in 2007, just after graduating from the University. During these years I have had the chance to be involved in many interesting projects and communities. On top of that, I've learnt a ton of things about how a company is managed. I've also seen Igalia grow in size and move from local projects and customers to working with the biggest fishes in the IT industry.

I'm very grateful to the founders and all the people that have been involved in making Igalia a successful project during all these years. I'm also very proud of all that we have achieved so far, and of how we have done it without sacrificing any of our principles.

We’re now more than 100 people, from all over the world. Awesome colleagues, partners and friends which share amazing values that define the direction of the company. Igalia is today the reference consultancy in some of the most important open source communities out there, which is an outstanding achievement for such a small company.

This time the celebration has to be at home, but for sure we'll throw a big party when we can all meet together again in the future. Meanwhile we have a brand new 20th anniversary logo, together with a small website in case you want to know more about Igalia's history.

Myself with the Igalia 20th anniversary t-shirt

It sometimes happens that the company you work for turns 20 years old. But it's way less common that the company you co-own turns 20 years old. Let's enjoy this moment. Looking forward to many more great years to come! 🎉

September 20, 2021 10:00 PM

September 02, 2021

Byungwoo Lee

CSS Selectors :has()

Selector? Combinator? Subject?

As described in the Selectors Level 4 spec, a selector represents a particular pattern of element(s) in a tree structure. We can select specific elements in a tree structure by matching the pattern to the tree.

Generally, this pattern involves two distinct concepts: first, a means to express conditions to be tested on an element itself (simple selectors or compound selectors). Second, a means to express conditions on the relationship between two elements (combinators).

And the subject of a selector is any element matched by the selector.

The limits of subjects, so far

When you have a reference element in a DOM tree, you can select other elements with a CSS selector.

In a generic tree structure, an element can have 4-way relationships to other elements.

  • an element is an ancestor of another element.
  • an element is a previous sibling of another element.
  • an element is a next sibling of another element.
  • an element is a descendant of another element.

CSS Selectors, to date, have only allowed the last 2 (‘is a next sibling of’ and ‘is a descendant of’).

So in the CSS world, Thor can say “I am Thor, son of Odin” like this: Odin > Thor. But there has been no way for Darth Vader to tell Luke, “I’m your father”.

At least, these are the limits of what has been implemented and is shipping in every browser to date. However, :has() in the CSS Selectors spec provides the expression: DarthVader:has(> Luke)

The reason for the limitation is mainly efficiency.

The primary use of selectors has always been in CSS itself. Pages often have 500-2000 CSS rules and slightly more elements in them. Selectors act as filters in the process of applying style rules to elements. If we have 2,000 CSS rules for 2,000 elements, matching could be done at least 2,000 times, and in the worst case (in theory) 4,000,000 times. In the browser, the tree is changing constantly - even a static document is rapidly mutated (built) as it is parsed - and we try to render all of this incrementally and at 60 fps. In summary, selector matching is performed very frequently in performance-critical processes. So, it must be designed and implemented to meet very high performance requirements. And one efficient way to achieve this is to simplify the problem by limiting the complex cases.

In the tree structure, checking a descendant relationship is more efficient than checking an ancestor relationship because an element has only one parent, but it can have multiple children.

<div id=parent>
  <div id=subject>
    <div id=child1></div>
    <div id=child2></div>
    ...
    <div id=child10></div>
  </div>
</div>
<script>
subject.matches('#parent > :scope');
// matches   : Are you a child of #parent ?
// #subject  : Yes, my parent is #parent.

subject.matches(':has(> #child10)');
// matches   : Are you a parent of #child10 ?
// #subject  : Wait a second, I have to lookup all my children.
//             Yes, #child10 is one of my children.
</script>

By removing one of the two opposite directions, we can always place the subject of a selector to the right, no matter how complex the selector is.

  • ancestor subject
    -> subject is a descendant of ancestor
  • previous_sibling ~ subject
    -> subject is a next sibling of previous_sibling
  • previous_sibling ~ ancestor subject
    -> subject is a descendant of ancestor, which is a next sibling of previous_sibling

With this limitation, we can get the advantages of having simple data structures and simple matching sequences.

<style>
A > B + C { color: red; }
</style>
<!--
'A > B + C' can be parsed as a list of selector/combinator pair.
[
  {selector: 'C', combinator: '+'},
  {selector: 'B', combinator: '>'},
  {selector: 'A', combinator: null}
]
-->
<A>       <!-- 3. match 'A' and apply style to C if matched-->
  <B></B> <!-- 2. match 'B' and move to parent if matched-->
  <C></C> <!-- 1. match 'C' and move to previous if matched-->
</A>

:has() allows you to select subjects at any position

With combinators, we can only select downward (descendants, next siblings or descendants of next siblings) from a reference element. But there are many other elements that we can select if the other two relationships, ancestors and previous siblings, are supported.

<div>               <!-- ? -->
  <div></div>         <!-- ? -->
</div>
<div>               <!-- ? -->
  <div>               <!-- ? -->
    <div></div>         <!-- ? -->
  </div>
  <div id=reference>  <!-- #reference -->
    <div></div>         <!-- #reference > div -->
  </div>
  <div>               <!-- #reference + div -->
    <div></div>         <!-- #reference + div > div -->
  </div>
</div>
<div>               <!-- ? -->
  <div></div>         <!-- ? -->
</div>

:has() provides the way of selecting upward (ancestors, previous siblings, previous siblings of ancestors) from a reference element.

<div>               <!-- div:has(+ div > #reference) -->
  <div></div>         <!-- ? -->
</div>
<div>               <!-- div:has(> #reference) -->
  <div>               <!-- div:has(+ #reference) -->
    <div></div>         <!-- ? -->
  </div>
  <div id=reference>  <!-- #reference -->
    <div></div>         <!-- #reference > div -->
  </div>
  <div>               <!-- #reference + div -->
    <div></div>         <!-- #reference + div > div -->
  </div>
</div>
<div>               <!-- ? -->
  <div></div>         <!-- ? -->
</div>

And with some simple combinations, we can select all elements around the reference element.

<div>               <!-- div:has(+ div > #reference) -->
  <div></div>         <!-- div:has(+ div > #reference) > div -->
</div>
<div>               <!-- div:has(> #reference) -->
  <div>               <!-- div:has(+ #reference) -->
    <div></div>         <!-- div:has(+ #reference) > div -->
  </div>
  <div id=reference>  <!-- #reference -->
    <div></div>         <!-- #reference > div -->
  </div>
  <div>               <!-- #reference + div -->
    <div></div>         <!-- #reference + div > div -->
  </div>
</div>
<div>               <!-- div:has(> #reference) + div -->
  <div></div>         <!-- div:has(> #reference) + div > div -->
</div>

What is the problem with :has() ?

As you might already know, this pseudo-class has been delayed for a long time despite constant interest.

There are many complex situations that make things difficult when we try to support :has().

  • There are many, many complex cases of selector combinations.
  • Those cases are handled in the selector matching operations and style invalidation operations in the style engine.
  • Selector matching and style invalidation operations are very performance-critical.
  • The style engine is carefully designed and highly optimized based on the two existing relationships (is a descendant of, is a next sibling of).
  • Each browser engine has its own design and optimizations for those operations.

In this context, :has() provides the other two relationships (is a parent of, is a previous sibling of), and problems and concerns start from this.

When we meet a complex and difficult problem, the first strategy we can take is to break it down into smaller ones. For :has(), we can divide the problems using the CSS selector profiles.

Problems of the :has() matching operation

:has() matching operation basically implies descendant lookup overhead as described previously. This is an unavoidable overhead we have to take on when we want to use :has() functionality.

In some cases, :has() matching can be O(n²) because of duplicated argument matching operations. When we call document.querySelectorAll('A:has(B)') on the DOM <A><A><A><A><A><A><A><A><A><A><B>, there can be unnecessary argument selector matching because the descendant traversal can occur for every element A. If so, the number of argument matching operations can be 55 (= 10+9+8+7+6+5+4+3+2+1) without any optimization, whereas 10 is optimal for this case.

There can be more complex cases involving shadow tree boundary crossing.

Problems of the :has() Style invalidation

In a nutshell, the style engine tries to invalidate styles of elements that are possibly affected by a DOM mutation. It has long been designed and highly optimized based on the assumption that any possibly affected element is the changed element itself or is downward from it.

<style>
.mutation .subject { color: red; }
</style>
<div>          <!-- classList.toggle('mutation') affect .subject -->
  <div class="subject"></div>       <!-- .subject is in downward -->
</div>

But :has() invalidation is different because the possibly affected element is upward of the changed element (an ancestor, rather than a descendant).

<style>
.subject:has(.mutation) { color: red; }
</style>
<div class="subject">                 <!-- .subject is in upward -->
  <div></div>  <!-- classList.toggle('mutation') affect .subject -->
</div>

In some cases, a change can affect elements in both the upward and downward directions.

<style>
.subject1:has(:is(.mutation1 .something)) { color: red; }
.something:has(.mutation2) .subject2 { color: red; }
</style>
<div class="subject1">              <!-- .subject1 is in upward -->
  <div>     <!-- classList.toggle('mutation1') affect .subject1 -->
    <div class="subject1">        <!-- .subject1 is in downward -->
      <div class="something"></div>
    </div>
  </div>
</div>
<div class="something">
  <div class="subject2">            <!-- .subject2 is in upward -->
    <div>   <!-- classList.toggle('mutation2') affect .subject2 -->
      <div class="subject2"></div><!-- .subject2 is in downward -->
    </div>
  </div>
</div>

Actually, a change can affect elements anywhere in the tree.

<style>
:has(~ .mutation) .subject { color: red; }
:has(.mutation) ~ .subject { color: red; }
</style>
<div>
  <div>
    <div class="subject">        <!-- not in upward or downward -->
    </div>
  </div>
  <div></div> <!-- classList.toggle('mutation') affect .subject -->
</div>
<div class="subject"></div>      <!-- not in upward or downward -->

The expansion of the invalidation traversal scope (from the downward sub-tree to the entire tree) can cause performance degradation. And the violation of the basic assumptions of the invalidation logic (finding a subject from the entire tree instead of finding it from downward) can cause performance degradation and can increase implementation complexity or maintenance overhead, because it will be hard or impossible for the existing invalidation logic to support :has() invalidation as it is.

(There are many more details about :has() invalidation, and those will be covered later.)

What is the current status of :has() ?

Thanks to funding from eye/o, the :has() prototyping in the Chromium project was started by Igalia after some investigations.

(You can get rich background about this from the post "Can I :has()" by Brian Kardell.)

Prototyping is still underway, but here is our progress so far.

  • Chromium
    • Landed CLs to support :has() selector matching (3 CLs)
    • Bug fix (2 CLs)
    • Add experimental feature flag for :has() in snapshot profile (1 CL)
  • WPT (web platform test)
    • Add tests (3 Pull requests)
  • CSS working group drafts

:has() in snapshot profile

Regarding :has() in the snapshot profile, as of now, Chrome Dev (version 94, released on August 19) supports all the :has() functionality except some cases involving shadow tree boundary crossing.

You can try :has() with the JavaScript APIs (querySelectorAll, querySelector, matches, closest) in the snapshot profile after enabling the runtime flag: enable-experimental-web-platform-features.

Screenshot: :has() in the snapshot profile is behind the experimental web platform features flag.
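For example, with the flag enabled, the snapshot-profile APIs accept :has() selectors. The markup and selectors below are just for illustration:

// Hypothetical markup:
//   <ul class="menu"><li><a href="/docs">Docs</a></li><li>No link here</li></ul>

// Select every <li> that directly contains a link.
const itemsWithLinks = document.querySelectorAll('li:has(> a)');

// Walk upward: the nearest <ul> that contains a link anywhere inside it.
const firstLink = document.querySelector('a');
const menu = firstLink.closest('ul:has(a)');

// Boolean check on a single element.
const hasLink = menu.matches('ul:has(> li > a)');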

You can also enable it with the command-line flag: CSSPseudoHasInSnapshotProfile.

$ google-chrome-unstable \
        --enable-blink-features=CSSPseudoHasInSnapshotProfile

:has() in both (snapshot/live) profile

You can enable :has() in both profiles with the command-line flag: CSSPseudoHas.

$ google-chrome-unstable --enable-blink-features=CSSPseudoHas

Support for :has() in the live profile is still in progress. When you enable :has() with this flag, you can see that style rules with :has() work only at load time. The style will not be recalculated after DOM changes.

Screenshot: :has() in the live profile only supports the initial style for now.

by Byungwoo's Blog at September 02, 2021 03:00 PM

August 31, 2021

Juan A. Suárez

Implementing Performance Counters in V3D driver

Let me talk here about how we implemented support for performance counters in the Mesa V3D driver, the OpenGL driver used by the Raspberry Pi 4. For reference, the implementation is very similar to the one already available (not done by me, by the way) for VC4, the OpenGL driver for the Raspberry Pi 3 and prior devices, also part of Mesa. If you are already familiar with how this is implemented in VC4, then this will mostly be a refresher.

First of all, what are these performance counters? Most processors nowadays contain some hardware facilities to get measurements about what is happening inside the processor. And of course graphics processors aren't any different. In this case, the graphics chips used by Raspberry Pi devices (manufactured by Broadcom) can record a bunch of different graphics-related parameters: how many quads are passing or failing depth/stencil tests, how many clock cycles are spent on vertex/fragment shading, hits/misses in the GPU cache, and many other values. In fact, with the V3D driver it is possible to measure around 87 different parameters, and up to 32 of them simultaneously. Quite a few fewer in VC4, though. But still a lot.

On a hardware level, using these counters is just a matter of writing and reading some GPU registers. First, write the registers to select what we want to measure, then a few more to start the measurement, and finally read other registers containing the results. But of course, much like we don't expect users to write GPU assembly code, we don't expect users to write GPU registers directly. Moreover, even Mesa drivers such as V3D can't interact directly with the hardware; rather, this is done through the kernel, which is the one that can use the hardware directly, through its DRM subsystem. For the case of V3D (and the same applies to VC4, and in general to any other driver), we have a driver in user-space (whether the OpenGL driver, V3D, or the Vulkan driver, V3DV), and a kernel driver in kernel-space, unsurprisingly also called V3D. The user-space driver is in charge of translating all the commands and options created with the OpenGL API or other APIs into batches of commands to be executed by the GPU, which are submitted to the kernel driver as DRM jobs. The kernel does the proper actions to send these to the GPU to execute them, including touching the proper registers. Thus, if we want to implement support for the performance counters, we need to modify the code in two places: the kernel and the (user-space) driver.

Implementation in the kernel

Here we need to think about how to deal with the GPU and its registers to make the performance counters work, as well as about the API we provide to user-space to use them. As mentioned before, the approach we are following here is the same as the one used in the VC4 driver: performance counter monitors. That is, the user-space driver creates one or more monitors, specifying for each monitor which counters it is interested in (up to 32 simultaneously, the hardware limit). The kernel returns a unique identifier for each monitor, which can be used later to do the measurement, query the results, and finally destroy the monitor when done.

In this case, there isn’t an explicit start/stop of the measurement. Rather, every time the driver wants to measure a job, it includes the identifier of the monitor it wants to use for that job, if any. Before submitting a job to the GPU, the kernel checks whether the job has a monitor identifier attached. If so, it checks whether the previous job executed by the GPU was also using the same monitor identifier, in which case it doesn’t need to do anything other than send the job to the GPU, as the required performance counters are already enabled. If the monitor is different, then it first needs to read the current counter values (through the proper GPU registers) and add them to the current monitor, stop the measurement, configure the counters for the new monitor, start the measurement again, and finally submit the new job to the GPU. In this process, if it turns out there wasn’t a monitor under execution before, then it only needs to execute the last steps.
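
To make that per-job logic more concrete, here is a small, self-contained C sketch of the decision the kernel makes right before handing a job to the GPU. All the names (perfmon, submit_job, perfmon_program_and_start and so on) are hypothetical and only mirror the steps described above; they are not taken from the actual V3D kernel code.

/* Hypothetical sketch of the per-job check described above. None of these
 * names come from the real V3D kernel driver; the stubs stand in for the
 * register accesses the real driver performs. */
#include <stddef.h>
#include <stdio.h>

struct perfmon { int id; };               /* selected counters + accumulated values */
struct job { struct perfmon *perfmon; };  /* NULL if the job is not being measured */

static struct perfmon *active_perfmon;    /* monitor currently programmed in the HW */

static void perfmon_stop_and_accumulate(struct perfmon *m) { printf("read+stop monitor %d\n", m->id); }
static void perfmon_program_and_start(struct perfmon *m)   { printf("program+start monitor %d\n", m->id); }
static void hw_submit(struct job *j)                       { (void)j; printf("submit job\n"); }

static void submit_job(struct job *job)
{
    if (job->perfmon != active_perfmon) {
        /* A different (or no) monitor is attached: flush the counters of the
         * monitor currently running into its accumulated values... */
        if (active_perfmon)
            perfmon_stop_and_accumulate(active_perfmon);
        /* ...and program and start the counters the new job wants, if any. */
        if (job->perfmon)
            perfmon_program_and_start(job->perfmon);
        active_perfmon = job->perfmon;
    }
    /* Same monitor as the previous job (or none at all): nothing to do,
     * just hand the job over to the GPU. The counters keep running. */
    hw_submit(job);
}

int main(void)
{
    struct perfmon m1 = { 1 }, m2 = { 2 };
    struct job jobs[] = { { &m1 }, { &m1 }, { &m2 }, { NULL } };
    for (size_t i = 0; i < sizeof(jobs) / sizeof(jobs[0]); i++)
        submit_job(&jobs[i]);
    return 0;
}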

The reason to do all this is that multiple applications can be executing at the same time, some using (different) performance counters, and most of them probably not using performance counters at all. But the performance counter values of one application shouldn’t affect any other application, so we need to make sure we don’t mix up the counters between applications. Keeping the values in their respective monitors helps to accomplish this. There is still a small requirement in the user-space driver to help with this, but in general, this is how we avoid the mixing.

If you want to take a look at the full implementation, it is available in a single commit.

Implementation in the driver

Once we have a way to create and manage the monitors, using them in the driver is quite easy: as mentioned before, we only need to create a monitor with the counters we are interested in and attach it to the job to be submitted to the kernel. In order to make things easier, we keep a mirror-like version of the monitor inside the driver.

This approach is adequate when you are developing the driver and can add code directly to it to check performance. But what about the final user, who is writing an OpenGL application and wants to check how to improve its performance, or find bottlenecks in it? We want the user to have a way to use OpenGL for this.

Fortunately, there is in fact a way to do this through OpenGL: the GL_AMD_performance_monitor extension. This OpenGL extension provides an API to query what counters the hardware supports, to create monitors, to start and stop them, and to retrieve the values. It looks very similar to what we have described so far, except for an important difference: the user needs to start and stop the monitors explicitly. We will explain later why this is necessary. But the key point here is that when we start a monitor, this means that from that moment on, until stopping it, any job created and submitted to the kernel will have the identifier of that monitor attached. This implies that only one monitor can be enabled in the application at the same time. But this isn’t a problem, as this restriction is part of the extension.
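
As an illustration of what using the extension looks like from the application side, here is a minimal C sketch. It is not taken from any particular application: it assumes a current OpenGL context, a driver that exposes GL_AMD_performance_monitor, and libepoxy to resolve the extension entry points. Error checking is omitted, and the first counter of the first group is picked just for the sake of the example.

/* Minimal sketch of the GL_AMD_performance_monitor flow from an application.
 * Group and counter IDs are hardware-specific, so they are queried at runtime. */
#include <epoxy/gl.h>
#include <stdlib.h>

static void measure_one_counter(void)
{
    /* Discover what the driver exposes. */
    GLint num_groups = 0;
    glGetPerfMonitorGroupsAMD(&num_groups, 0, NULL);
    GLuint *groups = malloc(num_groups * sizeof(GLuint));
    glGetPerfMonitorGroupsAMD(&num_groups, num_groups, groups);

    GLint num_counters = 0, max_active = 0;
    glGetPerfMonitorCountersAMD(groups[0], &num_counters, &max_active, 0, NULL);
    GLuint counter = 0;
    glGetPerfMonitorCountersAMD(groups[0], &num_counters, &max_active, 1, &counter);

    /* Create a monitor and select the counter(s) we care about. */
    GLuint monitor = 0;
    glGenPerfMonitorsAMD(1, &monitor);
    glSelectPerfMonitorCountersAMD(monitor, GL_TRUE, groups[0], 1, &counter);

    glBeginPerfMonitorAMD(monitor);
    /* ... draw calls to be measured go here ... */
    glEndPerfMonitorAMD(monitor);

    /* Make sure the measured work is flushed, then wait until the results of
     * all measured jobs are available. */
    glFlush();
    GLuint available = 0;
    while (!available)
        glGetPerfMonitorCounterDataAMD(monitor, GL_PERFMON_RESULT_AVAILABLE_AMD,
                                       sizeof(available), &available, NULL);

    GLuint result_size = 0;
    glGetPerfMonitorCounterDataAMD(monitor, GL_PERFMON_RESULT_SIZE_AMD,
                                   sizeof(result_size), &result_size, NULL);
    GLuint *data = malloc(result_size);
    glGetPerfMonitorCounterDataAMD(monitor, GL_PERFMON_RESULT_AMD,
                                   result_size, data, NULL);
    /* Each entry in 'data' is a (group, counter, value) record, as described
     * in the extension specification. */

    free(data);
    free(groups);
    glDeletePerfMonitorsAMD(1, &monitor);
}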

Our driver does not implement this API directly, but through “queries”, which are then used by the Gallium subsystem in Mesa to implement the extension. For reference, the V3D driver (as well as VC4) is implemented as part of the Gallium subsystem. The Gallium part basically handles all the hardware-independent OpenGL functionality, and just requires the driver to implement the proper hook functions. If the driver implements them, then Gallium exposes the right extension (in this case, the GL_AMD_performance_monitor extension).

In our case, this requires the driver to implement functions to return which counters are available, to create and destroy a query (in this case, the query is the same as the monitor), to start and stop the query, and, once it is finished, to get the results back.

At this point, I would like to explain a bit better what stopping the monitor and getting the results back implies. As explained earlier, stopping the monitor or query means that from that moment on, any new job submitted to the kernel (and thus to the GPU) won’t contain a performance monitor identifier attached, and hence won’t be measured. But it is important to know that the driver submits jobs to the kernel at its own pace, and these aren’t executed immediately; the GPU needs time to execute the jobs, so the kernel puts the arriving jobs in a queue, to be submitted to the GPU. This means that when the user stops the monitor, there could still be jobs in the queue that haven’t been executed yet and are thus pending to be measured.

And how do we know that the jobs have been executed by the GPU? The hook function that implements getting the query results has a “wait” parameter, which tells it whether it needs to wait for all the pending measured jobs to be executed or not. If it doesn’t wait but there are pending jobs, it just returns telling the caller this fact. This allows the caller to do other work in the meantime and query again later, instead of blocking until all the jobs have been executed. This is implemented through sync objects. Every time a job is sent to the kernel, there’s a sync object that is used to signal when the job has finished executing; this is mainly used to have a way to synchronize the jobs. In our case, when the user finalizes the query we save the sync object of the last submitted job, and we use it to know when this last job has been executed.
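
As a rough illustration of those “wait” semantics, such a check can be expressed as a wait on a DRM sync object. drmSyncobjWait comes from libdrm (xf86drm.h); the wrapper function and its parameters below are hypothetical, not the actual V3D code:

/* Hypothetical helper: returns true when the last measured job has executed.
 * With wait == false it only polls; with wait == true it blocks until the
 * sync object of that job is signaled. */
#include <stdbool.h>
#include <stdint.h>
#include <xf86drm.h>

static bool perf_results_ready(int drm_fd, uint32_t last_job_syncobj, bool wait)
{
    /* A timeout of 0 makes drmSyncobjWait return immediately (a poll);
     * INT64_MAX effectively waits forever. */
    int64_t timeout = wait ? INT64_MAX : 0;
    return drmSyncobjWait(drm_fd, &last_job_syncobj, 1, timeout, 0, NULL) == 0;
}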

There are quite a few details I’m not covering here. If you are interested though, you can take a look at the merge request.

Gallium HUD

So far we have seen how the performance counters are implemented and how to use them. In all cases this requires writing code to create the monitor/query, start/stop it, and query back the results, either in the driver itself or in the application through the GL_AMD_performance_monitor extension¹.

But what if we want to get some general measurements without adding code to the application or the driver? Fortunately, there is an environment variable, GALLIUM_HUD, that, when set correctly, will show some graphs on top of the application with the measured counters.

Using it is very easy: set it to help to learn how to use it, as well as to get a list of the available counters for the current hardware.

For example:

$ env GALLIUM_HUD=L2T-CLE-reads,TLB-quads-passing-z-and-stencil-test,QPU-total-active-clk-cycles-vertex-coord-shading scorched3d

You will see:

Performance Counters in Scorched 3D

Bear in mind that to be able to use this you will need a kernel that supports performance counters for V3D. At the time of writing, no kernel has been released yet with this support. If you don’t want to wait for it, you can download the patch, apply it to your Raspberry Pi kernel (it has been tested on the 5.12 branch), and build and install it.

  1. All this is for the case of using OpenGL; if your application uses Vulkan, there are other similar extensions, which are not yet implemented in our V3DV driver at the moment of writing this post. 

August 31, 2021 10:00 PM

August 20, 2021

Brian Kardell

Experimenting with :has()

Experimenting with :has()

Back in May, I wrote Can I :has()?. In that piece, I discussed the :has() pseudo-class and the practical reasons it's been hard to advance. Today I'll give you some updates on advancing :has() efforts in Chromium, and how you can play with it today.

In my previous piece I explained that Igalia had been working to help move these discussions along by doing the research that has been difficult for vendors to prioritize (funded by eyeo), and that we believe we'd gotten somewhere: we'd done a lot of research, developed a prototype in a custom build of Chromium and had provided what we believed were good proofs for discussion. The day that I wrote that last piece, we were filing an intent to prototype in Chromium.

Today, I'd like to give some updates on those efforts...

Where things stand in Chromium, as of yesterday

As you may or may not know, the process for shipping new features in Chromium is pretty involved and careful. There are several 'intent' steps, many reviews along the way, and many channels (canary, dev, beta, stable). Atop this are also things which launch with command line flags, runtime feature flags, origin trials (experimentally on for some sites that opt in), reverse origin trials (some sites opted out) and field trials/finch flags (rollout to some % of users on or off by default).

Effectively, things get progressively more serious and certain, and as that happens we want to expand their reach by making it easier for more developers to experiment with them.

Previously...

For a while now our up-streaming efforts have allowed you to pass command line flags to enable some support in early channels. Either

--enable-blink-features=CSSPseudoHasInSnapshotProfile
--enable-blink-features=CSSPseudoHas

The former adds support for the use of the :has() pseudo-class in the JavaScript selector APIs ('the snapshot/static profile'), and the latter enables support in CSS stylesheets too.

These ways still work, but that's obviously a lot more friction than most developers will take the time to learn, figure out, and try. Most of us don't launch our browsers from a command line.

New Advancements!

As things have gotten more stable and serious, we're moving along and making some things easier...

As of the dev channel release 94.0.4606.12 (yesterday), enabling support in the JavaScript selector APIs is now as simple as enabling the experimental web platform features runtime flag. Chances are, a number of readers already have this flag flipped, so low friction indeed!

Support in the JavaScript APIs has always involved far fewer unknowns and challenges, but what's held us back from adding support there first has always been a desire to prevent splitting and a lack of ability to answer questions about whether the main, live CSS profile could be supported, what limits it would need, and so on. We feel like we have a much better grip on many of these questions now, and so things are moving along a bit.

We hope that this encourages more people to try it out and provide feedback, open bugs, or just add encouragement. Let us know if you do!

Much more at Ad Blocker Dev Summit 2021

I'm also happy to note that I'll be speaking, along with my colleague Byungwoo Lee and eyeo's @shwetank and @WebReflection at Ad Blocker Dev Summit 2021 on October 21. Looking forward to being able to provide a lot more information there on the history, technical challenges, process, use cases and impacts! Hope to see you there!

August 20, 2021 04:00 AM

August 11, 2021

Danylo Piliaiev

Testing Vulkan drivers with games that cannot run on the target device

Here I’m playing “Spelunky 2” on my laptop and simultaneously replaying the same Vulkan calls on an ARM board with an Adreno GPU, running the open-source Turnip Vulkan driver. Hint: it’s an x64 Windows game that doesn’t run on ARM.

The bottom right is the game I’m playing on my laptop; the top left is GFXReconstruct immediately replaying the Vulkan calls from the game on the ARM board.

How is it done? And why would it be useful for debugging? Read below!


Debugging issues a driver faces with real-world applications requires the ability to capture and replay graphics API calls. However, for mobile GPUs it becomes even more challenging, since for a Vulkan driver the main “source” of real-world workloads is x86-64 apps that run via Wine + DXVK, mainly games which were made for desktop x86-64 Windows and do not run on ARM. Efforts are being made to run these apps on ARM, but it is still a work in progress. And we want to test the drivers NOW.

The obvious solution would be to run those applications on an x86-64 machine, capturing all the Vulkan calls, and then replay those calls on a second machine where we cannot run the app. This way it would be possible to test the driver even without running the application directly on it.

The main trouble is that Vulkan calls made on one GPU + driver combo are generally not compatible with another GPU + driver combo, sometimes not even between GPUs from the same vendor. There are different memory capabilities (VkPhysicalDeviceMemoryProperties), different memory requirements for buffers and images, different extensions available, and different optional features supported. It is easier with OpenGL, but there are some incompatibilities there as well.
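
To see one of these differences for yourself, a small program that dumps the memory heaps and memory types of each Vulkan device is enough; running it on the capture machine and then on the replay machine usually gives quite different results. This is just a quick sketch with no error handling beyond instance creation:

/* Prints the memory heaps and memory types exposed by each Vulkan device,
 * one of the per-GPU differences that makes replaying a capture on another
 * GPU problematic. */
#include <vulkan/vulkan.h>
#include <stdio.h>

int main(void)
{
    VkInstanceCreateInfo info = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance instance;
    if (vkCreateInstance(&info, NULL, &instance) != VK_SUCCESS)
        return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    VkPhysicalDevice devices[16];
    if (count > 16)
        count = 16;
    vkEnumeratePhysicalDevices(instance, &count, devices);

    for (uint32_t i = 0; i < count; i++) {
        VkPhysicalDeviceMemoryProperties mem;
        vkGetPhysicalDeviceMemoryProperties(devices[i], &mem);
        printf("GPU %u: %u heap(s), %u memory type(s)\n",
               i, mem.memoryHeapCount, mem.memoryTypeCount);
        for (uint32_t t = 0; t < mem.memoryTypeCount; t++)
            printf("  type %u: heap %u, propertyFlags 0x%x\n",
                   t, mem.memoryTypes[t].heapIndex,
                   mem.memoryTypes[t].propertyFlags);
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}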

There are two open-source vendor-agnostic tools for capturing Vulkan calls: RenderDoc (captures a single frame) and GFXReconstruct (captures multiple frames). RenderDoc at the moment isn’t suitable for the task of capturing applications on desktop GPUs and replaying on mobile because it doesn’t translate memory types and requirements (see issue #814). GFXReconstruct, on the other hand, has the necessary features for this.

I’ll show a couple of tricks with GFXReconstruct I’m using to test things on Turnip.


Capturing with GFXReconstruct

At this point you either have the application itself or, if it doesn’t use Vulkan, a trace of its calls that could be translated to Vulkan. There are detailed instructions on how to use GFXReconstruct to capture a trace on a desktop OS. However, there are no clear instructions on how to do this on Android (see issue #534); fortunately, there are some in Android’s documentation:

Android how-to
  • For Android 9 you should copy the layers to the application which will be traced
  • For Android 10+ it’s easier to copy them to com.lunarg.gfxreconstruct.replay
  • You need a userdebug build of Android, or probably a rooted Android

# Push GFXReconstruct layer to the device
adb push libVkLayer_gfxreconstruct.so /sdcard/

# Since there is no APK for the capture layer,
# copy the layer to e.g. the folder of com.lunarg.gfxreconstruct.replay
adb shell run-as com.lunarg.gfxreconstruct.replay cp /sdcard/libVkLayer_gfxreconstruct.so .

# Enable layers
adb shell settings put global enable_gpu_debug_layers 1

# Specify target application
adb shell settings put global gpu_debug_app <package_name>

# Specify layer list (from top to bottom)
adb shell settings put global gpu_debug_layers VK_LAYER_LUNARG_gfxreconstruct

# Specify packages to search for layers
adb shell settings put global gpu_debug_layer_app com.lunarg.gfxreconstruct.replay

If the target application doesn’t have rights to write to external storage, you should change where the capture file is created:

adb shell "setprop debug.gfxrecon.capture_file '/data/data/<target_app_folder>/files/'"


However, trying to replay a trace you captured on another GPU will most likely result in an error:

[gfxrecon] FATAL - API call vkCreateDevice returned error value VK_ERROR_EXTENSION_NOT_PRESENT that does not match the result from the capture file: VK_SUCCESS.  Replay cannot continue.
Replay has encountered a fatal error and cannot continue: the specified extension does not exist

Or other errors/crashes. Fortunately, we can limit the capabilities of the desktop GPU with VK_LAYER_LUNARG_device_simulation.

When simulating another GPU, VK_LAYER_LUNARG_device_simulation should be told to intersect the capabilities of both GPUs, making the capture compatible with both of them. This can be achieved with the recently added environment variables:

VK_DEVSIM_MODIFY_EXTENSION_LIST=whitelist
VK_DEVSIM_MODIFY_FORMAT_LIST=whitelist
VK_DEVSIM_MODIFY_FORMAT_PROPERTIES=whitelist

The whitelist name is rather confusing because it essentially means “intersection”.

One also needs a JSON file describing the target GPU’s capabilities, which can be obtained by running:

vulkaninfo -j &> <device_name>.json

The final command to capture a trace would be:

VK_LAYER_PATH=<path/to/device-simulation-layer>:<path/to/gfxreconstruct-layer> \
VK_INSTANCE_LAYERS=VK_LAYER_LUNARG_gfxreconstruct:VK_LAYER_LUNARG_device_simulation \
VK_DEVSIM_FILENAME=<device_name>.json \
VK_DEVSIM_MODIFY_EXTENSION_LIST=whitelist \
VK_DEVSIM_MODIFY_FORMAT_LIST=whitelist \
VK_DEVSIM_MODIFY_FORMAT_PROPERTIES=whitelist \
<the_app>

Replaying with GFXReconstruct

gfxrecon-replay -m rebind --skip-failed-allocations <trace_name>.gfxr
  • -m Enable memory translation for replay on GPUs with memory types that are not compatible with the capture GPU’s
    • rebind Change memory allocation behavior based on resource usage and replay memory properties. Resources may be bound to different allocations with different offsets.
  • --skip-failed-allocations skip vkAllocateMemory, vkAllocateCommandBuffers, and vkAllocateDescriptorSets calls that failed during capture

Without these options replay would fail.

Now you can easily test any app/game on your ARM board, if you have enough RAM =) I even successfully ran a capture of “Metro Exodus” on Turnip.

But what if I want to test something that requires interactivity?

Or what if you don’t want to save a huge trace to disk, which could grow to tens of gigabytes if the application runs for a considerable amount of time?

During recording, GFXReconstruct just appends calls to a file; there are no additional post-processing steps. Given that, the next logical step is to skip writing to disk and send the Vulkan calls over the network!

This would allow us to interact with the application and immediately see the results on another device with a different GPU. And so I hacked together crude support for over-the-network replay.

The only difference from ordinary tracing is that now, instead of a file, we have to specify the network address of the target device:

VK_LAYER_PATH=<path/to/device-simulation-layer>:<path/to/gfxreconstruct-layer> \
    ...
GFXRECON_CAPTURE_FILE="<ip>:<port>" \
<the_app>

And on the target device:

while true; do gfxrecon-replay -m rebind --sfa ":<port>"; done

Why while true? It is common for DXVK to call vkCreateInstance several times, leading to the creation of several traces. When replaying over the network, we therefore want gfxrecon-replay to restart immediately when one trace ends, so that it is ready for the next one.

You may want to bring the FPS down to match the capabilities of the lower-power GPU in order to prevent constant hiccups. This can be done either with libstrangle or with mangohud:

  • stranglevk -f 10
  • MANGOHUD_CONFIG=fps_limit=10 mangohud

You have seen the result at the start of the post.

by Danylo Piliaiev at August 11, 2021 09:00 PM

August 10, 2021

Iago Toral

An update on feature progress for V3DV

I’ve been silent here for quite some time, so here is a quick summary of some of the new functionality we have been exposing in V3DV, the Vulkan driver for the Raspberry Pi 4, over the last few months:

  • VK_KHR_bind_memory2
  • VK_KHR_copy_commands2
  • VK_KHR_dedicated_allocation
  • VK_KHR_descriptor_update_template
  • VK_KHR_device_group
  • VK_KHR_device_group_creation
  • VK_KHR_external_fence
  • VK_KHR_external_fence_capabilities
  • VK_KHR_external_fence_fd
  • VK_KHR_external_semaphore
  • VK_KHR_external_semaphore_capabilities
  • VK_KHR_external_semaphore_fd
  • VK_KHR_get_display_properties2
  • VK_KHR_get_memory_requirements2
  • VK_KHR_get_surface_capabilities2
  • VK_KHR_image_format_list
  • VK_KHR_incremental_present
  • VK_KHR_maintenance2
  • VK_KHR_maintenance3
  • VK_KHR_multiview
  • VK_KHR_relaxed_block_layout
  • VK_KHR_sampler_mirror_clamp_to_edge
  • VK_KHR_storage_buffer_storage_class
  • VK_KHR_uniform_buffer_standard_layout
  • VK_KHR_variable_pointers
  • VK_EXT_custom_border_color
  • VK_EXT_external_memory_dma_buf
  • VK_EXT_index_type_uint8
  • VK_EXT_physical_device_drm

Besides that list of extensions, we have also added basic support for Vulkan subgroups (this is a Vulkan 1.1 feature) and Geometry Shaders (we use this to implement multiview).

I think we now meet most (if not all) of the Vulkan 1.1 mandatory feature requirements, but we still need to check this properly and we also need to start doing Vulkan 1.1 CTS runs and fix test failures. In any case, the bottom line is that Vulkan 1.1 should be fairly close now.

by Iago Toral at August 10, 2021 08:10 AM

August 07, 2021

Enrique Ocaña

Beyond Google Bookmarks

I was a happy user of Del.icio.us for many years until the service closed. Then I moved my links to Google Bookmarks, which offered basically the same functionality (at least for my needs): link storage with title, tags and comments. I’ve carefully tagged and filed more than 2500 links since I started, and I’ve learnt to appreciate the usefulness of searching by tag to find again some precious information that was valuable to me in the past.

Google Bookmarks is a very old and simple service that “just works”. Sometimes it looked as if Google had just forgotten about it and let it run for years without anybody noticing… until now. It’s closing in September 2021.

I didn’t want to lose all my links, I still need a link database searchable by tags, and I don’t want to be locked in again to a similar service that might close in a few years, so I wrote my own super-simple alternative to it. It’s called bs, sort of a bookmark search.

The usage couldn’t be simpler: just add the tag you want to look for and it will print a list of links that have that tag:

$ bs webassembly
  title = Canvas filled three ways: JS, WebAssembly and WebGL | Compile 
    url = https://compile.fi/canvas-filled-three-ways-js-webassembly-and-webgl/ 
   tags = canvas,graphics,html5,wasm,webassembly,webgl 
   date = 2020-02-18 16:48:56 
comment =  
 
  title = Compiling to WebAssembly: It’s Happening! ★ Mozilla Hacks – the Web developer blog 
    url = https://hacks.mozilla.org/2015/12/compiling-to-webassembly-its-happening/ 
   tags = asm.js,asmjs,emscripten,llvm,toolchain,web,webassembly 
   date = 2015-12-18 09:14:35 
comment = 

If you call the tool without parameters, it will prompt for data to insert a new link, or to edit it if the entered URL matches a preexisting one:

$ bs 
url: https://compile.fi/canvas-filled-three-ways-js-webassembly-and-webgl/ 
title: Canvas filled three ways: JS, WebAssembly and WebGL | Compile 
tags: canvas,graphics,html5,wasm,webassembly,webgl 
comment: 

The data is stored in an SQLite database, and I’ve written some JavaScript snippets to import the Delicious exported bookmarks file and the Google Bookmarks exported bookmarks file. Those snippets are meant to be copy-pasted into the JavaScript console of your browser while you have the exported bookmarks HTML file open in it. They’ll generate the SQL sentences that will populate the database for the first time with your preexisting data.

For now the tool doesn’t allow deleting bookmarks (I haven’t had the need yet), and I still need to find a way to simplify its usage through the browser with a bookmarklet to ease adding new bookmarks automatically. But that’s a task for another day. For now, it’s enough to know that my bookmarks are safe.

Enjoy!

[UPDATE: 2021-09-08]

I’ve now coded an alternate variant of the database client that can be hosted on any web server with PHP and SQLite3. The bookmarks can now be managed from a browser in a centralized way, in a similar fashion as you could before with Google Bookmarks and Delicious. As you can see in the screenshot, the style resembles Google Bookmarks in some way.

You can easily create a quick search / search engine link in Firefox and Chrome (I use “d” as keyword, a tradition from the Delicious days, so that if I type “d debug” in the browser search bar it will look for that tag in the bookmark search page). Also, the 🔖 button opens a popup that shows a bookmarklet code that you can add to your browser bookmark bar. When you click on that bookmarklet, the edit page prefilled with the current page info is opened, so you can insert or edit a new entry.

There’s a trick to use the bookmarklet on Android Chrome: Use a rare enough name for the bookmarklet (I used “+ Bookmark 🔖”). Then, when you want to add the current page to the webapp, just start typing “+ book”… in the search bar and the saved bookmarklet link will appear as an autocomplete option. Click on it and that’s it.

Enjoy++!

by eocanha at August 07, 2021 12:29 PM

August 06, 2021

Tiago Vignatti

My Startup Dream

Prototyping

At the beginning of 2016, Tuomas, João and I drafted out a skateboard business in Brazil. João, blasé about his electrical engineering endeavours, desperately wanted to practice skydiving and live abroad. He was down for whatever it took to accomplish his goals, and so he was also alright with the much lower endorphin rush that skateboarding provided. That was great, because besides being a great and funny friend, he was also a true handyman, and we needed that kind of person in our business.

by Author at August 06, 2021 08:27 PM

August 05, 2021

Chris Lord

OffscreenCanvas update

Hold up, a blog post before a year’s up? I’d best slow down, don’t want to over-strain myself 🙂 So, a year ago, OffscreenCanvas was starting to become usable but was missing some key features, such as asynchronous updates and text-related functions. I’m pleased to say that, at least for Linux, it’s been complete for quite a while now! It’s still going to be a while, I think, before this is a truly usable feature in every browser. Gecko support is still forthcoming, support for non-Linux WebKit is still off by default and I find it can be a little unstable in Chrome… But the potential is huge, and there are now double the number of independent, mostly-complete implementations that prove it’s a workable concept.

Something I find I’m guilty of, and I think that a lot of systems programmers tend to be guilty of, is working on a feature but not using that feature. With that in mind, I’ve been spending some time over the last couple of weeks trying to bring together demos and information on the various features that the WebKit team at Igalia has been working on. To that end, I’ve written a little OffscreenCanvas demo. It should work in any browser, but is a bit pointless if you don’t have OffscreenCanvas, so maybe spin up Chrome or a canary build of Epiphany.

OffscreenCanvas fractal renderer demo, running in GNOME Web Canary

Those of us old-skool computer types probably remember running fractal renderers back on our old home computers, whatever they may have been (PC for me, but I’ve seen similar demos on Amigas, C64s, Amstrad CPCs, etc.). They would take minutes to render a whole screen. Of course, with today’s computing power, they are much faster to render, but they still aren’t cheap by any stretch of the imagination. We’re talking 100s of millions of operations to render a full-HD frame. Running on the CPU on a single thread, this is still something that isn’t really real-time, at least implemented naively in JavaScript. This makes it a nice demonstration of what OffscreenCanvas, and really, Worker threads allow you to do without too much fuss.

The demo, for which you can look at my awful code, splits that rendering into 64 tiles and gives each tile to the first available Worker in a pool of rendering threads (different parts of the fractal are much more expensive to render than others, so it makes sense to use a work queue, rather than just shoot them all off distributed evenly amongst however many Workers you’re using). Toggle one of the animation options (palette cycling looks nice) and you’ll get a frame-rate counter in the top-right, where you can see the impact on performance that adding Workers can have. In Chrome, I can hit 60fps on this 40-core Xeon machine, rendering at 1080p. Just using a single worker, I barely reach 1fps (my frame-rates aren’t quite as good in WebKit, I expect because of some extra copying – there are some low-hanging fruit around OffscreenCanvas/ImageBitmap and serialisation when it comes to optimisation). If you don’t have an OffscreenCanvas-capable browser (or a monster PC), I’ve recorded a little demonstration too.

The important thing in this demo is not so much that we can render fractals fast (this is probably much, much faster to do using WebGL and shaders), but how easy it is to massively speed up a naive implementation with relatively little thought. Google Maps is great, but even on this machine I can get it to occasionally chug and hitch – OffscreenCanvas would allow this to be entirely fluid with no hitches. This becomes even more important on less powerful machines. It’s a neat technology and one I’m pleased to have had the opportunity to work on. I look forward to seeing it used in the wild in the future.

by Chris Lord at August 05, 2021 03:33 PM

August 02, 2021

Philippe Normand

Introducing the GNOME Web Canary flavor

Today I am happy to unveil GNOME Web Canary, which aims to provide bleeding-edge, most likely very unstable builds of Epiphany, depending on daily builds of the WebKitGTK development version. Read on to learn more about this.

Until recently the GNOME Web browser was available for end-users in two flavors. The primary, stable release provides the vanilla experience of the upstream Web browser. It is shipped as part of the GNOME release cycle and in distros. The second flavor, called Tech Preview, is oriented towards early testers of GNOME Web. It is available as a Flatpak, included in the GNOME nightly repo. The builds represent the current state of the GNOME Web master branch; the WebKitGTK version they link to is the one provided by the GNOME nightly runtime.

Tech Preview is great for users testing the latest development of GNOME Web, but what if you want to test features that are not yet shipped in any WebKitGTK version? Or what if you are a GNOME Web developer and you want to implement new features on Web that depend on API that has not yet been released in WebKitGTK?

Historically, the answer was simply “you can build WebKitGTK yourself“. However, this requires some knowledge and a good build machine (or a lot of patience). Even as WebKit developer builds have become easier to produce thanks to the Flatpak SDK we provide, you would still need to somehow make Epiphany detect your local build of WebKit. Other browsers offer nightly or “Canary” builds which don’t have such requirements. This is exactly what Epiphany Canary aims to do! Without building WebKit yourself!

A brief interlude about the term: Canary typically refers to highly unstable builds of a project; they are named after sentinel species. Canary birds were taken into mines to warn coal miners of the presence of carbon monoxide. For instance, Chrome has been providing Canary builds of its browser for a long time. These builds are useful because they allow early testing by end-users, and hence potentially early detection of bugs that might not have been caught by the usual automated test harness that buildbots and CI systems run.

To similar ends, a new build profile and icon were added in Epiphany, along with a new Flatpak manifest. Everything is now nicely integrated in the Epiphany project CI. WebKit builds are already done for every upstream commit using the WebKit Buildbot. As those builds are made with the WebKit Flatpak SDK, they can be reused elsewhere (x86_64 is the only arch supported for now) as long as the WebKit Flatpak platform runtime is being used as well. Build artifacts are saved, compressed, and uploaded to a web server kindly hosted and provided by Igalia. The GNOME Web CI now has a new job, called canary, that generates a build manifest installing the WebKitGTK build artifacts in the build sandbox, so they can be detected during the Epiphany Flatpak build. The resulting Flatpak bundle can be downloaded and locally installed. The runtime environment is the one provided by the WebKit SDK though, so not exactly the same as the one provided by GNOME Nightly.

Back to the two main use-cases, and who would want to use this:

  • You are a GNOME Web developer looking for CI coverage of some shiny new WebKitGTK API you want to use from GNOME Web. Every new merge request on the GNOME Web Gitlab repo now produces installable Canary bundles, that can be used to test the code changes being submitted for review. This bundle is not automatically updated though; it’s good only for one-off testing.
  • You are an early tester of GNOME Web, looking for bleeding edge version of both GNOME Web and WebKitGTK. You can install Canary using the provided Flatpakref. Every commit on the GNOME Web master branch produces an update of Canary, that users can get through the usual flatpak update or through their flatpak-enabled app-store.

Update:

Due to an issue in the Flatpakref file, the WebKit SDK flatpak remote is not automatically added during the installation of GNOME Web Canary. So it needs to be manually added before attempting to install the flatpakref:

$ flatpak --user remote-add --if-not-exists webkit https://software.igalia.com/flatpak-refs/webkit-sdk.flatpakrepo
$ flatpak --user install https://nightly.gnome.org/repo/appstream/org.gnome.Epiphany.Canary.flatpakref

As you can see in the screenshot below, the GNOME Web branding is clearly modified compared to the other flavors of the application. The updated logo, kindly provided by Tobias Bernard, has some yellow tones and the Tech Preview stripes. Also the careful reader will notice the reported WebKitGTK version in the screenshot is a development build of SVN revision r280382. Users are strongly advised to add this information to bug reports.

As WebKit developers we are always interested in getting users’ feedback. I hope this new flavor of GNOME Web will be useful for both GNOME and WebKitGTK communities. Many thanks to Igalia for sponsoring WebKitGTK build artifacts hosting and some of the work time I spent on this side project. Also thanks to Michael Catanzaro, Alexander Mikhaylenko and Jordan Petridis for the reviews in Gitlab.

by Philippe Normand at August 02, 2021 05:15 PM

July 22, 2021

Mario Sanchez Prada

Igalia and the Chromium project

A couple of months ago I had the pleasure of speaking at the 43rd International Conference on Software Engineering (aka ICSE 2021), in the context of its “Spanish Industry Case Studies” track. We were invited to give a high level overview of the Chromium project and how Igalia contributes to it upstream.

This was an unusual chance to speak at a forum other than the usual conferences I attend, so I welcomed it as a double opportunity: to explain the project to people less familiar with Chromium than those attending events such as BlinkOn or the Web Engines Hackfest, and to spread some awareness of our work there.

Contributing to Chromium is something we’ve been doing for quite a few years already, but I think it’s fair to say that in the past 2-3 years we have intensified our contributions to the project even more and diversified the areas that we contribute to, something I’ve tried to reflect in this talk in no more than 25 minutes (quite a challenge!). Actually, it’s precisely because of this amount of contributions that we’re currently the 2nd biggest non-Google contributor to the project in number of commits, and among the Top 5 contributors by team size (see a highlight on this from BlinkOn 14’s keynote). For a small consultancy company such as ours, it’s certainly something to feel proud of.

With all this in mind, I organized the talk into two main parts: first, a general introduction to the Chromium project, and then a summary of the main upstream work that we at Igalia have recently contributed to it. I focused on the past year and a half, since that seemed like a good balance that allowed me to highlight the most important bits without adding too much information. And from what I can tell based on the feedback received so far, it seems the end result has been helpful and useful for some people without prior knowledge to understand things such as the differences between Chromium and Chrome, what ChromiumOS is and how our work on several different fronts (e.g. CSS, Accessibility, Ozone/X11/Wayland, MathML, Interoperability…) fits into the picture.

Obviously, the more technically inclined you are, and the more you know about the project, the more you’ll understand the different bits of information condensed into this talk, but my main point here is that you shouldn’t need any of that to be able to follow it, or at least that was my intention (but please let me know in the comments if you have any feedback). Here you have it:

You can watch the talk online (24:05 min) on our YouTube channel, as well as grab the original slide deck as a PDF in case you also want it for references, or to check the many links I included with pointers for further information and also for reference to the different sources used.

Last, I don’t want to finish this post without once again thanking the organizers for the invitation and for running the event, and in particular Andrés-Leonardo Martínez-Ortiz and Javier Provecho for taking care of the specific details involved with the “Spanish Industry Case Studies” track.

Thank you all

by mario at July 22, 2021 02:16 PM