Planet Igalia

October 15, 2021

Fernando Jiménez

2021 WebKit Contributors Meeting talk - WPE Android

A couple of weeks ago I attended my first WebKit Contributors Meeting and I presented this talk about WPE WebKit for Android.

October 15, 2021 12:00 AM

September 30, 2021

Brian Kardell

Making the whole web better, one canvas at a time.

Making the whole web better, one canvas at a time.

One can have an entire career on the web and never write a single canvas.getContext('2d'), so "Why should I care about this new OffscreenCanvas thing?" is a decent question for many. In this post, I'll tell you why I'm certain that it will matter to you, in real ways.

How relevant is canvas?

As a user, you know from lived experience that <video> on the web is pretty popular. It isn't remotely niche. However, many developers I talk to think that <canvas> is. The sentiment seems to be something like...

I can see how it is useful if you want to make a photo editor or something, but... It's not really a thing I've ever added to a site or think I experience much... It's kind of niche, right?

What's interesting though, is that in reality, <canvas>'s prevalence in the HTTPArchive isn't so far behind <video> (63rd/70th most popular elements respectively). It's considerably more widely used than many other standard HTML elements.

Amazing, right? I mean, how could that even be?!

The short answer is, it's just harder to recognize. A great example of this is maps. As a user, you recognize maps. You know they are common and popular. But what you perhaps don't recognize is that they're drawn on a canvas.

As a developer, there is a fair chance you have included a <canvas> somewhere without even realizing it. But again, since it is harder to recognize "ah, this is a canvas", we don't identify it the way we do video. Think about it: we include videos similarly all the time - not by directly including a <video> but via an abstraction - maybe it is a custom element or an iframe. Still, as a user you clearly identify it, so in your mind, as a developer, you count it.

If canvas is niche, it is only so in the sense of who has to worry about those details. So let's talk about why you'll care, even if you don't directly use the API...

The trouble with canvas...

Unfortunately, <canvas> itself has a fundamental flaw. Let me show you...

Canvas (old)

This video is made by Andreas Hocevar using a common mapping library, on some fairly powerful hardware. You'll note how janky it gets - what you also can't tell from the video is that user interactions are temporarily interrupted on and off as rendering tries to keep up. The interface feels a little broken and frustrating.

For whom the bell tolls

For as bad as the video above is, as is the case with all performance-related things, it's tempting to kind of shrug it off and think "Well, I don't know... it's still pretty usable - and hardware will catch up".

For all of the various appeals that have been made over the years to get us to care more about performance ("What about the fact that the majority of people use hardware less powerful than yours?" or "What about the fact that you're losing potential customers and users?" etc), we haven't moved that ball as meaningfully as we'd like. But I'd like to add one more to the list of things to think about here...

Ask not for whom the performance bell tolls, because increasingly: It tolls for you.

While we've been busy talking about phones and computers, something interesting happened: billions of new devices using embedded web rendering engines appeared. TVs, game consoles, GPS systems, audio systems, infotainment systems in cars, planes and trains, kiosks, point of sale, digital signage, refrigerators, cooking appliances, e-readers, etc. They're all using web engines.

Interestingly, if you own a high-end computer or phone, you're, if anything, more likely to encounter even more of these as a user.

Embedded systems are generally far less powerful than the general-purpose devices we usually talk about, even when they're brand new -- and their replacement rate is far slower.

So, while that moderately uncomfortable jank on your new iPhone still seems pretty bearable, it might translate to just a few (or even 1) FPS on your embedded device. Zoiks!

In other words, increasingly, that person that all of the other talks ask you to consider and empathize with... is you.

Enter: OffscreenCanvas

OffscreenCanvas is a solution to this. Its API surface is really small: it has a constructor and a getContext() method. Unlike the canvas element itself, however, it is neatly decoupled from the DOM. It can be used in a worker - in fact, OffscreenCanvas objects are transferable - you can pass them between windows and workers via postMessage. The existing DOM <canvas> API itself adds a .transferControlToOffscreen() method which will (explicitly) give you one back that is in charge of painting into that element's rectangle.
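
If you do write canvas code, the hand-off looks roughly like this - a minimal sketch, assuming a worker file named render-worker.js and a trivial bit of drawing, both of which are just placeholders here:

// main.js - hand the element's rendering over to a worker
const canvas = document.querySelector('canvas');
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker('render-worker.js');
// OffscreenCanvas is transferable, so the worker takes ownership of it
worker.postMessage({ canvas: offscreen }, [offscreen]);

// render-worker.js - the worker now owns the pixels for that rectangle
onmessage = (event) => {
  const ctx = event.data.canvas.getContext('2d');
  ctx.fillStyle = 'rebeccapurple';
  ctx.fillRect(0, 0, 100, 100); // drawn without ever blocking the main thread
};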

If you are one of the many people who don't program against canvases yourself, don't worry about the details... Instead, let me show you what that means. The practical upshot of simply decoupling this is pretty clear, even on good hardware, as you can see in this demo...

OffscreenCanvas based maps
Using OffscreenCanvas, user interactions are not blocked - the rendering is way more fluid and the interface is able to feel smooth and responsive.

A Unique Opportunity

Canvas is also pretty unique in the history of the web because it began as unusually low level. That has its pros and its cons - but one positive thing is that the fact that most people use it through an abstraction presents an interesting opportunity. We can radically improve things for pretty much all real users through the actions of a comparatively small group of people who write code directly against the actual canvas APIs. Your own work can realize this, in most cases, without any changes to your code. Potentially without you even knowing. Nice.

New super powers, same great taste

There's a knock-on effect here too that might be hard to notice at first. OffscreenCanvas doesn't create a whole new API to do its work - it's basically the same canvas context. And so are Houdini Custom Paint worklets. In fact, it's pretty hard not to see the relationship between painting on a canvas in a worker and painting on a canvas in a worklet - right? They are effectively the same idea. There is minimal new platform "stuff" but we gain whole new superpowers and a clearer architecture. To me, this seems great.
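
To make that similarity concrete, here's a minimal paint worklet sketch - the module file name, the paint name 'demo-stripes' and the drawing itself are just illustrative assumptions, not anything from the post:

// demo-paint.js, registered from the page with:
//   CSS.paintWorklet.addModule('demo-paint.js');
registerPaint('demo-stripes', class {
  paint(ctx, size) {
    // the same familiar 2D drawing calls, just running in a worklet
    for (let x = 0; x < size.width; x += 20) {
      ctx.fillRect(x, 0, 10, size.height);
    }
  }
});
// in CSS, an element could then use: background-image: paint(demo-stripes);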

What's more, while breaking off control and decoupling from the main thread is a kind of easy win for performance and an interesting superpower on its own, we actually get more than that: in the case of Houdini we are suddenly able to tap into all of the rest of the CSS infrastructure and use this to brainstorm, explore, test and polyfill interesting new paint ideas before we talk about standardizing them. Amazing! That's really good for both standards and users.

Really interestingly though: In the case of OffscreenCanvas, we now suddenly have the ability to parallelize tasks and throw more hardware at highly parallelizable problems. Maps are also an example of that, but they aren't the only one.
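
As a rough sketch of what that can look like - assuming a hypothetical tile-worker.js that renders its tile and posts it back as an ImageBitmap - the main thread can split one canvas across several workers and composite the results:

// main thread: farm a tall canvas out to several workers, one tile each
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
const workerCount = navigator.hardwareConcurrency || 4;
// rounded for the sketch; a real implementation would handle the remainder
const tileHeight = Math.ceil(canvas.height / workerCount);

for (let i = 0; i < workerCount; i++) {
  const tile = new OffscreenCanvas(canvas.width, tileHeight);
  const worker = new Worker('tile-worker.js'); // assumed worker script
  worker.onmessage = (event) => {
    // the worker replies with an ImageBitmap of its finished tile
    ctx.drawImage(event.data, 0, i * tileHeight);
  };
  // the tile is transferable, so each worker owns its own slice of the work
  worker.postMessage({ tile, row: i }, [tile]);
}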

My colleague Chris Lord recently gave a talk in which he gave a great demo visualizing an interactive and animated Mandelbrot set (below). If you're unfamiliar with why this is impressive: a fractal is a self-repeating geometric pattern, and they can be pretty intense to visualize - and even harder to make explorable in a UI. At 1080p resolution and 250 iterations, that's about half a billion complex equations per rendered frame. Fortunately, they are also an example of a highly parallelizable problem, so they make for a nice demo of a thing that was just totally impossible with web technology yesterday suddenly becoming possible with this new superpower.

OffscreenCanvas super powers!
A video of a talk from a recent WebKit Contributors meeting, showing impressive rendering. It should be time-jumped, but in case that fails, you can skip to about the 5 minute mark to see the demo.

What other doors will this open, and what will we see come from it? It will be super exciting to see!

September 30, 2021 04:00 AM

September 29, 2021

Thibault Saunier

GStreamer: one repository to rule them all

For the last few years, the GStreamer community has been analysing and discussing the idea of merging all the modules into one single repository. Since all the official modules are released in sync and the code evolves simultaneously across those repositories, having the code split up was a burden, and several core GStreamer developers believed it was worth making the effort to consolidate them into a single repository. As announced a while back, this is now effective, and this post is about explaining the technical choices and implications of that change.

You can also check out our Monorepo FAQ for a list of questions and answers.

Technical details of the unification

Since we moved to meson as a build system a few years ago, we implemented gst-build, which leverages the meson subproject feature to build all GStreamer modules as one single project. This greatly enhanced the development experience of the GStreamer framework, but we considered that we could improve it even more by having all GStreamer code in a single repository that looks the same as gst-build.

This is what the new unified git repository looks like: gst-build, now living in the main gstreamer repository, except that all the code from the GStreamer modules located in the subprojects/ directory is checked in.

This new setup now lives in the main default branch of the gstreamer repository. The master branches of all the other module repositories are now retired and frozen; no new merge requests or code changes will be accepted there.

This is only the first step and we will consider reorganizing the repository in the future, but the goal is to minimize disruptions.

The technical process for merging the repositories looks like:

foreach GSTREAMER_MODULE
    git remote add GSTREAMER_MODULE.name GSTREAMER_MODULE.url
    git fetch GSTREAMER_MODULE.name
    git merge GSTREAMER_MODULE.name/master
    git mv list_all_files_from_merged_gstreamer_module() GSTREAMER_MODULE.shortname
    git commit -m "Moved all files from " + GSTREAMER_MODULE.name
endforeach

This allows us to keep the exact same history (and checksum of each commit) for all the old gstreamer modules in the new repository which guarantees that the code is still exactly the same as before.

Releases with the new setup

In the same spirit of avoiding disruption, releases will look exactly the same as before. In the new single gstreamer repository we still have meson subprojects for each GStreamer module, and they will have their own release tarballs. In practice, this means that not much (nothing?) should change for distribution packagers and consumers of GStreamer tarballs.

What should I do with my pending MRs in old modules repositories?

Since we cannot create new merge requests in your name on gitlab, we wrote a move_mrs_to_monorepo script that you can run yourself. The script is located in the gstreamer repository and you can start moving all your pending MRs by simply calling it (scripts/move_mrs_to_monorepo.py and follow the instructions).


You can also check out our Monorepo FAQ for a list of questions and answers.

Thanks to everyone in the community for providing us with all the feedback and thanks to Xavier Claessens for co-leading the effort.

We are still working on making the transition as smooth as possible, and if you have any questions don't hesitate to come talk to us in #gstreamer on the OFTC IRC network.

Happy GStreamer hacking!

by thiblahute at September 29, 2021 09:34 PM

September 24, 2021

Samuel Iglesias

X.Org Developers Conference 2021

Last week we had our most loved annual conference: X.Org Developers Conference 2021. As a reminder, due to the COVID-19 situation in Europe (and the respective restrictions on travel and events), we kept it virtual again this year… which is a pity, as the former venue was Gdańsk, a very beautiful city (see picture below if you don’t believe me!) in Poland. Let’s see if we can finally have an XDC there!

XDC 2021

This year we had a very strong program. There were talks covering all aspects of the open-source graphics stack: from the kernel (including an Outreachy talk about VKMS) and Mesa drivers of all kinds, to input, libraries, X.Org security and Wayland robustness… we had talks about testing drivers, debugging them, our infra at freedesktop.org, and even Vulkan specs (such as Vulkan Video and VK_EXT_multi_draw) and their support in the open-source graphics stack. Definitely a very complete program that is interesting to all open-source developers working in this area. You can watch all the talks here or here, and the slides were already uploaded to the program.

On behalf of the Call For Papers Committee, I would like to thank all speakers for their talks… this conference won’t make sense without you!

Big shout-out to the XDC 2021 organizers (Intel), represented by Radosław Szwichtenberg, Ryszard Knop and Maciej Ramotowski. They did an awesome job running a very smooth conference. I can tell you that they promptly fixed any issue that happened, all of it behind the scenes, so that attendees didn't even notice anything most of the time! That is what good conference organizers do!

XDC 2021 Organizers Can I invite you to a drink at least? You really deserve it!

If you want to know more details about what this virtual conference entailed, just watch Ryszard’s talk at XDC (info, video) or you can reuse their materials for future conferences. That’s very useful info for future conference organizers!

Talking about our streaming platforms, the big novelty this year was the use of media.ccc.de as a privacy-friendly alternative to our traditional Youtube setup (last year we got feedback about this). Media.ccc.de is an open-source platform that respects your privacy and we hope it worked fine for all attendees. Our stats indicate that ~50% of our audience connected to it during the three days of the conference. That’s awesome!

Last but not least, we couldn’t make this conference without our sponsors. We are very lucky to have on board Intel as our Platinum sponsor and organizer, our Gold sponsors (Google, NVIDIA, ARM, Microsoft and AMD), our Silver sponsors (Igalia, Collabora, The Linux Foundation), our Bronze sponsors (Gitlab and Khronos Group) and our Supporters (C3VOC). Big thank you from the X.Org community!

XDC 2021 Sponsors

Feedback

We would like to hear from you and learn about what worked and what needs to be improved for future editions of XDC! Share your experience with us!

We have sent an email asking for feedback to different mailing lists (for example this). Don’t hesitate to send an email to X.Org Foundation board with all your feedback!

XDC 2022 announced!

X.Org Developers Conference 2022 has been announced! Jeremy White, from Codeweavers, gave a lightning talk presenting next year's edition! Next year XDC will not be alone… WineConf 2022 is going to be organized by Codeweavers as well and co-located with XDC!

Save the dates! October 4-5-6, 2022 in Minneapolis, Minnesota, USA.

XDC 2022: Minneapolis, Minnesota, USA Image from Wikipedia. License CC BY-SA 4.0.

XDC 2023 hosting proposals

Have you enjoyed XDC 2021? Do you think you can do it better? ;-) We are looking for organizers for XDC 2023 (most likely in Europe but we are open to other places).

We know this is a decision that takes time (triggering internal discussions, looking for volunteers, budgeting, finding a venue suitable for the event, etc.). Therefore, we encourage potentially interested parties to start the internal discussions now, so any questions they have can be answered before we open the call for proposals for XDC 2023 at some point next year. Please read what is required to organize this conference, and feel free to contact me or the X.Org Foundation board for more info if needed.

Final acknowledgment

I would like to thank Igalia for all the support I got when I decided to run for re-election to the X.Org Foundation board this year, and for allowing me to participate in the XDC organization during my work hours. It’s amazing that our Free Software and collaboration values are still present after 20 years rocking in the free world!

Igalia 20th anniversary Igalia

September 24, 2021 05:20 AM

Brian Kardell

Dad: A Personal Post

Dad: A Personal Post

Last month, my dad passed away, very unexpectedly. That night, alone with my thoughts and unable to sleep, or do anything else, I wrote this post. I didn't write it for my blog, I wrote it for me. I needed to. I didn't post it then for a lot of reasons, not the least of which is that I don't generally share personal or vulnerable things here. I can understand if that's not why you're here. Today, I decided I would, as a kind of memorial... And immediately cried. So, please: Feel free to skip this one if it pops up in your feed and you're here for the tech. This isn't that post. This one isn't for you, it's for me, and my dad.

[Posted later] Today my dad passed away, unexpectedly. I am thinking a lot, and so sad. I need to put words on a page and get them out of my head.

my dad's obit photo
My dad's obit photo. He was barely 63.

My Dad

When I was 5, my mother and my biological father, barely in their mid-20s, got a divorce. Even I could see that they weren't compatible. My mom, just finishing college, had a lot of friends and they would occasionally help us out in many ways: from picking me up from school because my mom was held up, to helping us move into our first apartment - or sometimes, just inviting us over.

One of those people, who I saw more and more of, was a young man named Jim Wyse... Jimmy... My dad, who passed away today, unexpectedly.

Legally speaking, I guess, Jimmy became my "dad" in a ceremony when I was 7 - but that's bullshit, because the truth is, I can't even tell you when it became clear that this distinction was utterly meaningless to us. I was his son, and he was my dad. It wasn't because of biology or law or ceremony, but by virtue of all of the things that ultimately matter so much more... and by choice. I couldn't tell you when, because it is seamless in my mind.

From the very beginning he cared for me. He took me camping, and fishing. He taught me to shift gears while he worked the clutch. He played with me in the yard. We wrestled and "boxed". We swam and we boated. He took me to see the movies of my childhood: The Empire Strikes Back, Superman II and Rocky 3. He gave me my first tastes of coffee, beer and wine. He told me stories of his childhood. We laughed together. He taught me to build and fix things, or at least he included me, as if my "help" (often counter-productive) really mattered. What really mattered was something more than that. It's easy to see that, now.

Early photos of my dad and me, maybe even before he was technically my dad (I am in the black hat, with me is his nephew, my late cousin Jason who died a couple of years ago).

In fact, we spent what seems like, in retrospect, an impossible amount of time together. He cared when I was sad. He celebrated my victories. He taught me to be respectful and empathetic and generous and forgiving. He provided discipline too.

Jimmy came from a large family by today's standards, 4 brothers and a sister who all grew up and spent their entire lives in the same small 3 bedroom, 1 small bathroom house. It is generous, in fact, to call it 3 bedrooms. One of them, I believe, was converted out of the largest of two when my aunt was born. I worked on houses with my dad that had walk-in closets that are larger. They weren't wealthy by any stretch of the imagination, but they were close, and he still lived in that house with his parents when I met him. He was younger than my mom.

In this I got a whole new (big) family too. Cousins, aunts, uncles and grandparents with grandchildren who would become fixtures in my life. They were, of course, all actually biologically related, and yet this distinction seems to have been totally irrelevant to all of them from the beginning as well. We spent holidays and vacations together. In fact, while we lived near enough, we spent many weekends and evenings together too. Several of them lived with us and worked for him for a stint during difficult times in their own lives.

When I was 9 my sister Jennifer was born. It would be impossible to overstate how much I loved this new baby that came into our house. And it would be impossible not to see how much he did too. Perhaps it was the fact that some people began to congratulate him on his "first child" that caused me to hear him first address the issue. It may well be the first time, though it was certainly not the last, that I heard him express just how much he loved me and reassure me that I was every bit his child. It was genuine.

By the time my sister Sarah was born there was certainly nobody I met who doubted this. I was "Jimmy and Adele's kid" and most people referred to me as a Wyse.

My sisters are much younger than me. I don't tell them enough anymore, but I hope they know how much I love them, and how much he did. Because of our age differences, I probably have different memories than them. By the time they were probably old enough to remember much, I was already in my teenage years and spending less time at home. But I have so many wonderful memories of time we all spent together.

Somehow, it is amazing to me that the first time I heard anyone refer to us as "half-brother" or "half-sister" I was 40. Despite knowing this to be a biological truth in my mind, I was considerably taken aback just to hear it, and it still feels... wrong.

Tonight this memory dawned on me again as I realized it might be difficult for me to help with arrangements. He and my mother divorced long ago, so on paper we're as good as strangers, probably.

As I spoke to my sister on the phone, this realization fresh in my head, I began to worry that perhaps there was a difference. My heart broke again as I imagined the pain my sisters must feel - is it more than my own? Perhaps it is even insensitive to not acknowledge? He was, after all, the man who held them in his arms at the hospital moments after their birth - they have known nothing else. I offered a struggled, "I know it probably isn't quite the same for us... I'm so sorry."

That this was the moment that finally prompted her to audible tears filled me with instant regret. "No one ever thought that. How could you say that? He definitely didn't see it that way." She's right, of course, and I know it. I'm saddened that I brought it up. He was my dad - and throughout my entire life he has always been there.

My teenage years were difficult. I was difficult. I didn't take school, or much of anything else, seriously. But through it all, he never gave up on me. By then he had started a small business as a general contractor and he put me to work weekends and summers (and even the occasional school day when he was very shorthanded and it was clear I wasn't going to go to school anyway). He was a constant force who walked a thin line - both teaching me, with pride, valuable skills that I might need, and simultaneously constantly pushing me to please use my brain and not my back to make a living.

When I graduated high school, by some miracle, I went to work for him full time.

The following February we went to a job near Lake Erie to work on a roof. It was just about the last day anyone would want to do such a thing. It was windy, and biting cold - just above freezing. There was easily a foot of snow on the roof, and an inch of ice below it. By 9, a freezing rain had started whipping across us too.

Cold, soaked, and more uncomfortable than I have ever been, I realized that I couldn't imagine lasting the rest of the day. Did I really imagine doing this for another 40 years or more? It was then that I realized he was right, I should do something else. I wanted to leave right then, but the shame I'd feel walking off the job because I couldn't take it kept me going for another hour... But it couldn't last.

Around 10am, I quit.

Miles from home, I sat in his truck (still very cold, wet and without heat) for many hours pondering my future. I'm sure he took some shit from the rest of the crew about it. I spent months finding a college to accept me on a trial admission program.

I tell this story so that I can add that years later, after honors and success, he told me "That was the plan. I had to show you very clearly the choice in front of you. It was one of the happiest days of my life, when you realized you didn't have to do this... But man it was cold. That was a shitty day."

Throughout my life, he's always been teaching me - sometimes directly, sometimes indirectly by letting me fall flat and being there to pick me up and set me right.

He was the model in my young life that set the bar for what I wanted to be for my own children. I also watched him show kindness, patience and understanding to many people, over the years, in ways that remain unparalleled examples to me. He was my example and the man I tried to be in so many ways.

He was the warmest soul in my darkest hours. There were times in my life where he was the only one I could talk to. On more than one occasion, he consoled and supported me in ways no one else could. He sat with me and calmed me while I cried so hard I couldn't speak. I tried to be there for him in some of his difficult times too, and he had some rough ones. He wasn't perfect either, but who is?

The truth is, he was more to me than "dad" expresses. Much more.

7 years ago, during one of those difficult times in my life, after my own long relationship broke up, I purchased the home that he grew up in from my Aunt. It needed a lot of work at the time, and he came and did some of it. He replaced the roof and installed new windows in the front. We were planning on doing the back last fall, until the pandemic. A boom in new work after things began to turn around meant we'd put it off till this fall.

It's funny, and sad, how much we (or at least I) put off till tomorrow, and then miss the chance because there are no more tomorrows. I haven't physically seen him (or much of anyone, really) in a year.

A lot of our conversation since the pandemic has centered on the old place: Me asking him questions about how to do something, or sending him pictures of improvements or changes I'd made. He'd always reply encouragingly, celebrating my work and expressing happiness that this home remained in the family. "Your grandparents would be happy".

Tonight, as I went to make a call, I realized that I have an unread message on my phone from him from last week. He was replying to a photo I sent him of some new landscaping. It was a simple message. "Looks great!" Two words and an exclamation point. That's it. Nothing deep, but it made me cry. I missed it at the time, and these are his last words to me. Encouraging me.

Each night I fall asleep in the same room that he did until we met. My bedroom is his old bedroom that I used to go and play in and wrestle with my cousins. I think about all of this often - and how lucky I am that Jim was my dad and that he loved me. I loved him too - and I'm glad I can say that we both knew it. Tonight, won't be different in that respect - I'm sure I'll replay all of this in my mind... But.. It is quite a bit different, isn't it? He's gone now.

Photo memories

One of the things I spent a long time doing since is looking through old photos. Most of these are bad photos of photos, but they give some context to all of this and are some great memories for me... Even if they aren't all of him, he's in all of the memories.

I was in my mom and dad's wedding party, in fact. I am pretty sure he helped pick my suit. This is me (in the suit), outside the reception where a bunch of us helped decorate their car with paper flowers.
This is a photo of my dad on his honeymoon after he married my mom. He was 21. So young. Only 14 years older than me, in fact.
A photo of me and my sister Jennifer. I was 9 by the time she was born. My dad took this photo of us on vacation.
Me and my youngest sister, Sarah, when she was born. Also, taken by dad. He loved taking pictures of us (he got much better at it later).
This is me at my 6th grade graduation. My dad got right up there on the stage to take a photo. He was like that - always cheering me on, boisterously. I almost didn't go to my highschool graduation. He talked me into it. I could hear him over the entire crowd when I walked up.
A photo of me and my two sisters, taken by my dad.
My dad and I were always horsing around. These memories of him are so firmly engrained in my mind, and still how I see him, that just a few years ago I initiated similar horseplay in his pool (we had fun), before remembering that he was 60 and had a bad back, and just very quickly let him take me down. I say "let" only to say I physically stopped - but I'm not gonna lie, my dad was rugged as hell, even then.
In 2014, a photo of me, my dad and my two sisters (and my sister's husband) at his house for Christmas, after I moved back to Pittsburgh. He and my mom had been divorced for maybe a decade. He'd been remarried and divorced again since. He never stopped being my dad for a minute.

September 24, 2021 04:00 AM

September 20, 2021

Manuel Rego

Igalia 20th Anniversary

This is a brief post about an important event that is happening today.

Back in 2001 a group of 10 engineers from the University of A Coruña in Galicia (Spain) founded Igalia to run a cooperative business around the free software world. Today it’s its 20th anniversary so it’s time to celebrate! 🎂

Igalia 20th anniversary logo Igalia 20th anniversary logo

In my particular case I joined the company in 2007, just after graduating from university. During these years I have had the chance to be involved in many interesting projects and communities. On top of that, I’ve learnt a ton of things about how a company is managed. I’ve also seen Igalia grow in size and move from local projects and customers to working with the biggest fishes in the IT industry.

I’m very grateful to the founders and all the people who have been involved in making Igalia a successful project during all these years. I’m also very proud of all that we have achieved so far, and of how we have done it without sacrificing any of our principles.

We’re now more than 100 people from all over the world. Awesome colleagues, partners and friends who share the amazing values that define the direction of the company. Igalia is today the reference consultancy in some of the most important open source communities out there, which is an outstanding achievement for such a small company.

This time the celebration has to be at home, but we’ll certainly throw a big party when we can all meet together again in the future. Meanwhile we have a brand new 20th anniversary logo, together with a small website in case you want to know more about Igalia’s history.

Myself with the Igalia 20th anniversary t-shirt Myself with the Igalia 20th anniversary t-shirt

Sometimes it happens that the company you work for turns 20 years old. But it’s far less common that the company you co-own turns 20 years old. Let’s enjoy this moment. Looking forward to many more great years to come! 🎉

September 20, 2021 10:00 PM

September 02, 2021

Byungwoo Lee

CSS Selectors :has()

Selector? Combinator? Subject?

As described in the Selectors Level 4 spec, a selector represents a particular pattern of element(s) in a tree structure. We can select specific elements in a tree structure by matching the pattern to the tree.

Generally, this pattern involves two distinct concepts: first, a means to express conditions to be tested on an element itself (simple selectors or compound selectors); second, a means to express conditions on the relationship between two elements (combinators).

And the subject of a selector is any element matched by the selector.

The limits of subjects, so far

When you have a reference element in a DOM tree, you can select other elements with a CSS selector.

In a generic tree structure, an element can have four kinds of relationships to other elements.

  • an element is an ancestor of another element.
  • an element is a previous sibling of another element.
  • an element is a next sibling of another element.
  • an element is a descendant of another element.

CSS Selectors, to date, have only allowed the last 2 (‘is a next sibling of’ and ‘is a descendant of’).

So in the CSS world, Thor can say “I am Thor, son of Odin” like this: Odin > Thor. But there has been no way for Darth Vader to tell Luke, “I’m your father”.

At least, these are the limits of what has been implemented and is shipping in every browser to date. However, :has() in the CSS Selectors spec provides the expression: DarthVader:has(> Luke)

The reason for the limitation is mainly efficiency.

The primary use of selectors has always been in CSS itself. Pages often have 500-2000 CSS rules and slightly more elements in them. Selectors act as filters in the process of applying style rules to elements. If we have 2,000 CSS rules for 2,000 elements, matching could be done at least 2,000 times, and in the worst case (in theory) 4,000,000 times. In the browser, the tree is changing constantly - even a static document is rapidly mutated (built) as it is parsed - and we try to render all of this incrementally and at 60 fps. In summary, selector matching is performed very frequently in performance-critical processes, so it must be designed and implemented for very high performance. And one of the effective ways to achieve that is to keep the problem simple by limiting the complex cases.

In the tree structure, checking a descendant relationship is more efficient than checking an ancestor relationship because an element has only one parent, but it can have multiple children.

<div id=parent>
  <div id=subject>
    <div id=child1></div>
    <div id=child2></div>
    ...
    <div id=child10></div>
  </div>
</div>
<script>
subject.matches('#parent > :scope');
// matches   : Are you a child of #parent ?
// #subject  : Yes, my parent is #parent.

subject.matches(':has(> #child10)');
// matches   : Are you a parent of #child10 ?
// #subject  : Wait a second, I have to lookup all my children.
//             Yes, #child10 is one of my children.
</script>

By removing one of the two opposite directions, we can always place the subject of a selector to the right, no matter how complex the selector is.

  • ancestor subject
    -> subject is a descendant of ancestor
  • previous_sibling ~ subject
    -> subject is a next sibling of previous_sibling
  • previous_sibling ~ ancestor subject
    -> subject is a descendant of ancestor, which is a next sibling of previous_sibling

With this limitation, we can get the advantages of having simple data structures and simple matching sequences.

<style>
A > B + C { color: red; }
</style>
<!--
'A > B + C' can be parsed as a list of selector/combinator pair.
[
  {selector: 'C', combinator: '+'},
  {selector: 'B', combinator: '>'},
  {selector: 'A', combinator: null}
]
-->
<A>       <!-- 3. match 'A' and apply style to C if matched-->
  <B></B> <!-- 2. match 'B' and move to parent if matched-->
  <C></C> <!-- 1. match 'C' and move to previous if matched-->
</A>

:has() allows you to select subjects at any position

With combinators, we can only select downward (descendants, next siblings or descendants of next siblings) from a reference element. But there are many other elements that we can select if the other two relationships, ancestors and previous siblings, are supported.

<div>               <!-- ? -->
  <div></div>         <!-- ? -->
</div>
<div>               <!-- ? -->
  <div>               <!-- ? -->
    <div></div>         <!-- ? -->
  </div>
  <div id=reference>  <!-- #reference -->
    <div></div>         <!-- #reference > div -->
  </div>
  <div>               <!-- reference + div -->
    <div></div>         <!-- reference + div > div -->
  </div>
</div>
<div>               <!-- ? -->
  <div></div>         <!-- ? -->
</div>

:has() provides the way of selecting upward (ancestors, previous siblings, previous siblings of ancestors) from a reference element.

<div>               <!-- div:has(+ div > #reference) -->
  <div></div>         <!-- ? -->
</div>
<div>               <!-- div:has(> #reference) -->
  <div>               <!-- div:has(+ #reference) -->
    <div></div>         <!-- ? -->
  </div>
  <div id=reference>  <!-- #reference -->
    <div></div>         <!-- #reference > div -->
  </div>
  <div>               <!-- #reference + div -->
    <div></div>         <!-- #reference + div > div -->
  </div>
</div>
<div>               <!-- ? -->
  <div></div>         <!-- ? -->
</div>

And with some simple combinations, we can select all elements around the reference element.

<div>               <!-- div:has(+ div > #reference) -->
  <div></div>         <!-- div:has(+ div > #reference) > div -->
</div>
<div>               <!-- div:has(> #reference) -->
  <div>               <!-- div:has(+ #reference) -->
    <div></div>         <!-- div:has(+ #reference) > div -->
  </div>
  <div id=reference>  <!-- #reference -->
    <div></div>         <!-- #reference > div -->
  </div>
  <div>               <!-- #reference + div -->
    <div></div>         <!-- #reference + div > div -->
  </div>
</div>
<div>               <!-- div:has(> #reference) + div -->
  <div></div>         <!-- div:has(> #reference) + div > div -->
</div>

What is the problem with :has() ?

As you might already know, this pseudo class has been delayed for a long time despite the constant interest.

There are many complex situations that make things difficult when we try to support :has().

  • There are many, many complex cases of selector combinations.
  • Those cases are handled in the selector matching operations and style invalidation operations in the style engine.
  • Selector matching and style invalidation are very performance-critical operations.
  • The style engine is carefully designed and highly optimized based on the existing two relationships (is a descendant of, is a next sibling of).
  • Each browser engine has its own design and optimizations for those operations.

In this context, :has() provides the other two relationships (is a parent of, is a previous sibling of), and the problems and concerns start from this.

When we meet a complex and difficult problem, the first strategy we can take is to break it down into smaller ones. For :has(), we can divide the problems by the CSS selector profiles.

Problems of the :has() matching operation

The :has() matching operation basically implies descendant lookup overhead, as described previously. This is an unavoidable overhead we have to accept when we want to use :has() functionality.

In some cases, :has() matching can be O(n²) because of duplicated argument matching operations. When we call document.querySelectorAll('A:has(B)') on the DOM <A><A><A><A><A><A><A><A><A><A><B>, there can be unnecessary argument selector matching because the descendant traversal can occur for every element A. If so, the number of argument matching operations can be 55 (=10+9+8+7+6+5+4+3+2+1) without any optimization, whereas 10 is optimal for this case.

There can be more complex cases involving shadow tree boundary crossing.

Problems of the :has() Style invalidation

In a nutshell, the style engine tries to invalidate the styles of elements that are possibly affected by a DOM mutation. It has long been designed and highly optimized based on the assumption that any possibly affected element is the changed element itself or is downward of it (a descendant).

<style>
.mutation .subject { color: red; }
</style>
<div>          <!-- classList.toggle('mutation') affect .subject -->
  <div class="subject"></div>       <!-- .subject is in downward -->
</div>

But :has() invalidation is different because the possibly affected element is upward of the changed element (an ancestor, rather than a descendant).

<style>
.subject:has(.mutation) { color: red; }
</style>
<div class="subject">                 <!-- .subject is in upward -->
  <div></div>  <!-- classList.toggle('mutation') affect .subject -->
</div>

In some cases, a change can affect elements in both the upward and downward directions.

<style>
.subject1:has(:is(.mutation1 .something)) { color: red; }
.something:has(.mutation2) .subject2 { color: red; }
</style>
<div class="subject1">              <!-- .subject1 is in upward -->
  <div>     <!-- classList.toggle('mutation1') affect .subject1 -->
    <div class="subject1">        <!-- .subject1 is in downward -->
      <div class="something"></div>
    </div>
  </div>
</div>
<div class="something">
  <div class="subject2">            <!-- .subject2 is in upward -->
    <div>   <!-- classList.toggle('mutation2') affect .subject2 -->
      <div class="subject2"></div><!-- .subject2 is in downward -->
    </div>
  </div>
</div>

Actually, a change can affect elements anywhere in the tree.

<style>
:has(~ .mutation) .subject { color: red; }
:has(.mutation) ~ .subject { color: red; }
</style>
<div>
  <div>
    <div class="subject">        <!-- not in upward or downward -->
    </div>
  </div>
  <div></div> <!-- classList.toggle('mutation') affect .subject -->
</div>
<div class="subject"></div>      <!-- not in upward or downward -->

The expansion of the invalidation traversal scope (from the downward sub-tree to the entire tree) can cause performance degradation. And the violation of the basic assumptions of the invalidation logic (finding a subject in the entire tree instead of only downward) can cause performance degradation and increase implementation complexity or maintenance overhead, because it will be hard or impossible for the existing invalidation logic to support :has() invalidation as it is.

(There are many more details about :has() invalidation, and those will be covered later.)

What is the current status of :has() ?

Thanks to funding from eye/o, the :has() prototyping in the Chromium project was started by Igalia after some investigations.

(You can get rich background about this from the post “Can I :has()?” by Brian Kardell.)

Prototyping is still underway, but here is our progress so far.

  • Chromium
    • Landed CLs to support :has() selector matching (3 CLs)
    • Bug fix (2 CLs)
    • Add experimental feature flag for :has() in snapshot profile (1 CL)
  • WPT (web platform test)
    • Add tests (3 Pull requests)
  • CSS working group drafts

:has() in snapshot profile

Regarding :has() in the snapshot profile: as of now, Chrome Dev (version 94, released August 19) supports all the :has() functionality except some cases involving shadow tree boundary crossing.

You can try :has() with the JavaScript APIs (querySelectorAll, querySelector, matches, closest) in the snapshot profile after enabling the runtime flag enable-experimental-web-platform-features.
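
For example, with the flag enabled, something like this should work from the console or a script (the class names are just placeholders for the example):

// snapshot profile: :has() in the JavaScript selector APIs
// every .card that contains an <img> somewhere inside it
const cardsWithImages = document.querySelectorAll('.card:has(img)');

// does this element directly contain an .error child?
const el = document.querySelector('.card');   // placeholder element
const hasErrorChild = el && el.matches(':has(> .error)');

// closest ancestor that directly contains a .toolbar child
const container = el && el.closest(':has(> .toolbar)');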

Screenshot: :has() in the snapshot profile is behind the experimental features flag

You can also enable it with the command-line flag CSSPseudoHasInSnapshotProfile.

$ google-chrome-unstable \
        --enable-blink-features=CSSPseudoHasInSnapshotProfile

:has() in both (snapshot/live) profile

You can enable :has() in both profiles with the command-line flag CSSPseudoHas.

$ google-chrome-unstable --enable-blink-features=CSSPseudoHas

Support for :has() in the live profile is still in progress. When you enable :has() with this flag, you will see that style rules with :has() only work at load time; styles will not be recalculated after DOM changes.
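
A minimal illustration of that limitation, assuming the flag above is enabled (the class names are only for the example): the rule applies when the page is first styled, but the later DOM change is not reflected.

<style>
.subject:has(.mutation) { color: red; }
</style>
<div class="subject">              <!-- red at load time -->
  <div class="mutation"></div>
</div>
<script>
// with the current prototype, this mutation does not trigger a style
// recalculation, so the .subject above keeps its initial red color
document.querySelector('.mutation').remove();
</script>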

Screenshot: :has() in the live profile only supports the initial style

by Byungwoo's Blog at September 02, 2021 03:00 PM

August 31, 2021

Juan A. Suárez

Implementing Performance Counters in V3D driver

Let me talk here about how we implemented the support for performance counters in the Mesa V3D driver, the OpenGL driver used by the Raspberry Pi 4. For reference, the implementation is very similar to the one already available (not done by me, by the way) for the VC4, OpenGL driver for the Raspberry Pi 3 and prior devices, also part of Mesa. If you are already familiar with how this is implemented in VC4, then this will mostly be a refresher.

First of all, what are these performance counters? Most processors nowadays contain some hardware facilities to get measurements about what is happening inside the processor. And of course graphics processors aren’t different. In this case, the graphics chips used by Raspberry Pi devices (manufactured by Broadcom) can record a bunch of different graphics-related parameters: how many quads are passing or failing depth/stencil tests, how many clock cycles are spent on doing vertex/fragment shading, hits/misses in the GPU cache, and many other values. In fact, with the V3D driver it is possible to measure around 87 different parameters, and up to 32 of them simultaneously. Quite a few fewer in VC4, though. But still a lot.

On a hardware level, using these counters is just a matter of writing and reading some GPU registers: first, write the registers to select what we want to measure, then a few more to start the measurement, and finally read other registers containing the results. But of course, much like we don’t expect users to write GPU assembly code, we don’t expect users to write to GPU registers directly. Moreover, even Mesa drivers such as V3D can’t interact directly with the hardware; rather, this is done through the kernel, which is the one that can use the hardware directly, through the DRM subsystem. For the case of V3D (and the same applies to VC4 and, in general, to any other driver), we have a driver in user space (whether the OpenGL driver, V3D, or the Vulkan driver, V3DV), and a kernel driver in kernel space, unsurprisingly also called V3D. The user-space driver is in charge of translating all the commands and options created with the OpenGL API or another API into batches of commands to be executed by the GPU, which are submitted to the kernel driver as DRM jobs. The kernel does the proper actions to send these to the GPU to execute them, including touching the proper registers. Thus, if we want to implement support for the performance counters, we need to modify the code in two places: the kernel and the (user-space) driver.

Implementation in the kernel

Here we need to think about how to deal with the GPU and the registers to make the performance counters work, as well as the API we provide to user-space to use them. As mentioned before, the approach we are following here is the same as the one used in the VC4 driver: performance counters monitors. That is, the user-space driver creates one or more monitors, specifying for each monitor what counters it is interested in (up to 32 simultaneously, the hardware limit). The kernel returns a unique identifier for each monitor, which can be used later to do the measurement, query the results, and finally destroy it when done.

In this case, there isn’t an explicit start/stop of the measurement. Rather, every time the driver wants to measure a job, it includes the identifier of the monitor it wants to use for that job, if any. Before submitting a job to the GPU, the kernel checks if the job has a monitor identifier attached. If so, it checks whether the previous job executed by the GPU was also using the same monitor identifier, in which case it doesn’t need to do anything other than send the job to the GPU, as the required performance counters are already enabled. If the monitor is different, then it first needs to read the current counter values (through the proper GPU registers), adding them to the current monitor, stop the measurement, configure the counters for the new monitor, start the measurement again, and finally submit the new job to the GPU. In this process, if it turns out there wasn’t a monitor under execution before, then it only needs to execute the last steps.

The reason to do all this is that multiple applications can be executing at the same time, some using (different) performance counters, and most of them probably not using performance counters at all. But the performance counter values of one application shouldn’t affect any other application so we need to make sure we don’t mix up the counters between applications. Keeping the values in their respective monitors helps to accomplish this. There is still a small requirement in the user-space driver to help with accomplishing this, but in general, this is how we avoid the mixing.

If you want to take a look at the full implementation, it is available in a single commit.

Implementation in the driver

Once we have a way to create and manage the monitors, using them in the driver is quite easy: as mentioned before, we only need to create a monitor with the counters we are interested in and attach it to the job to be submitted to the kernel. In order to make things easier, we keep a mirror-like version of the monitor inside the driver.

This approach is adequate when you are developing the driver, and you can add code directly on it to check performance. But what about the final user, who is writing an OpenGL application and wants to check how to improve its performance, or check any bottleneck on it? We want the user to have a way to use OpenGL for this.

Fortunately, there is in fact a way to do this through OpenGL: the GL_AMD_performance_monitor extension. This OpenGL extension provides an API to query what counters the hardware supports, to create monitors, to start and stop them, and to retrieve the values. It looks very similar to what we have described so far, except for an important difference: the user needs to start and stop the monitors explicitly. We will explain later why this is necessary. But the key point here is that when we start a monitor, this means that from that moment on, until stopping it, any job created and submitted to the kernel will have the identifier of that monitor attached. This implies that only one monitor can be enabled in the application at the same time. But this isn’t a problem, as this restriction is part of the extension.

Our driver does not implement this API directly, but through “queries”, which are used then by the Gallium subsystem in Mesa to implement the extension. For reference, the V3D driver (as well as the VC4) is implemented as part of the Gallium subsystem. The Gallium part basically handles all the hardware-independent OpenGL functionality, and just requires the driver hook functions to be implemented by the driver. If the driver implements the proper functions, then Gallium exposes the right extension (in this case, the GL_AMD_performance_monitor extension).

For our case, it requires the driver to implement functions to return which counters are available, to create or destroy a query (in this case, the query is the same as the monitor), start and stop the query, and once it is finished, to get the results back.

At this point, I would like to explain a bit better what it implies to stop the monitor and get the results back. As explained earlier, stopping the monitor or query means that from that moment on, any new job submitted to the kernel (and thus to the GPU) won’t have a performance monitor identifier attached, and hence won’t be measured. But it is important to know that while the driver submits jobs to the kernel at its own pace, these aren’t executed immediately; the GPU needs time to execute the jobs, so the kernel puts the arriving jobs in a queue, to be submitted to the GPU. This means that when the user stops the monitor, there could still be jobs in the queue that haven’t been executed yet and are thus pending to be measured.

And how do we know that the jobs have been executed by the GPU? The hook function that implements getting the query results has a “wait” parameter, which tells whether the function needs to wait for all the pending jobs to be measured to finish executing, or not. If it doesn’t wait but there are pending jobs, then it just returns, telling the caller this fact. This allows the caller to do other work in the meantime and query again later, instead of blocking while waiting for all the jobs to be executed. This is implemented through sync objects: every time a job is sent to the kernel, there’s a sync object that is used to signal when the job has finished executing. This is mainly used as a way to synchronize jobs. In our case, when the user finalizes the query we save the sync object of the last submitted job, and we use it to know when that last job has been executed.

There are quite a few details I’m not covering here. If you are interested though, you can take a look at the merge request.

Gallium HUD

So far we have seen how the performance counters are implemented, and how to use them. In all cases it requires writing code to create the monitor/query, start/stop it, and query back the results, either in the driver itself or in the application through the GL_AMD_performance_monitor extension1.

But what if we want to get some general measurements without adding code to the application or the driver? Fortunately, there is an environment variable, GALLIUM_HUD, that, when set correctly, will show on top of the application some graphs with the measured counters.

Using it is very easy; set it to help to learn how to use it, as well as to get a list of the available counters for the current hardware.

As example:

$ env GALLIUM_HUD=L2T-CLE-reads,TLB-quads-passing-z-and-stencil-test,QPU-total-active-clk-cycles-vertex-coord-shading scorched3d

You will see:

Performance Counters in Scorched 3D

Bear in mind that to be able to use this you will need a kernel that supports performance counters for V3D. At the time of writing, no kernel has been released yet with this support. If you don’t want to wait for it, you can download the patch, apply it to your Raspberry Pi kernel (it has been tested on the 5.12 branch), and build and install it.

  1. All this is for the case of using OpenGL; if your application uses Vulkan, there are other similar extensions, which are not yet implemented in our V3DV driver at the moment of writing this post. 

August 31, 2021 10:00 PM

August 27, 2021

Qiuyi Zhang (Joyee)

Building V8 on an M1 MacBook

I’ve recently got an M1 MacBook and played around with it a bit. It seems many open source projects still haven’t added MacOS with ARM64

August 27, 2021 03:52 PM

On deps/v8 in Node.js

I recently ran into a V8 test failure that only showed up in the V8 fork of Node.js but not in the upstream. Here I’ll write down my

August 27, 2021 03:52 PM

Tips and Tricks for Node.js Core Development and Debugging

I thought about writing some guides on this topic in the nodejs/node repo, but it’s easier to throw whatever tricks I personally use on

August 27, 2021 03:52 PM

August 20, 2021

Brian Kardell

Experimenting with :has()

Experimenting with :has()

Back in May, I wrote Can I :has()?. In that piece, I discussed the :has() pseudo-class and the practical reasons it's been hard to advance. Today I'll give you some updates on advancing :has() efforts in Chromium, and how you can play with it today.

In my previous piece I explained that Igalia had been working to help move these discussions along by doing the research that has been difficult for vendors to prioritize (funded by eyeo), and that we believe we'd gotten somewhere: we'd done a lot of research, developed a prototype in a custom build of Chromium and provided what we believed were good proofs for discussion. The day that I wrote that last piece, we were filing an intent to prototype in Chromium.

Today, I'd like to give some updates on those efforts...

Where things stand in Chromium, as of yesterday

As you may or may not know, the process for shipping new features in Chromium is pretty involved and careful. There are several 'intent' steps, many reviews along the way, and many channels (canary, dev, beta, stable). Atop this are also things which launch with command line flags, runtime feature flags, origin trials (experimentally on for some sites opted in), reverse origin trials (some sites opted out) and field trials/finch flags (rollout to some % of users on or off by default).

Effectively, things get more serious and certain, and as that happens we want to expand the reach of these things by making it easier for more developers to experiment with it.

Previously...

For a while now our up-streaming efforts have allowed you to pass command line flags to enable some support in early channels. Either

--enable-blink-features=CSSPseudoHasInSnapshotProfile
--enable-blink-features=CSSPseudoHas

The former adds support for the use of the :has() pseudo class in the JavaScript selector APIs ('the snapshot/static profile'), and the latter enables support in CSS stylesheets too.
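
As a quick illustration of what the snapshot profile enables, here is the kind of thing you can try from the console or a script once the flag is on (the selectors here are just made-up examples, not part of any proposal):

// Snapshot/static profile: :has() in the JavaScript selector APIs.
// Find every <section> that contains an <img> anywhere inside it.
const sectionsWithImages = document.querySelectorAll('section:has(img)');

// Find list items that directly contain a checked checkbox.
const checked = document.querySelectorAll('li:has(> input[type="checkbox"]:checked)');

// With the CSSPseudoHas flag, the live profile lets stylesheets do the same, e.g.:
//   figure:has(figcaption) { border: 1px solid gray; }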

These ways still work, but they involve a lot more friction than most developers will take the time to learn, figure out, and try. Most of us don't launch from a command line.

New Advancements!

As things have gotten more stable and serious, we're moving along and making some things easier...

As of the dev channel release 94.0.4606.12 (yesterday), enabling support in the JavaScript selector APIs is now as simple as enabling the experimental web platform features runtime flag. Chances are, a number of readers already have this flag flipped, so low friction indeed!

Support in the JavaScript APIs has always involved far fewer unknowns and challenges, but what's held us back from adding support there first has always been a desire to prevent splitting, and a lack of ability to answer questions about whether the main, live CSS profile could be supported, what limits it would need and so on. We feel like we have a much better grip on many of these questions now, and so things are moving along a bit.

We hope that this encourages more people to try it out and provide feedback, open bugs, or just add encouragement. Let us know if you do!

Much more at Ad Blocker Dev Summit 2021

I'm also happy to note that I'll be speaking, along with my colleague Byungwoo Lee and eyeo's @shwetank and @WebReflection at Ad Blocker Dev Summit 2021 on October 21. Looking forward to being able to provide a lot more information there on the history, technical challenges, process, use cases and impacts! Hope to see you there!

August 20, 2021 04:00 AM

August 13, 2021

Qiuyi Zhang (Joyee)

My 2019

It’s that time of the year again! I did not manage to write a recap about my 2018, so I’ll include some reflection about that year in

August 13, 2021 06:21 PM

Uncaught exceptions in Node.js

In this post, I’ll jot down some notes that I took when refactoring the uncaught exception handling routines in Node.js. Hopefully it

August 13, 2021 06:21 PM

My 2017

I decided to write a recap of my 2017 because looking back, it was a very important year to me.

August 13, 2021 06:21 PM

New Blog

I’ve been thinking about starting a new blog for a while now. So here it is.

Not sure if I am going to write about tech here.

August 13, 2021 06:21 PM

August 11, 2021

Danylo Piliaiev

Testing Vulkan drivers with games that cannot run on the target device

Here I’m playing “Spelunky 2” on my laptop and simultaneously replaying the same Vulkan calls on an ARM board with Adreno GPU running the open source Turnip Vulkan driver. Hint: it’s an x64 Windows game that doesn’t run on ARM.

The bottom right is the game I’m playing on my laptop, the top left is GFXReconstruct immediately replaying Vulkan calls from the game on ARM board.

How is it done? And why would it be useful for debugging? Read below!


Debugging issues a driver faces with real-world applications requires the ability to capture and replay graphics API calls. However, for mobile GPUs it becomes even more challenging since for a Vulkan driver the main “source” of real-world workloads is x86-64 apps that run via Wine + DXVK, mainly games which were made for desktop x86-64 Windows and do not run on ARM. Efforts are being made to run these apps on ARM but they are still a work in progress. And we want to test the drivers NOW.

The obvious solution would be to run those applications on an x86-64 machine, capturing all Vulkan calls, and then replay those calls on a second machine where we cannot run the app. This way it would be possible to test the driver even without running the application directly on it.

The main trouble is that Vulkan calls made on one GPU + driver combo are not generally compatible with another GPU + driver combo, sometimes not even within one GPU vendor. There are different memory capabilities (VkPhysicalDeviceMemoryProperties), different memory requirements for buffers and images, different extensions available, and different optional features supported. It is easier with OpenGL, but there are also some incompatibilities there.

There are two open-source vendor-agnostic tools for capturing Vulkan calls: RenderDoc (captures single frame) and GFXReconstruct (captures multiple frames). RenderDoc at the moment isn’t suitable for the task of capturing applications on desktop GPUs and replaying on mobile because it doesn’t translate memory type and requirements (see issue #814). GFXReconstruct on the other hand has the necessary features for this.

I’ll show a couple of tricks with GFXReconstruct I’m using to test things on Turnip.


Capturing with GFXReconstruct

At this point you either have the application itself or, if it doesn’t use Vulkan, a trace of its calls that could be translated to Vulkan. There are detailed instructions on how to use GFXReconstruct to capture a trace on a desktop OS. However, there are no clear instructions on how to do this on Android (see issue #534); fortunately there is one in Android’s documentation:

Android how-to (click me)
For Android 9 you should copy layers to the application which will be traced
For Android 10+ it's easier to copy them to com.lunarg.gfxreconstruct.replay
You need a userdebug build of Android, or probably a rooted device

# Push GFXReconstruct layer to the device
adb push libVkLayer_gfxreconstruct.so /sdcard/

# Since there is no APK for the capture layer,
# copy the layer to e.g. folder of com.lunarg.gfxreconstruct.replay
adb shell run-as com.lunarg.gfxreconstruct.replay cp /sdcard/libVkLayer_gfxreconstruct.so .

# Enable layers
adb shell settings put global enable_gpu_debug_layers 1

# Specify target application
adb shell settings put global gpu_debug_app <package_name>

# Specify layer list (from top to bottom)
adb shell settings put global gpu_debug_layers VK_LAYER_LUNARG_gfxreconstruct

# Specify packages to search for layers
adb shell settings put global gpu_debug_layer_app com.lunarg.gfxreconstruct.replay

If the target application doesn’t have rights to write into external storage - you should change where the capture file is created:

adb shell "setprop debug.gfxrecon.capture_file '/data/data/<target_app_folder>/files/'"


However, when trying to replay on another GPU the trace you captured, it will most likely result in an error:

[gfxrecon] FATAL - API call vkCreateDevice returned error value VK_ERROR_EXTENSION_NOT_PRESENT that does not match the result from the capture file: VK_SUCCESS.  Replay cannot continue.
Replay has encountered a fatal error and cannot continue: the specified extension does not exist

Or other errors/crashes. Fortunately, we can limit the capabilities of the desktop GPU with VK_LAYER_LUNARG_device_simulation.

When simulating another GPU, VK_LAYER_LUNARG_device_simulation should be told to intersect the capabilities of both GPUs, making the capture compatible with both of them. This can be achieved with the recently added environment variables:

VK_DEVSIM_MODIFY_EXTENSION_LIST=whitelist
VK_DEVSIM_MODIFY_FORMAT_LIST=whitelist
VK_DEVSIM_MODIFY_FORMAT_PROPERTIES=whitelist

The whitelist name is rather confusing because it essentially means “intersection”.

One would also need a JSON file describing the target GPU’s capabilities, which can be obtained by running:

vulkaninfo -j &> <device_name>.json

The final command to capture a trace would be:

VK_LAYER_PATH=<path/to/device-simulation-layer>:<path/to/gfxreconstruct-layer> \
VK_INSTANCE_LAYERS=VK_LAYER_LUNARG_gfxreconstruct:VK_LAYER_LUNARG_device_simulation \
VK_DEVSIM_FILENAME=<device_name>.json \
VK_DEVSIM_MODIFY_EXTENSION_LIST=whitelist \
VK_DEVSIM_MODIFY_FORMAT_LIST=whitelist \
VK_DEVSIM_MODIFY_FORMAT_PROPERTIES=whitelist \
<the_app>

Replaying with GFXReconstruct

gfxrecon-replay -m rebind --skip-failed-allocations <trace_name>.gfxr
  • -m Enable memory translation for replay on GPUs with memory types that are not compatible with the capture GPU’s
    • rebind Change memory allocation behavior based on resource usage and replay memory properties. Resources may be bound to different allocations with different offsets.
  • --skip-failed-allocations skip vkAllocateMemory, vkAllocateCommandBuffers, and vkAllocateDescriptorSets calls that failed during capture

Without these options replay would fail.

Now you could easily test any app/game on your ARM board, if you have enough RAM =) I even successfully ran a capture of “Metro Exodus” on Turnip.

But what if I want to test something that requires interactivity?

Or what if you don’t want to save a huge trace on disk, which could grow to tens of gigabytes if the application runs for a considerable amount of time?

During recording, GFXReconstruct just appends calls to a file; there are no additional post-processing steps. Given that, the next logical step is to skip writing to disk and send the Vulkan calls over the network!

This would allow us to interact with the application and immediately see the results on another device with different GPU. And so I hacked together a crude support of over-the-network replay.

The only difference from ordinary tracing is that now, instead of a file, we have to specify a network address of the target device:

VK_LAYER_PATH=<path/to/device-simulation-layer>:<path/to/gfxreconstruct-layer> \
    ...
GFXRECON_CAPTURE_FILE="<ip>:<port>" \
<the_app>

And on the target device:

while true; do gfxrecon-replay -m rebind --sfa ":<port>"; done

Why while true? It is common for DXVK to call vkCreateInstance several times, leading to the creation of several traces. When replaying over the network we therefore want gfxrecon-replay to immediately restart when one trace ends, to be ready for another.

You may want to bring the FPS down to match the capabilities of the lower-powered GPU in order to prevent constant hiccups. This can be done either with libstrangle or with mangohud:

  • stranglevk -f 10
  • MANGOHUD_CONFIG=fps_limit=10 mangohud

You have seen the result at the start of the post.

by Danylo Piliaiev at August 11, 2021 09:00 PM

August 10, 2021

Iago Toral

An update on feature progress for V3DV

I’ve been silent here for quite some time, so here is a quick summary of some of the new functionality we have been exposing in V3DV, the Vulkan driver for Raspberry PI 4, over the last few months:

  • VK_KHR_bind_memory2
  • VK_KHR_copy_commands2
  • VK_KHR_dedicated_allocation
  • VK_KHR_descriptor_update_template
  • VK_KHR_device_group
  • VK_KHR_device_group_creation
  • VK_KHR_external_fence
  • VK_KHR_external_fence_capabilities
  • VK_KHR_external_fence_fd
  • VK_KHR_external_semaphore
  • VK_KHR_external_semaphore_capabilities
  • VK_KHR_external_semaphore_fd
  • VK_KHR_get_display_properties2
  • VK_KHR_get_memory_requirements2
  • VK_KHR_get_surface_capabilities2
  • VK_KHR_image_format_list
  • VK_KHR_incremental_present
  • VK_KHR_maintenance2
  • VK_KHR_maintenance3
  • VK_KHR_multiview
  • VK_KHR_relaxed_block_layout
  • VK_KHR_sampler_mirror_clamp_to_edge
  • VK_KHR_storage_buffer_storage_class
  • VK_KHR_uniform_buffer_standard_layout
  • VK_KHR_variable_pointers
  • VK_EXT_custom_border_color
  • VK_EXT_external_memory_dma_buf
  • VK_EXT_index_type_uint8
  • VK_EXT_physical_device_drm

Besides that list of extensions, we have also added basic support for Vulkan subgroups (this is a Vulkan 1.1 feature) and Geometry Shaders (we use this to implement multiview).

I think we now meet most (if not all) of the Vulkan 1.1 mandatory feature requirements, but we still need to check this properly and we also need to start doing Vulkan 1.1 CTS runs and fix test failures. In any case, the bottom line is that Vulkan 1.1 should be fairly close now.

by Iago Toral at August 10, 2021 08:10 AM

August 07, 2021

Enrique Ocaña

Beyond Google Bookmarks

I was a happy user of Del.icio.us for many years until the service closed. Then I moved my links to Google Bookmarks, which offered basically the same functionality (at least for my needs): link storage with title, tags and comments. I’ve carefully tagged and filed more than 2500 links since I started, and I’ve learnt to appreciate the usefulness of searching by tag to find again some precious information that was valuable to me in the past.

Google Bookmarks is a very old and simple service that “just works”. Sometimes it looked as if Google had just forgotten about it and let it run for years without anybody noticing… until now. It’s closing in September 2021.

I didn’t want to lose all my links, still need a link database searchable by tags, and don’t want to be locked in again to a similar service that might close in a few years, so I wrote my own super-simple alternative to it. It’s called bs, sort of “bookmark search”.

Usage couldn’t be simpler: just pass the tag you want to look for and it will print a list of links that have that tag:

$ bs webassembly
  title = Canvas filled three ways: JS, WebAssembly and WebGL | Compile 
    url = https://compile.fi/canvas-filled-three-ways-js-webassembly-and-webgl/ 
   tags = canvas,graphics,html5,wasm,webassembly,webgl 
   date = 2020-02-18 16:48:56 
comment =  
 
  title = Compiling to WebAssembly: It’s Happening! ★ Mozilla Hacks – the Web developer blog 
    url = https://hacks.mozilla.org/2015/12/compiling-to-webassembly-its-happening/ 
   tags = asm.js,asmjs,emscripten,llvm,toolchain,web,webassembly 
   date = 2015-12-18 09:14:35 
comment = 

If you call the tool without parameters, it will prompt for the data to insert a new link, or edit an existing one if the entered URL matches a preexisting entry:

$ bs 
url: https://compile.fi/canvas-filled-three-ways-js-webassembly-and-webgl/ 
title: Canvas filled three ways: JS, WebAssembly and WebGL | Compile 
tags: canvas,graphics,html5,wasm,webassembly,webgl 
comment: 

The data is stored in an SQLite database, and I’ve written some JavaScript snippets to import the Delicious exported bookmarks file and the Google Bookmarks exported bookmarks file. Those snippets are meant to be copy-pasted into the JavaScript console of your browser while you have the exported bookmarks HTML file open in it. They’ll generate SQL sentences that will populate the database for the first time with your preexisting data.
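
The real snippets live alongside the tool; just to give an idea of the approach, a hedged sketch could look like this (the bookmarks table and its columns are assumptions, not the actual schema):

// Rough sketch, run in the console of the open exported-bookmarks HTML file.
// It walks the <a> elements and prints INSERT statements for an assumed
// bookmarks(url, title, tags, date) table; the real schema may differ.
const esc = s => (s || '').replace(/'/g, "''");
for (const a of document.querySelectorAll('a')) {
  const url = esc(a.href);
  const title = esc(a.textContent.trim());
  const tags = esc(a.getAttribute('tags') || '');     // Delicious/Netscape exports use a TAGS attribute
  const date = esc(a.getAttribute('add_date') || ''); // usually a Unix timestamp
  console.log(`INSERT INTO bookmarks (url, title, tags, date) VALUES ('${url}', '${title}', '${tags}', '${date}');`);
}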

For now the tool doesn’t allow deleting bookmarks (I haven’t had the need yet) and I still need to find a way to simplify its usage from the browser, with a bookmarklet to ease adding new bookmarks automatically. But that’s a task for another day. For now it’s enough to know that my bookmarks are safe.

Enjoy!

[UPDATE: 2021-09-08]

I’ve now coded an alternate variant of the database client that can be hosted on any web server with PHP and SQLite3. The bookmarks can now be managed from a browser in a centralized way, in a similar fashion as you could before with Google Bookmarks and Delicious. As you can see in the screenshot, the style resembles Google Bookmarks in some way.

You can easily create a quick search / search engine link in Firefox and Chrome (I use “d” as keyword, a tradition from the Delicious days, so that if I type “d debug” in the browser search bar it will look for that tag in the bookmark search page). Also, the 🔖 button opens a popup that shows a bookmarklet code that you can add to your browser bookmark bar. When you click on that bookmarklet, the edit page prefilled with the current page info is opened, so you can insert or edit a new entry.
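
For reference, a bookmarklet of that kind can be as small as this sketch (the edit page path and query parameters here are placeholders, not the webapp’s real ones):

// Bookmarklet sketch: open the webapp's edit page prefilled with the current
// page's URL and title. Replace the URL with wherever the webapp is hosted.
javascript:(function () {
  const u = encodeURIComponent(location.href);
  const t = encodeURIComponent(document.title);
  window.open('https://example.org/bs/edit.php?url=' + u + '&title=' + t);
})();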

There’s a trick to use the bookmarklet on Android Chrome: Use a rare enough name for the bookmarklet (I used “+ Bookmark 🔖”). Then, when you want to add the current page to the webapp, just start typing “+ book”… in the search bar and the saved bookmarklet link will appear as an autocomplete option. Click on it and that’s it.

Enjoy++!

by eocanha at August 07, 2021 12:29 PM

August 06, 2021

Tiago Vignatti

My Startup Dream

Prototyping

At the beginning of 2016, myself, Tuomas and João drafted out a skateboard business in Brazil. João, blasé about his electrical engineering endeavours, desperately wanted to practice skydiving and live abroad. He was down for whatever it took to accomplish his goals, and so he was alright with the much lower endorphin rush that skateboarding also provided. That was great because, besides being a great and funny friend, he was also a true handyman, and we needed that kind of person in our business.

by Author at August 06, 2021 08:27 PM

August 05, 2021

Chris Lord

OffscreenCanvas update

Hold up, a blog post before a year’s up? I’d best slow down, don’t want to over-strain myself 🙂 So, a year ago, OffscreenCanvas was starting to become usable but was missing some key features, such as asynchronous updates and text-related functions. I’m pleased to say that, at least for Linux, it’s been complete for quite a while now! It’s still going to be a while, I think, before this is a truly usable feature in every browser. Gecko support is still forthcoming, support for non-Linux WebKit is still off by default and I find it can be a little unstable in Chrome… But the potential is huge, and there are now double the number of independent, mostly-complete implementations that prove it’s a workable concept.

Something I find I’m guilty of, and I think that a lot of systems programmers tend to be guilty of, is working on a feature but not using that feature. With that in mind, I’ve been spending some time in the last couple of weeks trying to bring together demos and information on the various features that the WebKit team at Igalia has been working on. To that end, I’ve written a little OffscreenCanvas demo. It should work in any browser, but is a bit pointless if you don’t have OffscreenCanvas, so maybe spin up Chrome or a canary build of Epiphany.

OffscreenCanvas fractal renderer demo, running in GNOME Web Canary

Those of us old-skool computer types probably remember running fractal renderers back on their old home computers, whatever they may have been (PC for me, but I’ve seen similar demos on Amigas, C64s, Amstrad CPCs, etc.) They would take minutes to render a whole screen. Of course, with today’s computing power, they are much faster to render, but they still aren’t cheap by any stretch of the imagination. We’re talking 100s of millions of operations to render a full-HD frame. Running on the CPU on a single thread, this is still something that isn’t really real-time, at least implemented naively in JavaScript. This makes it a nice demonstration of what OffscreenCanvas, and really, Worker threads allow you to do without too much fuss.

The demo, for which you can look at my awful code, splits that rendering into 64 tiles and gives each tile to the first available Worker in a pool of rendering threads (different parts of the fractal are much more expensive to render than others, so it makes sense to use a work queue, rather than just shoot them all off distributed evenly amongst however many Workers you’re using). Toggle one of the animation options (palette cycling looks nice) and you’ll get a frame-rate counter in the top-right, where you can see the impact on performance that adding Workers can have. In Chrome, I can hit 60fps on this 40-core Xeon machine, rendering at 1080p. Just using a single worker, I barely reach 1fps (my frame-rates aren’t quite as good in WebKit, I expect because of some extra copying – there are some low-hanging fruit around OffscreenCanvas/ImageBitmap and serialisation when it comes to optimisation). If you don’t have an OffscreenCanvas-capable browser (or a monster PC), I’ve recorded a little demonstration too.
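
The core OffscreenCanvas pattern behind it is small. A stripped-down sketch (not the demo’s actual code; a single Worker is shown, while the demo farms 64 tiles out to a pool):

// main.js — hand the canvas over so all drawing happens off the main thread.
const canvas = document.querySelector('canvas');
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker('render-worker.js');
worker.postMessage(offscreen, [offscreen]);

// render-worker.js — do the expensive per-pixel work without blocking the page.
onmessage = ({ data: offscreen }) => {
  const ctx = offscreen.getContext('2d');
  const image = ctx.createImageData(offscreen.width, offscreen.height);
  // ...fill image.data with the per-pixel computation (the fractal, in the demo)...
  ctx.putImageData(image, 0, 0);
};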

The important thing in this demo is not so much that we can render fractals fast (this is probably much, much faster to do using WebGL and shaders), but how easy it is to massively speed up a naive implementation with relatively little thought. Google Maps is great, but even on this machine I can get it to occasionally chug and hitch – OffscreenCanvas would allow this to be entirely fluid with no hitches. This becomes even more important on less powerful machines. It’s a neat technology and one I’m pleased to have had the opportunity to work on. I look forward to seeing it used in the wild in the future.

by Chris Lord at August 05, 2021 03:33 PM

August 02, 2021

Philippe Normand

Introducing the GNOME Web Canary flavor

Today I am happy to unveil GNOME Web Canary which aims to provide bleeding edge, most likely very unstable builds of Epiphany, depending on daily builds of the WebKitGTK development version. Read on to know more about this.

Until recently the GNOME Web browser was available for end-users in two flavors. The primary, stable release provides the vanilla experience of the upstream Web browser. It is shipped as part of the GNOME release cycle and in distros. The second flavor, called Tech Preview, is oriented towards early testers of GNOME Web. It is available as a Flatpak, included in the GNOME nightly repo. The builds represent the current state of the GNOME Web master branch; the WebKitGTK version they link to is the one provided by the GNOME nightly runtime.

Tech Preview is great for users testing the latest development of GNOME Web, but what if you want to test features that are not yet shipped in any WebKitGTK version? Or what if you are a GNOME Web developer and you want to implement new features on Web that depend on API that has not been released yet in WebKitGTK?

Historically, the answer was simply “you can build WebKitGTK yourself“. However, this requires some knowledge and a good build machine (or a lot of patience). Even as WebKit developer builds have become easier to produce thanks to the Flatpak SDK we provide, you would still need to somehow make Epiphany detect your local build of WebKit. Other browsers offer nightly or “Canary” builds which don’t have such requirements. This is exactly what Epiphany Canary aims to do! Without building WebKit yourself!

A brief interlude about the term: Canary typically refers to highly unstable builds of a project, they are named after Sentinel species. Canary birds were taken into mines to warn coal miners of carbon monoxide presence. For instance Chrome has been providing Canary builds of its browser for a long time. These builds are useful because they allow early testing, by end-users. Hence potentially early detection of bugs that might not have been detected by the usual automated test harness that buildbots and CI systems run.

To similar ends, a new build profile and icon were added in Epiphany, along with a new Flatpak manifest. Everything is now nicely integrated in the Epiphany project CI. WebKit builds are already done for every upstream commit using the WebKit Buildbot. As those builds are made with the WebKit Flatpak SDK, they can be reused elsewhere (x86_64 is the only arch supported for now) as long as the WebKit Flatpak platform runtime is being used as well. Build artifacts are saved, compressed, and uploaded to a web server kindly hosted and provided by Igalia. The GNOME Web CI now has a new job, called canary, that generates a build manifest that installs WebKitGTK build artifacts in the build sandbox, that can be detected during the Epiphany Flatpak build. The resulting Flatpak bundle can be downloaded and locally installed. The runtime environment is the one provided by the WebKit SDK though, so not exactly the same as the one provided by GNOME Nightly.

Back to the two main use-cases, and who would want to use this:

  • You are a GNOME Web developer looking for CI coverage of some shiny new WebKitGTK API you want to use from GNOME Web. Every new merge request on the GNOME Web Gitlab repo now produces installable Canary bundles, that can be used to test the code changes being submitted for review. This bundle is not automatically updated though, it’s good only for one-off testing.
  • You are an early tester of GNOME Web, looking for bleeding edge version of both GNOME Web and WebKitGTK. You can install Canary using the provided Flatpakref. Every commit on the GNOME Web master branch produces an update of Canary, that users can get through the usual flatpak update or through their flatpak-enabled app-store.

Update:

Due to an issue in the Flatpakref file, the WebKit SDK flatpak remote is not automatically added during the installation of GNOME Web Canary. So it needs to be manually added before attempting to install the flatpakref:

$ flatpak --user remote-add --if-not-exists webkit https://software.igalia.com/flatpak-refs/webkit-sdk.flatpakrepo
$ flatpak --user install https://nightly.gnome.org/repo/appstream/org.gnome.Epiphany.Canary.flatpakref

As you can see in the screenshot below, the GNOME Web branding is clearly modified compared to the other flavors of the application. The updated logo, kindly provided by Tobias Bernard, has some yellow tones and the Tech Preview stripes. Also the careful reader will notice the reported WebKitGTK version in the screenshot is a development build of SVN revision r280382. Users are strongly advised to add this information to bug reports.

As WebKit developers we are always interested in getting users’ feedback. I hope this new flavor of GNOME Web will be useful for both GNOME and WebKitGTK communities. Many thanks to Igalia for sponsoring WebKitGTK build artifacts hosting and some of the work time I spent on this side project. Also thanks to Michael Catanzaro, Alexander Mikhaylenko and Jordan Petridis for the reviews in Gitlab.

by Philippe Normand at August 02, 2021 05:15 PM

July 22, 2021

Mario Sanchez Prada

Igalia and the Chromium project

A couple of months ago I had the pleasure of speaking at the 43rd International Conference on Software Engineering (aka ICSE 2021), in the context of its “Spanish Industry Case Studies” track. We were invited to give a high level overview of the Chromium project and how Igalia contributes to it upstream.

This was an unusual chance to speak at a forum other than the usual conferences I attend, so I welcomed this as a double opportunity to explain the project to people less familiar with Chromium than those attending events such as BlinkOn or the Web Engines Hackfest, as well as to spread some awareness of our work there.

Contributing to Chromium is something we’ve been doing for quite a few years already, but I think it’s fair to say that in the past 2-3 years we have intensified our contributions to the project even more and diversified the areas that we contribute to, something I’ve tried to reflect in this talk in no more than 25 minutes (quite a challenge!). Actually, it’s precisely because of this amount of contributions that we’re currently the 2nd biggest non-Google contributor to the project in number of commits, and among the Top 5 contributors by team size (see a highlight on this from BlinkOn 14’s keynote). For a small consultancy company such as ours, it’s certainly something to feel proud of.

With all this in mind, I organized the talk into 2 main parts: First a general introduction to the Chromium project and then a summary of the main upstream work that we at Igalia have contributed recently to it. I focused on the past year and a half, since that seemed like a good balance that allowed me to highlight the most important bits without adding too much  information. And from what I can tell based on the feedback received so far, it seems the end result has been helpful and useful for some people without prior knowledge to understand things such as the differences between Chromium and Chrome, what ChromiumOS is and how our work on several different fronts (e.g. CSS, Accessibility, Ozone/X11/Wayland, MathML, Interoperability…) fits into the picture.

Obviously, the more technically inclined you are, and the more you know about the project, the more you’ll understand the different bits of information condensed into this talk, but my main point here is that you shouldn’t need any of that to be able to follow it, or at least that was my intention (but please let me know in the comments if you have any feedback). Here you have it:

You can watch the talk online (24:05 min) on our YouTube channel, as well as grab the original slide deck as a PDF in case you also want it for references, or to check the many links I included with pointers for further information and also for reference to the different sources used.

Last, I don’t want to finish this post without thanking the organizers once again for the invitation and for running the event, and in particular Andrés-Leonardo Martínez-Ortiz and Javier Provecho for taking care of the specific details involved with the “Spanish Industry Case Studies” track.

Thank you all

by mario at July 22, 2021 02:16 PM

July 21, 2021

Ricardo García

Debugging shaders in Vulkan using printf

Debugging programs using printf statements is not a technique that everybody appreciates. However, it can be quite useful and sometimes necessary depending on the situation. My past work on air traffic control software involved using several forms of printf debugging many times. The distributed and time-sensitive nature of the system being studied made it inconvenient or simply impossible to reproduce some issues and situations if one of the processes was stalled while it was being debugged.

In the context of Vulkan and graphics in general, printf debugging can be useful to see what shader programs are doing, but some people may not be aware it’s possible to “print” values from shaders. In Vulkan, shader programs are normally created in a high level language like GLSL or HLSL and then compiled to SPIR-V, which is then passed down to the driver and compiled to the GPU’s native instruction set. That final binary, many times outside the control of user applications, runs in a quite closed and highly parallel environment without many options to observe what’s happening and without text input and output facilities. Fortunately, tools like glslang can generate some debug information when compiling shaders to SPIR-V and other tools like Nsight can use that information to let you debug shaders being run.

Still, being able to print the values of different expressions inside a shader can be an easy way to debug issues. With the arrival of Ray Tracing, this is even more useful than before. In ray tracing pipelines, the shaders being executed and resources being used are chosen based on the scene geometry, the origin and the direction of the ray being traced. printf debugging can let you see where you are and what you’re using. So how do you print values from shaders?

Vulkan’s debug printf is implemented as part of the Validation Layers and the general procedure is well documented. If you were to implement this kind of mechanism yourself, you’d likely use a storage buffer to save the different values you want to print while shader invocations are running and, later, you’d go over the contents of that buffer and print the associated message with each value or values. And that is, essentially, what debug printf does but in a very convenient and automated way so that you don’t have to deal with the gory details and corner cases.

In a GLSL shader, simply:

  1. Enable the GL_EXT_debug_printf extension.

  2. Sprinkle your code with debugPrintfEXT() calls.

  3. Use the Vulkan Configurator that’s part of the SDK or manually edit vk_layer_settings.txt for your app enabling VK_VALIDATION_FEATURE_ENABLE_DEBUG_PRINTF_EXT.

  4. Normally, disable other validation features so as not to get too much output.

  5. Take a look at the debug report or debug utils info messages containing printf results, or set printf_to_stdout to true so printf messages are sent to stdout directly.

You can find an example shader in the validation layers test code. The debug printf feature has helped me a lot in the past, so I wanted to make sure it’s widely known and used.

Due to the observer effect, you may end up in situations where your code works correctly when enabling debug printf but incorrectly without it. This may be due to multiple reasons but one of the main ones I’ve encountered is improper synchronization. When debug printf is used, the layers use additional synchronization primitives to sync the contents of auxiliary buffers, which can mask synchronization bugs present in the app.

Finally, RenderDoc 1.14, released at the end of May, also supports Vulkan’s shader printf statements and will let you take a look at the print statements produced during a draw call. Furthermore, the print statements don’t have to be present in the original shader. You can also use the shader edit system to insert them on the fly and use them to debug the results of a particular shader invocation. Isn’t that awesome? Great work by Baldur Karlsson as always.

PS: As a happy coincidence, just yesterday LunarG published a white paper on Vulkan’s debug printf with additional information on this excellent feature. Be sure to check it out!

July 21, 2021 06:42 AM

July 20, 2021

Oriol Brufau

Improved raster transforms with non-uniform scales

Introduction

CSS Transforms Level 1 introduced 2D transforms, that can be specified using the transform property. For example, they can be used to rotate or scale an element:

  • transform: none
  • transform: rotate(45deg)
  • transform: scale(1, 0.5)

CSS Transforms Level 2 extends that feature to allow transforms in 3D space, for example:

  • transform: rotate3d(1, 1, 1, 45deg)
  • transform: scale3d(1, 0.5, 2)

Typically, using 3D transforms forces the element into its own rendering layer. This is sometimes desired by authors, since it can improve performance if for example the element is moving around.

Therefore, identity transformations in the Z axis, like scale3d(X, Y, 1) instead of scale(X, Y), are sometimes used to opt in to this behavior. This trick works on Chromium, but note it’s not compliant with the spec.

Problems

Forcing an element to be rasterized in its own layer can have some disadvantages.

For example, Chromium used to rasterize it using a single float scale. When the transform had different X and Y scale components, Chromium just picked the bigger one, clamped by 5 times the smaller one (to avoid memory problems). And then it used this raster scale for both axes, producing suboptimal results.

Also, Chromium only uses LCD text antialiasing when the internal raster scale matches the actual X and Y scales in the transform. Therefore, non-uniform scales prevented the nicer LCD antialiasing.

And unrelated to uniform scales, if the transformed element doesn’t have an opaque background, LCD antialiasing is not used either, since Chromium needs to know the color behind the text.

The last problem remains unsolved, but I fixed the other two in Chromium 92, which has been released today.

Thanks to Bloomberg for sponsoring Igalia to do it!

Solution

The main patch that addressed both problems was https://crrev.com/872117. But LCD text antialiasing was still not used because I made a mistake 😳, which I fixed in https://crrev.com/872974

Basically, it was a matter of changing AxisTransform2d and PictureLayerImpl to store a 2D scale rather than a single float. I used gfx::Vector2dF, which is like a pair of floats with some nice methods to clamp by a minimum or maximum, scale both floats by the same factor, etc.

I kept most tiling logic as it was, just taking the maximum component of the gfx::Vector2dF as the “scale key”. However, different 2D scales can have the same key, for example by dynamically changing scale3d(1, 5, 1) into scale3d(5, 1, 1), both with a scale key of 5. Therefore, when finding whether there already was a tiling with the desired scale key, I made sure to check the 2D scales, and recreate the tiling if they were different.

This is an example of how it looked in Chromium:

This is how it looked when internally using 2D scales:

And finally, with LCD text antialiasing:

For reference, this is how your browser renders it (live example):

Lorem ipsum

Comparing the 1st and 2nd images, using 2D scales clearly improved the text, which was hard to read due to missing some thin parts of the glyphs, and also note the border diagonals in the corners look less jagged.

At first glance it may be hard to notice the difference between the 2nd and 3rd images, so you can compare the text antialiasing in these magnified images:

At the top, the edges of the glyphs simply use grayscale antialiasing, while at the bottom, the LCD antialiasing uses some colored pixels.

Limitations

While my patch improved the common basic cases, Chromium will still fall back to a 1D scale in these cases:

  • Directly composited images
  • Animations
  • Heads up layers
  • Scrollbar layers
  • Mirror layers
  • When the layer has perspective

Some of them may be addressed in the future, this is tracked in bug 1196414.

For example, this live example uses a CSS animation so it still looks wrong in Chromium 92:

Lorem ipsum

I actually started a patch to address animations, and it seemed to work well in simple cases, but it could be wrong when ancestors had additional transforms. Handling that properly would have required more complexity, and it wasn’t clear that it was worth it.

Therefore, I don’t plan to continue working on these edge cases, but if you are affected by them, you can star bug 1196414 and provide a good testcase. This may help increase the priority of the bug!

by Oriol Brufau at July 20, 2021 09:30 PM

July 15, 2021

Ricardo García

Linking deqp-vk much faster thanks to lld

Some days ago my Igalia colleague Adrián Pérez pointed us to mold, a new drop-in replacement for existing Unix linkers created by the original author of LLVM lld. While mold is pretty new and does not aim to be 100% compatible with GNU ld, GNU gold or LLVM lld (at least as of the time I’m writing this), I noticed the benchmark table in its README file also painted a pretty picture about the performance of lld, if inferior to that of mold.

In my job at Igalia I work most of the time on VK-GL-CTS, Vulkan and OpenGL’s Conformance Test Suite, which contains thousands of tests for OpenGL and Vulkan. These tests are provided by different executable files and the Vulkan tests on which I’m focused are contained in a binary called deqp-vk. When built with debug information, deqp-vk can be quite large. A recent build, for example, is taking 369 MB in my drive. But the worst part is that linking the binary typically takes around 25 seconds on my work laptop.

$ time cmakebuild.sh --target deqp-vk
  [6/6] Linking CXX executable external/vulkancts/modules/vulkan/deqp-vk

  real    0m25.137s
  user    0m22.280s
  sys     0m3.440s

I had never paid much attention to the linker before, always relying on the default choice in Fedora or any other distribution. However, I decided to install lld, which has an official package, and gave it a try. You Will Not Believe What Happened Next.

$ time cmakebuild.sh --target deqp-vk
  [6/6] Linking CXX executable external/vulkancts/modules/vulkan/deqp-vk

  real    0m2.622s
  user    0m5.456s
  sys     0m1.764s

lld is capable of correctly linking deqp-vk in 1/10th of the time the default linker (GNU ld) takes to do the same job. If you want to try lld yourself you have several options. Ideally, you’d be able to run update-alternatives --set ld /usr/bin/lld as root but that option is notably not available in Fedora. There was a proposal to make that work but it never materialized, so it cannot be made the default system-wide linker.

However, depending on the build system used by a particular project, there should be a way to make it use lld instead of /usr/bin/ld. For example, VK-GL-CTS uses CMake, which invokes the compiler to link executable files, instead of calling the linker directly, which would be unusual. Both GCC and Clang can be passed -fuse-ld=lld as a command line option to use lld instead of the default linker. That flag should be added to CMake’s CMAKE_EXE_LINKER_FLAGS variable, either by reconfiguring an existing project with, for example, ccmake, or by adding the flag to the LDFLAGS environment variable before running CMake on a build directory for the first time.

Looking forward to start using the mold linker in the future and its multithreading capabilities. In the meantime, I’m very happy to have checked out lld. It’s not that usual that a simple tooling change such as this one gives me such a clear advantage.

July 15, 2021 04:33 PM

July 09, 2021

Víctor Jáquez

Video decoding in GStreamer with Vulkan

Warning: Vulkan video is still work in progress, from specification to available drivers and applications. Do not use it for production software just yet.

Introduction

Vulkan is a cross-platform Application Programming Interface (API), backed by the Khronos Group, aimed at graphics developers for a wide range of different tasks. The interface is described by a common specification, and it is implemented by different drivers, usually provided by GPU vendors and Mesa.

One way to visualize Vulkan, at first glance, is like a low-level OpenGL API, but better described and easier to extend. Even more, it is possible to implement OpenGL on top of Vulkan. And, as far as I am told by my peers in Igalia, Vulkan drivers are easier and cleaner to implement than OpenGL ones.

A couple of years ago, a technical specification group (TSG) inside the Vulkan Working Group proposed the integration of hardware accelerated video compression and decompression into the Vulkan API. In April 2021 the newly formed Vulkan Video TSG published an introduction to the specification. Please, do not hesitate to read it. It’s quite good.

Matthew Waters worked on a GStreamer plugin using Vulkan, mainly for uploading, composing and rendering frames. Later, he developed a library mapping Vulkan objects to GStreamer. This work was key for what I am presenting here. In 2019, during the last GStreamer Conference, Matthew delivered a talk about his work. Make sure to watch it, it’s worth it.

Other key components for this effort were the base classes for decoders and the bitstream parsing libraries in GStreamer, jointly developed by Intel, Centricular, Collabora and Igalia. Both libraries allow using APIs for stateless video decoding and encoding within the GStreamer framework, such as Vulkan Video, VAAPI, D3D11, and so on.

When the graphics team in Igalia told us about the Vulkan Video TSG, we decided to explore the specification. Therefore, Igalia decided to sponsor part of my time to craft a GStreamer element to decode H.264 streams using these new Vulkan extensions.

Assumptions

As stated at the beginning of this text, this development has to be considered unstable and the APIs may change without further notice.

Right now, the only Vulkan driver that offers these extensions is the beta NVIDIA driver. You would need, at least, version 455.50.12 for Linux, but it would be better to grab the latest one. And, of course, I only tested this on Linux. I would like to thank NVIDIA for their Vk Video samples. Their test application drove my work.

Finally, this work assumes the use of the main development branch of GStreamer, because the base classes for decoders are quite recent. Naturally, you can use gst-build for an efficient upstream workflow.

Work done

This work basically consists of two new objects inside the GstVulkan code:

  • GstVulkanDeviceDecoder: a GStreamer object in GstVulkan library, inherited from GstVulkanDevice, which enables VK_KHR_video_queue and VK_KHR_video_decode_queue extensions. Its purpose is to handle codec-agnostic operations.
  • vulkanh264dec: a GStreamer element, inherited from GstH264Decoder, which tries to instantiate a GstVulkanDeviceDecoder to composite it and is in charge of handling codec-specific operations later, such as matching the parsed structures. It outputs, in the source pad, memory:VulkanImage featured frames, with NV12 color format.

    So far, this pipeline works without errors:

    $ gst-launch-1.0 filesrc location=big_buck_bunny_1080p_h264.mov ! parsebin ! vulkanh264dec ! fakesink
    

    As you might see, the pipeline does not use vulkansink to render frames. This is because the Vulkan format output by the driver’s decoder device is VK_FORMAT_G8_B8R8_2PLANE_420_UNORM, which is NV12 crammed in a single image, while for GstVulkan a NV12 frame is a buffer with two images, one per component. So the current color conversion in GstVulkan does not support this Vulkan format. That is future work, among other things.

    You can find the merge request for this work in GStreamer’s Gitlab.

    Future work

    As was mentioned before, it is required to fully support VK_FORMAT_G8_B8R8_2PLANE_420_UNORM format in GstVulkan. That requires thinking about how to keep backwards compatibility. Later, an implementation of the sampler to convert this format to RGB will be needed, so that decoded frames can be rendered by vulkansink.

    Also, before implementing any new feature, the code and its abstractions will need to be cleaned up, since currently the division between codec-specific and codec-agnostic code is not strict, and it must be fixed.

    Another important cleanup task is to enhance the way the Vulkan headers are handled. Since the required header files for the video extensions are beta, they are not expected to be available in the system, so temporarily I had to add those headers as part of the GstVulkan library.

    Then it will be possible to implement the H.265 decoder, since the NVIDIA driver also supports it.

    Later on, it will be nice to start thinking about encoders. But this requires extending support for stateless encoders in GStreamer, something I want to do for the new VAAPI plugin too.

    Thanks for bearing with me, and thanks to Igalia for sponsoring this work.

    by vjaquez at July 09, 2021 05:38 PM

    July 06, 2021

    Igalia Compilers Team

    JS Nation Talk: “How to Outsmart Time: Building Futuristic JavaScript Apps Using Temporal”

    Recently Compilers Team member Ujjwal Sharma gave a talk at the JS Nation 2021 conference about the Temporal proposal. Check out the recording here:

    https://portal.gitnation.org/contents/how-to-outsmart-time-building-futuristic-javascript-apps-using-temporal

    The talk goes over how to use Temporal’s new date & time API in real world programming, and also how Temporal interacts with other APIs such as JS Intl.
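
    If you haven’t looked at Temporal yet, here is a tiny taste of the API shape as proposed at the time (still a proposal, so names and details may change before it ships):

    // A few Temporal basics (proposal-stage API, subject to change).
    const today = Temporal.Now.plainDateISO();                    // e.g. 2021-07-06
    const talk = Temporal.ZonedDateTime.from('2021-07-06T18:00[Europe/Madrid]');
    const inTokyo = talk.withTimeZone('Asia/Tokyo');              // same instant, another zone
    const nextWeek = today.add({ days: 7 });
    console.log(today.until(nextWeek).toString());                // "P7D"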

    We’ve written about Temporal previously on this blog and our other teammates have also written about how Temporal might be useful for the GNOME desktop.

    If you’re interested in an audio-format deep-dive about Temporal, also check out the Igalia Chats episode on the topic.

    by Compilers Team at July 06, 2021 06:09 PM

    July 04, 2021

    Brian Kardell

    Tabs in HTML?

    Tabs in HTML?

    Please help us evaluate an idea!

    I've fallen a bit behind on my podcast consumption (thanks to being very busy), but I was recently catching up during a drive in the car and was pleased to hear Dave Rupert and Chris Coyier having some good discussion on the Shop Talk Show episode 466 (time jumped to the start) about some work we've been doing and that we're looking for more opinions/thoughts on. Dave did a bang-up job explaining I think, but just trying to imagine it all might be a little difficult, so I wanted to share something a little more concrete, provide some context around it, and to re-iterate that "we'd like (need) your help in evaluating this idea ".

    Show me something...

    Imagine that you could have something that worked .... kind of like this (insert giant hand wavey motions): You write some good old HTML that looks just like HTML you write today, but are able to express that you want to sometimes treat it with different interaction affordances - just like the way scroll panes work on the Web today...

    A running demo showing sections that gain tabset or collapse affordances as the screen resizes.

    You should be able to load the demo in this video yourself in any browser...

    Show me the demo

    If you view the source of that page, you'll see there is just a single element that we have cleverly named <spicy-sections>.

    Please check it out: It's very easy to try. Note that the actual syntax or way to express the association with interactive affordances is entirely up in the air. This custom element isn't a proposal itself. There are many efforts happening in parallel discussing precisely how this should (and shouldn't or can and can't) work, but the custom element should help you explore the ideas.

    Play with it. Build something useful. Ask questions, show us your uses, give us feedback about what you like or don't. This will help us shape good and successful approaches and inform actual proposals. Question #1 is on the crux of the idea itself.

    If the answer to "how do I get a native tabset in the browser?" involved using 0...1 'new' elements (perhaps one to identify content which could fit these models) and some CSS (or CSS-like) means of expressing 'when' - would you call that a win? Do you get it? Do you love it? Do you hate it? What are your questions?

    A little background

    There are lots of efforts around adding new elements to HTML - many of which are being coordinated through an effort in WICG called "Open UI". This effort involves browser implementers, developers, UI Toolkit makers, etc, and I think that's great. I really want the web to have a better set of tools for making basic UI and I hope that Open UI can show us a better way forward than we've done in the past.

    To me, this means involving more people. It means meeting developers closer to where they are and giving them something useful to evaluate, to tighten the feedback loop and make sure we're able to course correct. But it also means that we should be producing things out in the open along the way that allow anyone to see how we even got here. Proposals shouldn't lack a back-story or explanation, or seem like they appeared out of nowhere. There shouldn't be any things "you just have to trust us on".

    So, that's what we're trying to do: A group of us have been looking at 'tabs' - and there is a reason we're asking. Chances are, I think, this won't match your first ideas about how the browser would get native support for tabs. It didn't match my own, in fact. I've built and used a lot of tabs over the last 20+ years, and never thought of it this way either. So, I wanted to add some context and explanation for the curious...

    Step 1: Define Tabs?

    I realize this sounds almost silly. In fact, when I asked the question "can we define 'tabs'" a friend replied to me...

    "You know what tabs are, Brian".

    I mean... You use them every day, on every OS. Everybody knows they exist in every toolbox. All that's left is to "just pave the cowpaths!" But when you get right down to it, it's a lot more complicated than that.

    Different "tabset" implementations have different features and different limits. All of them have changed and evolved over time. Today, most UI toolboxes have multiple APIs for creating things that users would just call "tabs" - and there are reasons for that. Sometimes these APIs are even the basis of things a user might not call tabs! In many cases, the APIs themselves aren't called "tabs" (or anything close to that). So, if we want to discuss these things (any components, really), we need to begin with a survey and lay down some clear definitions and goals. We laid this all out in this research identifying the parts and features in the landscape.

    An interesting distinction

    One thing which falls out of this is an interesting distinction between 2 broad "kinds of tabset-like controls":

    • One kind is actually a window manager. The tabs at the top of the browser you're looking at right now are windows that happen to be arranged visually as groups that look like tabs.
    • Another kind manages exclusive display and focus management patterns of what are, effectively, sections within the same document that happen to look like tabs.

    As end-users we probably don't think about this much, but it is a pretty important distinction actually and there are differences we understand commonly. In fact, so many expectations about both the shape of the UI and user interactions flow from this. Text searching happens in a window, not across windows, for example. Windows can be "dragged out" and displayed as, well, windows. The keyboard interactions and accessibility roles of windows are expected to be... windows. And so on.

    For our purposes, we have chosen to focus primarily on the latter kind, which is prevalent in UI kits for the Web.

    Markup and APIs

    Again, it feels like it shouldn't be difficult to pave these paths into markup. However, even in systems without markup, the shape of APIs varies pretty considerably. Translating this to markup adds a new set of challenges. There isn't a clear/self evident mapping to DOM/markup at all - and whatever decisions we make somehow dictate something about how APIs should work on the web.

    To illustrate this, we have also collected a bunch of research showing all sorts of variants over the years and various dissected pros and cons of each.

    Some things we thought were important...

    We thought it was worth thinking about progressive enhancement. Support for new features generally rolls out unevenly (we only got the last support for summary/details about a year ago, for example). This means that for potentially a long time, some browsers may not have support for native tabs.

    But that's only part of the story: When you consider "other browsers" - things like embedded devices which update more slowly still, or search engines or reader modes... What happens to content?

    Further, we would like to test out any theory with a custom element (as above), and the script can fail to download. In fact, these things seem to dovetail nicely. We'd like the content to be "good" even if script for some reason doesn't execute in that case.

    This fed into roads we didn't go down..

    Attributes?

    One popular approach involves putting the tab's "label" as an attribute. That has a lot of really nice qualities, but it falls down on a number of other points. Without support (including all of those cases above), users would be left with information loss and a wall of run-on and unlabelled content. This also has pretty extreme limitations on what can be put into a tab label. No additional elements means no ruby text, for example, or interesting icon treatments or stylistic markup or structured text of any kind. So, we suggest that attributes for labels are not our first choice.

    TOC Style?

    Another very popular set of solutions involves "table of contents (TOC) style" markup. These draw a parallel which equates tab labels to items in a table of contents. In fact, over the years this sort of pattern has been popular with progressive enhancement enthusiasts because you can use a list of links. The markup itself even "looks like tabs" already. It's probably surprising then that this isn't our first choice - why?

    The short answer is that there are some fundamental flaws in this analogy which have impacts. Tables of contents are an enhancement of headings, not a replacement for them. That is, they are generally built based on headings that label content and simply repeated/reflected earlier. Without these already in place, the un-enhanced content is just a wall of unbroken text without labels. It's almost exactly the same issue as attribute style. While it's possible to repeat the headings too, why repeat yourself? If you have the heading, you can build the TOC, but not vice-versa.

    Similarly, the argument that "the markup looks like tabs already" is less than perfect. As our research shows, tabset labels can exist along any axis, or even around the circumference of a circle. If you look at it just slightly differently, they can perhaps even be interleaved in the content ('responsive tabs' do this, and at one point ARIA's "single select accordions" were also tabs).

    So, we suggest TOC-style isn't our first choice either.

    A tabset element?

    All of this helps shine light on some interesting questions and led to my post Design Affordance Controls, which talks about why the way we approach this matters. It holds up flaws in the inherent control-specific-ness of <summary> and <details> and contrasts this with the fact that we don't have a <scrollpane> element - but rather employ those affordances when they make sense.

    If a thing's fundamental, primary "nature" is just "good sections", don't we often want to change our minds on presentation based on things like media or design? It's worth considering.

    So, it's not that we shouldn't have an element specifically for tabs as much as "isn't this maybe more useful"?

    What do you think?

    Please let us know what you think about the ideas here! If you could provide "just good content" and have pretty much full stylistic control, and use CSS to express what kind of "showey/hidey" control it should be/when... How would you feel about that? Do you "get it"? Do you "buy the arguments"? Or are we just barking up the wrong tree? Your feedback will help inform how we discuss or advocate for things next in OpenUI to move things forward.

Mentions do not imply endorsements, but many thanks to folks who proofread this post, met along the way, helped research, had discussions, and did work. Very special thanks to @jon_neal, @TerribleMia and @davatron5000 for many thoughtful discussions.

    July 04, 2021 04:00 AM

    July 01, 2021

    Felipe Erias

    Towards richer colors on the Web

    Preface

    This blog post is based on my talk at the BlinkOn 14 conference (May 2021). You can watch the talk here:

    Towards richer colors in Blink (BlinkOn 14)

    And the slides are available here: Towards richer colors in Blink - slides.

    This article will talk about the ongoing efforts to specify richer colors on the Web platform, plus some ideas about directions for future development on Blink/Chromium.

    Introduction

    The study of color brings together ideas from physics (how light works), biology (how our eyes see), computing, and more. There is a long and rich history following the desire to be able to use richer materials and colors when creating visual art, and the same is true of the Web today.

    A color space is a way to describe and organize colors so they can be identified and reproduced with accuracy. Some color spaces are more or less arbitrary (e.g. the Pantone collection) but the ones that we will focus on are based on detailed mathematical descriptions.

    These color spaces consist of a mathematical color model that specifies how colors are described (i.e. as tuples of numbers) and a precise description of how those components are to be interpreted.

    The range of colors that a hardware display is able to show is called its gamut. When we want to show an image that uses a larger color space than this gamut, its colors will have to be mapped to the ones that can be actually displayed: this process is called gamut mapping.

    Essentially, the colors in the original image are “squeezed” so they can be displayed by the device. This process can be rather complex, because we want the image being displayed to preserve as much of the intent of the original as possible.

    When we talk about software, we say that an application is color managed when it is aware of the different color spaces used by its source media and is able to use that information when deciding how that media should be displayed on the screen.

Traditionally, the Web has been built on top of the sRGB color space (created in 1996), which describes colors with an RGB color model (red, green and blue) plus a non-linear transfer function linking the numerical value of each component to the intensity of the corresponding primary color.
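
To give an idea of what that transfer function looks like, here is a small sketch (for reference only, not code from any browser) of the standard sRGB decoding step, which maps an encoded component in the 0–1 range to linear-light intensity:

#include <cmath>

// Decode an sRGB-encoded component (0..1) into linear-light intensity.
// Small values use a linear segment; the rest follow a power curve.
double srgb_to_linear(double c) {
    return (c <= 0.04045) ? c / 12.92
                          : std::pow((c + 0.055) / 1.055, 2.4);
}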

    There are many other color spaces. The graph below represents the chromaticity of the CIE XYZ color space, which was specifically designed to cover all colors that an average human can see.

    CIE

    Source: WikiMedia

    From that large map of colors within human perception, we can identify those that fall within the sRGB color space.

    CIE_sRGB

    Source: WikiMedia

As you can see, there are many colors that we can perceive but that cannot be described by sRGB!

    Learn more: Color: From Hexcodes to Eyeballs (Jamie Wong)

    (Note: these graphs are a useful tool to visualize and compare different gamuts but sometimes can be a bit confusing, because they use colors that we can obviously see but then tell us that some of the colors represented by them are outside the gamut that our device can display.)

    Colors on the Web

    The sRGB color space gained popularity because it was well suited to be displayed by the CRT monitors that were common at the time. CSS includes plenty of functions and shortcuts to define colors in the sRGB space, for example:

    #40E0D0
    rgb(218, 112, 214)
    PeachPuff
    rgba(211, 65, 0, .8)
    hsl(177, 70%, 41%)
    LightSkyBlue

    See also: Color CSS data type

As technology has improved over time, many devices nowadays are able to display colors that go beyond the sRGB color space. On the Web platform there is increasing interest in adding support for wider color gamuts to different elements.

    Learn more: Unlocking Colors (Brian Kardell), LCH colors in CSS: what, why, and how? (Lea Verou)

Several JavaScript libraries already provide a lot of functionality for manipulating colors (although they are constrained by what the browser can actually display).

    See: Color JS, D3 d3-interpolate, chroma JS.

    The major Web browsers offer different levels of support for color management and access to wider gamuts.

    This article will focus specifically on adding support on Blink and Chromium for richer colors in elements defined in HTML and CSS.

    CSS Color

The reference specification for richer colors on the Web is the CSS Color Module developed by the CSS Working Group. CSS Color Module 4 describes most of the changes discussed here and CSS Color Module 5 will bring additional functionality.

There is also a Color on the Web community group at the W3C that, among other things, organises a workshop on wide color gamut for the Web.

    In 2020 there was also a very interesting discussion at the W3C’s Technical Architecture Group about how having colors outside of the sRGB gamut opened up questions about interoperability between the different elements of the platform, as well as interesting observations around how to support calculations for improved color contrast and accessibility.

(Note: this list is not meant to be exhaustive, and it intentionally leaves aside the many groups working on standards beyond CSS and beyond the Web in general.)

    The CSS Color spec, among other things:

    • extends the color() function to let the author explicitly indicate the desired color space of a color, including those with a wide gamut;
    • defines the lab() and lch() functions to specify colors in the CIE LAB colorspace;
    • provides detailed control over how interpolation happens, as well as many other features;
    • contains a reference implementation for the operations described in it.
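
For example, with these functions an author could write colors such as the following (illustrative values; exact support depends on the spec level and the browser):

color(display-p3 1 0.2 0)
lab(52% 40 59)
lch(52% 72 50)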

    So why is this a big deal?

    Display more colors

    First, using only sRGB limits the range of colors that can be displayed. Many modern monitors have a wider gamut than sRGB, often close to another standard called Display-P3.

    Here you can see both of those spaces over the same graph that we saw before:

    srgb p3

    The Display-P3 space is about one third larger than sRGB. This means that from CSS we have no access to roughly one third of the colors that modern monitors can display.

    Learn more: Why DCI-P3 is the New Standard of Color Gamut?

    See also: Wide-gamut color on the web

    This is another way of visualizing the same idea, where the white line in each case represents the boundary between what can be described by sRGB and what is within Display-P3.

    srgb p3 outline

    As you can see, the colors that fall within the Display-P3 space but outside of sRGB are the most intense and vivid.

    Learn more: Wide Gamut Color in CSS with Display-P3 (WebKit)

When a Web browser is not able to display a color because of hardware and/or software limitations, it will instead use the closest color that it can display.

    Let’s see an example of this. The image on the left below is a uniform red square in the sRGB gamut. The image on the right is slightly different, as it actually uses two different shades of red: one that is within the sRGB gamut and another that is outside of it. On sRGB displays, both colors are painted the same and the result is a uniform red square, just like the first image. However, on a system that can display wide-gamut colors, both shades of red will be painted differently and you will be able to see a faint WebKit logo inside the square.

    sRGB color examples wide-gamut color examples

    Source and more information: Comparison between normal and wide-gamut images (WebKit).

Furthermore, there are color spaces that are even larger than Display-P3; for now, they are mostly reserved for professional equipment and applications, but it is likely that some of them will become popular in their turn at some point in the future.

Adding wider color spaces to the Web is as much about supporting what widely available hardware can do today as it is about setting us on the path to support what it will do in the future.

    Consistent and predictable colors

    Secondly, another limitation of sRGB on the Web is that it is not perceptually uniform: the same numeric amount of change in a value does not cause similar changes in the colors that we perceive.

We can see this clearly with HSL, which is an alternate way to express the same sRGB colors in terms of hue, saturation, and lightness. Let’s see some examples.

    Here 20 degrees in hue are the difference between orange and yellow:

    HSL(30, 100%, 50%)
    HSL(50, 100%, 50%)

    Whereas here that same step produces very similar blues:

    HSL(230, 100%, 50%)
    HSL(250, 100%, 50%)

    Changing the lightness value may also change the saturation that we perceive (even when its numerical value stays the same).

    HSL(0, 90%, 40%)
    HSL(0, 90%, 80%)

    And colors with the same saturation and lightness values can be perceived very differently because of their hue:

    HSL(250, 100%, 50%)
    HSL(60, 100%, 50%)

    Learn more: Color spaces for human beings

This means that in general sRGB (and by extension HSL) cannot be used to accurately adjust lightness, saturation or hue, to find complementary colors, to calculate the perceived contrast between two colors, etc.

One of the new capabilities in the CSS Color spec is being able to use color spaces where the same numerical change in one of the values brings a similar perceived change, like the LCH color space.

    LCH is based on the CIE LAB color space and defines colors according to their Lightness, Chroma, and Hue.

    In the LCH color space, the same numerical changes in a value bring about similar and predictable changes in the colors that we perceive without affecting the other characteristics.

    Changes in lightness:

    LCH(45% 60 60)
    LCH(60% 60 60)
    LCH(75% 60 60)

    Changes in chroma (or “amount of color”):

    LCH(50% 10 319)
    LCH(50% 60 319)
    LCH(50% 110 319)

    Changes in hue:

    LCH(50% 70 35)
    LCH(50% 70 135)
    LCH(50% 70 280)

    Learn more: LCH colour picker

    See also: Perceptually uniform color spaces

    Interpolation and more

    Since color spaces represent and organize colors differently, the path to reach one color from another is not the same on different spaces. This means that there are many possible ways to interpolate between two colors to create e.g. a gradient. For example:

    interpolation examples 1

    Try it out: Color JS - interpolation

    The CSS Color spec will provide more control over interpolation on additional color spaces. This is just one example where adding richer color capabilities to the Web dramatically broadens the range of tools available to authors when creating their sites.

    Learn more: Interpolation on CSS Color 4, Mixing colors on CSS Color 5.

    Color in Chromium

    Now let’s talk about Chromium. As you know, it is the Free Software portion of the Chrome and Edge Web browsers.

    The Web engine inside of it is called Blink and it implements the Web Platform standards that describe how to turn Web content into pixels on the screen.

    Blink itself is a fork of WebKit, which is the Web engine used by Safari and others.

    Render pipeline

    Blink basically creates a rendering pipeline that takes Web sources as input (pages, stylesheets, and so on).

    It parses them, applies styles, defines geometry, arranges the content into layers and tiles, paints those and sends them over to be displayed.

    This job of actually painting those pixels is carried out by a multiplatform graphics library called Skia.

    Blink pipeline

    Learn more: Life of a Pixel (Chromium team).

    Richer colors

    In Chromium, there is already some support for color management, @media queries (gamut), color profiles (tags) in images, and so on.

    Learn more about embedded color profiles in images: Digital-Image Color Spaces (Jeffrey Friedl).

    There is now also an intent to experiment with additional color spaces for canvas, WebGL and WebGPU.

    See: Color managing canvas contents, intent to ship.

    However, there isn’t yet support for using richer color spaces with individual Web elements like we have seen in the previous section.

    Within Blink, CSS colors are parsed and stored into a small structure with just 32 bits: that means 8 bits per RGB color channel (plus 8 more for transparency).

    See: color.h (Chromium).

    These colors are eventually handed over to the Skia library to carry out the actual drawing. Skia then uses its own similar 32-bit format.

    See: SkColor in SkColor.h (Chromium).
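
To illustrate how constrained that representation is, a packed 32-bit ARGB color boils down to something like this (a sketch of the idea, not the actual Blink or Skia code):

#include <cstdint>

// Pack four 8-bit channels into a single 32-bit value, similar in spirit to SkColor.
// There are only 256 steps per channel and no record of the intended color space.
constexpr uint32_t PackARGB(uint8_t a, uint8_t r, uint8_t g, uint8_t b) {
    return (uint32_t{a} << 24) | (uint32_t{r} << 16) | (uint32_t{g} << 8) | uint32_t{b};
}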

We can show this on the previous diagram. A Web page specifies a color in sRGB, which is stored in a 32-bit format and passed across the rendering pipeline until it reaches Skia, where it is converted to a similar format, rasterized, and displayed.

    Blink pipeline colors

This means that throughout Blink’s rendering pipeline colors are represented using only 32 bits, which limits the precision and the richness of the colors that Chromium can use and display on websites.

    Some ideas from WebKit

    Blink started as a fork of WebKit in 2013 and although they have evolved in different ways, we can still look at WebKit to get some inspiration for storing and manipulating high-precision colors.

    Without getting into too much detail, WebKit supports a high precision representation of colors that stores four float values plus a colorspace. LAB is one of those spaces that may be used to define colors in WebKit.

    Learn more: Improving Color on the Web, Wide Gamut Color in CSS with Display-P3 (WebKit).

    See also: Color.h, ColorComponents.h and ColorSpace.h (WebKit).

Having this support for higher-precision colors has already made it possible to implement several color features in WebKit, such as the Display-P3 support shown earlier.

An important difference is that WebKit uses the platform’s graphics libraries directly (e.g. CoreGraphics on Mac), whereas Chromium uses Skia across different platforms. Support for displaying colors beyond the sRGB gamut may not be available on all platforms.

    High precision colors in Skia

    Interestingly, Skia does not have the same limits in color precision and range as Blink does.

    Internally, it has a format for high-precision colors that holds four float values, and it is also able to take color spaces into account.

    See: SkRGBA4f and SkColor4f in SkColor.h.

    Much of the Skia API is already able to take as input a colorspace and one or more high precision colors defined in it. Skia is also able to convert between source and destination color spaces, so colors can be manipulated with flexibility before being adapted to be displayed on concrete hardware.

    In Skia, a color space is defined by a transfer function and a gamut. See: SkColorSpace.h.

    So, Skia is able to paint richer colors on hardware that supports them.
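
As a rough sketch of what that looks like on the Skia side (simplified and outside of Chromium; it assumes a canvas whose backing surface can actually show wide-gamut colors):

#include "include/core/SkCanvas.h"
#include "include/core/SkColorSpace.h"
#include "include/core/SkPaint.h"
#include "include/core/SkRect.h"

void DrawVividRed(SkCanvas* canvas) {
    // A Skia color space is a transfer function plus a gamut: here, the
    // sRGB transfer curve combined with the wider Display-P3 gamut.
    sk_sp<SkColorSpace> p3 =
        SkColorSpace::MakeRGB(SkNamedTransferFn::kSRGB, SkNamedGamut::kDisplayP3);

    // High-precision color: four floats instead of a packed 32-bit value.
    SkColor4f vivid_red{1.0f, 0.0f, 0.0f, 1.0f};

    SkPaint paint;
    paint.setColor4f(vivid_red, p3.get());  // interpret the floats in the P3 space
    canvas->drawRect(SkRect::MakeWH(100, 100), paint);
}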

    This means that, if we managed to get that rich color information defined in the Web sources at the beginning of the pipeline all the way to Skia at the end of the pipeline, we would be able to paint those colors correctly on the screen :)

    However, two more things would be needed in order to implement the full functionality of the CSS Color specs. First, Skia’s representation of high-precision colors still uses the RGBA structure, so out of the box Skia does not support other formats like LAB or LCH.

    Secondly, as we have seen, the CSS Color spec provides ways to specify the interpolation colorspace for gradients, transitions, etc. Blink relies on Skia for this interpolation, but Skia does not provide fine-grained control: Skia will always use the colorspace where the source colors have been defined, and does not support interpolating in a different space.

These two limitations point towards the need for an additional layer between the Blink painting code and Skia that is able to translate the richer color information into formats that Skia can understand and use to display those colors on the screen.

    Summary

    As a very broad summary, the first step to support wider, richer color gamuts in Blink is to parse the CSS code using those new features.

    Those wide gamut colors and their colorspaces need to be stored in a high-precision format that can be used throughout Blink’s rendering pipeline.

At the end of the pipeline, that information needs to be translated so Skia can paint those colors correctly on the target hardware. For this, we will also need more fine-grained control over interpolation and probably other changes.

    This work is not straightforward because it would touch a lot of different components, and it might also have an impact on memory, on performance, on how paint information is recorded and used, etc.

    For Web authors it is important that these features are available at the same time, so they can rely on the new functionality provided by the CSS Color spec.

    In Closing

    I hope that with this you got a better understanding of the value of adding richer colors to the Web and the scope of the work that would be needed to do so in Chromium.

These are some steps on the long road to increase the expressivity of the Web platform and to widen the range of tools that are available to authors when creating the Web.

    Thank you very much for reading.

    by Felipe Erias at July 01, 2021 12:00 AM

    June 28, 2021

    Miguel A. Gómez

    Unresponsive web processes in WPE and WebKitGTK

    In case you’re not familiar with WebKit‘s multiprocess model, allow me to explain some basics: In WebKitGTK and WPE we currently have three main processes which collaborate in order to ultimately render the web pages (this is subject to change, probably with the addition of new processes):

1. The UI process: this is the process of the application that instantiates the web view. For example, Cog when using WPE or Epiphany when using WebKitGTK. This process is usually in charge of interacting with the user/application and sending events and requests to the other processes.
    2. The network process: the goal of this process is to perform network requests for the different web processes and make the results available to them.
3. The web process: this is the process that really performs the heavy lifting. It’s in charge of requesting the resources required by the page (through the network process), creating the DOM tree, executing the JS code, rendering the content, responding to user events, etc.

In a simple case, we would have a UI process, a web process and a network process working together to render a page. There are situations where (due to security reasons, performance, build options, etc.) the same UI process can be using more than a single web process at the same time, while these share the network process. I won’t go into details about all the possibilities here because that’s beyond the goal of this post (but it’s a good idea for a future post!). If you want more information related to this you can search for topics like process prewarm, process swap on navigation or service workers running on dedicated processes. In any case, at any given moment, one of those web processes is always the main one performing all the tasks mentioned before, and the others are helpers for concrete tasks, so the situation is equivalent to the simpler case of a single UI process, a web process and a network process.

    Developers know that processes have the bad habit of getting hung sometimes, and that can happen here as well to any of the processes. Unfortunately, in many cases that’s probably due to some kind of bug in the code executed by the processes and we can’t do much more about that than fixing the bug. But there’s a special kind of block that affects the web process only, and it’s related to the execution of JS code. Consider this simple JS script for example:

    setTimeout(function() {
        while(true) { }
    }, 1000);
    

    This will create an infinite loop during the JS execution after one second, blocking the main thread of the web process, and taking 100% of the execution time. The main thread, besides executing the JS code, is also in charge of processing incoming events or messages from the UI process, perform the layout, initiate redraws of the page, etc. But as it’s locked in the JS loop, it won’t be able to perform any of those tasks, causing the web process to become unresponsive to user input or API calls that come through the UI process. There are other situations that could make the process unresponsive, but the execution of faulty JS is probably the most common one.

WebKit has some internal tools to detect these situations, which mainly consist of timeouts that mark the process as unresponsive when some expected reply is not received in time from the web process. But on WebKitGTK and WPE there wasn’t a way to notify the application that’s using the web view that the web process had become unresponsive, or a way to recover from this situation (other than closing the application or the tab). This is fixed in trunk for both ports by adding two components to the WebKitWebView class:

1. A new property named is-web-process-responsive: this property will signal responsiveness changes of the web process. The application using the web view can connect to the notifications of this object property as with any other GObject class, and can also get its value using the new webkit_web_view_get_is_web_process_responsive() API method.
2. A new API method webkit_web_view_terminate_web_process() that can be used to kill the web process when it becomes unresponsive (but it will also work with responsive ones). Any load performed after this call will spawn a new and shiny web process. When this method is used to kill a web process, the web view will emit the web-process-terminated signal with WEBKIT_WEB_PROCESS_TERMINATED_BY_API as the termination reason.

With these changes, the browser can now listen for changes to the is-web-process-responsive property. If the web process becomes unresponsive, the browser can show a dialog informing the user of the problem, and ask whether to wait or kill the process. If the user chooses to kill the process, the browser can use webkit_web_view_terminate_web_process() to kill the problematic process and then reload (or load a new page) to go back to a functional state.
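
As a minimal sketch of that flow (WebKitGTK flavour shown, WPE is analogous; simplified, with no user dialog and no error handling):

#include <webkit2/webkit2.h>

static void
on_responsiveness_changed (WebKitWebView *web_view,
                           GParamSpec    *pspec,
                           gpointer       user_data)
{
    if (webkit_web_view_get_is_web_process_responsive (web_view))
        return;

    /* Here a real browser would show a dialog and let the user decide.
     * For brevity, we just kill the unresponsive web process and reload. */
    webkit_web_view_terminate_web_process (web_view);
    webkit_web_view_reload (web_view);
}

static void
monitor_web_process (WebKitWebView *web_view)
{
    /* Get notified whenever the responsiveness of the web process changes. */
    g_signal_connect (web_view, "notify::is-web-process-responsive",
                      G_CALLBACK (on_responsiveness_changed), NULL);
}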

These changes, among other wonderful features we’re developing at Igalia, will be available in the upcoming 2.34 release of WebKitGTK and WPE. I hope they are useful for you 🙂

    Happy coding!

    by magomez at June 28, 2021 02:15 PM

    Samuel Iglesias

    My experience in esLibre 2021

This year, I decided to participate as a speaker in the esLibre 2021 conference. esLibre is a Spanish free software conference that covers a lot of different topics related to open-source projects: from the technical point of view to their social impact.

This year the conference had talks about game development with Godot, KDE, LibreOffice, and Free Software in universities, among many other topics. Check out the program.

    esLibre 2021

This was my first time participating in this conference and I enjoyed it a lot. Huge applause to the organization team for the hard work they put into this edition, for helping out the speakers with several testing days, and for their kindness in answering any questions from me and other attendees. They did a superb job!

My talk was an introduction to Mesa, where I covered things like where Mesa sits in the open-source graphics stack, a summary of what it does, the drivers implemented in Mesa, how our community is organized, and how to contribute to it. If you know Spanish, you can check it out here (PDF). But in case you want an English version of it, this talk is very similar to the one I gave at Ubucon Europe 2018.

    My esLibre talk was recorded as well! I’ll update this post with the link to the recording once it is publicly available.

    Enjoy it!

    Introduction to Mesa

    June 28, 2021 08:20 AM

    June 24, 2021

    Gyuyoung Kim

    The progress of the legacy IPC migration in Chromium

    Recall of the legacy IPCs migration

As you might know, Chromium has a multi-process architecture that involves many processes communicating with one another through IPC (Inter-Process Communication). However, traditionally IPC has used an approach which the Chromium project feels is not the best fit for achieving the project’s goals in terms of security, stability, and integration of a large number of components. Igalia has been working on many aspects of changing this. One very substantial effort has been converting the legacy named-pipe IPC implementation to the Mojo framework. Mojo itself is approximately 3 times faster than the legacy IPC and involves one-third less context switching. We can remove unnecessary layers like the //content/renderer layer to communicate between different processes. Besides, we can easily connect interface clients and implementations across arbitrary inter-process boundaries. Thus, replacing the legacy IPC with Mojo is one of the goals for Onion Soup 2.0 [1] and it’s a prerequisite to further Onion Souping and refactoring. The uses of legacy IPC were spread very widely and were blocking several high-impact projects, including BackForwardCache, Multiple Blink Isolates, and RenderDocument. There were about 450 legacy IPC messages in January 2020: approximately 264 in the //content layer and 194 in other areas.

    The current Progress

Igalia has been working in earnest on converting the legacy IPCs to Mojo since 2020. We mainly focused on converting the legacy IPCs in //content during the last year: 293 messages have been migrated and only 3 remain, which means 98% has been done since August 2019. As the IPC message conversion was almost complete in the //content layer at the end of March 2021, the only 3 remaining messages are related to the Gin Java Bridge. We also recently started working on other areas, for example Printing, Extensions, and Android WebView: 49 messages have been migrated and 150 messages still remain, which means 24% has been done since August 2019. Since BlinkOn 13, all IPCs in the Android WebView, Media, and Printing modules have been successfully migrated to Mojo, and Igalia is now working on the conversion in Extensions. Besides that, the conversion of the legacy Java IPC used for communication between the C++ native and Java layers on Android was put on hold for a while because of some issues with the WebView API in the content layer.

The graph of the progress of the legacy IPC migration [2]

The table of the progress of the legacy IPC migration [2]

I shared this progress in a lightning talk at BlinkOn 14. You can find the video and the slides at the links.

    References

    [1] OnionSoup 2.0
    [2] Migration to Legacy IPC messages

    by gyuyoung at June 24, 2021 03:23 AM

    June 21, 2021

    Ricardo García

    VK_EXT_multi_draw released for Vulkan

    The Khronos Group has released today a new version of the Vulkan specification that includes the VK_EXT_multi_draw extension. This new extension has been championed by Mike Blumenkrantz, contracted by Valve to work on Zink, an OpenGL implementation that’s part of Mesa and runs on top of Vulkan. Mike has been working very hard to make OpenGL-on-Vulkan performant and better, and came up with this extension to close an existing gap between the two APIs. As part of the ongoing collaboration between Igalia and Valve, I had the chance to participate in the release process by reviewing the specification text in depth, providing feedback and fixes, and writing a set of CTS tests to check conformance for drivers implementing the extension. As you can see in the contributors list, VK_EXT_multi_draw had input and feedback from more vendors. Special mention to Jason Ekstrand from Intel, who provided an initial review of the text, and Piers Daniell from NVIDIA, who was also involved since the early stages.

    Thanks to VK_EXT_multi_draw, Vulkan will have equivalents to the glMultiDrawArrays and glMultiDrawElements functions from OpenGL. They’re called vkCmdDrawMultiEXT and vkCmdDrawMultiIndexedEXT. These two new functions allow recording a batch of draw commands in a command buffer using a single call, and they can be used in situations where an application would be recording a high number of draws without changing state. Although Vulkan already had mechanisms that allowed applications to record batches of draw commands in the form of indirect draws, these need the array of draw parameters to reside in a GPU-accessible buffer. VK_EXT_multi_draw, on the other hand, lets applications provide arrays of draw parameters using CPU memory.
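
For illustration, recording a small batch could look roughly like this (a sketch with made-up numbers; it assumes the extension is enabled, vkCmdDrawMultiEXT has been loaded, e.g. via vkGetDeviceProcAddr, and a graphics pipeline is already bound in the command buffer):

#include <vulkan/vulkan.h>

/* Record three draws with a single call; the parameter array lives in CPU memory. */
void record_batched_draws(VkCommandBuffer cmd, PFN_vkCmdDrawMultiEXT pfn_vkCmdDrawMultiEXT)
{
    const VkMultiDrawInfoEXT draws[3] = {
        {  0, 36 },  /* firstVertex, vertexCount */
        { 36, 36 },
        { 72, 36 },
    };

    pfn_vkCmdDrawMultiEXT(cmd,
                          3,                            /* drawCount     */
                          draws,
                          1,                            /* instanceCount */
                          0,                            /* firstInstance */
                          sizeof(VkMultiDrawInfoEXT));  /* stride between entries */
}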

    vkCmdDrawMultiEXT is essentially equivalent to calling vkCmdDraw multiple times in a row, and vkCmdDrawMultiIndexedEXT does the same for vkCmdDrawIndexed. To improve application performance and reduce CPU overhead, Vulkan drivers are allowed and encouraged to omit checks for API function arguments provided by applications (these correctness checks are provided by the Vulkan Validation Layers mainly during application development), and thanks to mechanisms like primary and secondary command buffers, Vulkan makes it possible to prepare sequences of commands for the GPU to execute using multiple threads and CPU cores. In this situation, you may be wondering how much of an improvement the new functions provide apart from saving a few microseconds processing some function calls. In other words, what’s the practical difference between calling vkCmdDraw a thousand times and batching a thousand draws using vkCmdDrawMultiEXT?

The answer is that most of the overhead of recording a draw command doesn’t come from having to call a function, but from the checks the implementation has to run when recording the command. These checks may not be related to correctness, but to additional actions and options that may need to be taken depending on the state of the command buffer at the moment the draw command is recorded. For example, see the calls to radv_before_draw when RADV processes a draw command (note: RADV is Mesa’s super nice free software Vulkan driver for AMD cards). These checks only need to run once when using the new functions. In benchmark-like scenarios using real drivers, Mike has been able to verify that, while the overhead varies per driver and some of them are lightweight and have minimal overhead, some mainstream drivers can double their draw call processing rate when using VK_EXT_multi_draw.

    Mike has work-in-progress implementations for Mesa’s ANV and RADV drivers (the Vulkan drivers for Intel and AMD GPUs, respectively) which pass conformance and will hopefully land soon in Mesa’s main branch, and more drivers are expected to ship support for the extension in the near future.

    June 21, 2021 06:30 AM