Planet Igalia

November 16, 2017

Alberto Garcia

“Improving the performance of the qcow2 format” at KVM Forum 2017

I was in Prague last month for the 2017 edition of the KVM Forum. There I gave a talk about some of the work that I’ve been doing this year to improve the qcow2 file format used by QEMU for storing disk images. The focus of my work is to make qcow2 faster and to reduce its memory requirements.

The video of the talk is now available and you can get the slides here.

The KVM Forum was co-located with the Open Source Summit and the Embedded Linux Conference Europe. Igalia was sponsoring both events once again this year, and I was also there together with some of my colleagues. Juanjo Sánchez gave a talk about WPE, the WebKit port for embedded platforms that we released.

The video of his talk is also available.

by berto at November 16, 2017 10:16 AM

November 14, 2017

Michael Catanzaro

Igalia is Hiring

Igalia is hiring web browser developers. If you think you’re a good candidate for one of these jobs, you’ll want to fill out the online application accompanying one of the postings. We’d love to hear from you.

We’re especially interested in hiring a browser graphics developer. We realize that not many graphics experts also have experience in web browser development, so it’s OK if you haven’t worked with web browsers before. Low-level Linux graphics experience is the more important qualification for this role.

Igalia is not just a great place to work on cool technical projects like WebKit. It’s also a political and social project: an egalitarian, worker-owned cooperative where everyone has an equal vote in company decisions and receives equal pay. It’s been around for 16 years, so it’s also not a startup. You can work remotely from wherever you happen to be, or from our office in A Coruña, Spain. You won’t have a boss, but you will be expected to work well with your colleagues. It’s not the right fit for everyone, but there’s nowhere I’d rather be.

by Michael Catanzaro at November 14, 2017 03:04 PM

November 13, 2017

Diego Pino

Snabb explained in less than 10 minutes

Last month I attended the 20th edition of GORE (Spain's Network Operators Group meeting), where I delivered an introductory talk about Snabb (in Spanish). The slides of the talk are also available online (in English).

Taking advantage of that presentation, I decided to write an introductory article about Snabb: something that lets anyone easily understand what Snabb is.

What is Snabb?

Snabb is a toolkit for developing network functions in user-space. This definition refers to two keywords that are worth clarifying: network functions and user-space.

What’s a network function?

A network function is any program that does something to network traffic. What kind of things can be done to traffic? For instance: read packets, modify their headers, create new packets, discard packets or forward them. Any network function is a combination of these basic operations. Here are some examples (a small code sketch follows the list):

  • Filtering function (e.g. firewalling): read incoming packets, compare them against a table of rules and execute an action (forward or drop).
  • Traffic mapping (e.g. NAT): read incoming packets, modify their headers and forward them.
  • Encapsulation (e.g. VPN): read incoming packets, create a new packet, embed the original packet into the new one and send it.
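
As a rough illustration of how these basic operations combine, here is a minimal filtering function written in C. The packet structure and the rule are made up for this example; they are not part of any real toolkit:

#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

/* A packet as a plain buffer plus its length. */
struct pkt {
	const uint8_t *data;
	size_t len;
};

/* Hypothetical filtering function: returns true to forward the packet,
 * false to drop it. The "rule table" here is a single hard-coded rule:
 * only accept packets big enough to hold an Ethernet + IPv4 header. */
static bool filter(const struct pkt *p)
{
	return p->len >= 14 + 20;
}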

What’s user-space networking?

In the last few years, there has been a new trend in how network functions are written. This trend consists of implementing the entire network function in user-space and not leaving any processing to the kernel.

Traditionally, when writing network functions, we use the abstractions provided by the OS. The goal of any OS is to create abstractions over hardware that programs can use. This happens at many levels. For instance, when dealing with a hard drive we don't need to think of heads, cylinders and sectors but use a higher-level abstraction: the filesystem. Networking is another layer abstracted by the OS. As programmers, we don't deal with the NIC directly; instead we work with sockets and have access to APIs for dealing with the TCP/IP stack.

However, the addition of higher-level abstractions implicitly adds overhead to the processing of our network function. The first disadvantage is that the function is split across two lands, user-space and kernel-space, and switching between them has a cost. And even if we move as much logic as possible into the kernel, there are inherent costs caused by the kernel's networking layer.

The need to skip the kernel and program network functions entirely in user-space was triggered by the continuous improvement of hardware. Today it is possible to buy a 10G NIC for less than 200 euros. With that, the idea of building high-performance network appliances out of commodity hardware became feasible: someone could pick an Intel Xeon, fill the available PCI slots with 10G NICs and expect to have the equivalent of a very expensive Cisco or Juniper router for a fraction of its cost.

If we drive the hardware described above entirely through Linux we won't be able to squeeze out all of its performance. Every packet hitting the NICs will have to go through the kernel's networking layer, and that has a cost caused by all the operations the kernel performs on packets before they are available for our program to manipulate. To understand how much of a problem this is, I need to introduce the concept of the budget of a network function.

Know your network function budget

If we want to make the most of our hardware we generally would like to run our network function at line rate, that is, at the maximum speed of the NIC. How much time is that? On a 10G NIC, if we are receiving packets with an average size of 550 bytes at maximum speed, then we are receiving a new packet every 440ns. That's all the time we have available to run our network function on each packet.
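
The 440ns figure is simply the time a 550-byte packet occupies on a 10 Gbit/s link (ignoring Ethernet framing overhead). A quick back-of-the-envelope check in C:

#include <stdio.h>

int main(void)
{
	const double link_speed_bps = 10e9;   /* 10 Gbit/s           */
	const double packet_bytes   = 550.0;  /* average packet size */

	/* Budget per packet = bits per packet / bits per second. */
	double budget_ns = (packet_bytes * 8.0 / link_speed_bps) * 1e9;

	printf("per-packet budget: %.0f ns\n", budget_ns);  /* prints 440 ns */
	return 0;
}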

Usually a NIC works by placing incoming packets in a queue or buffer. This buffer is actually a ring buffer, which means there are two cursors pointing into it: the Rx cursor and the Tx cursor. When a new packet arrives, it is written at the Rx position and the cursor gets updated. When a packet leaves the buffer, it is read at the Tx position and the cursor gets updated after the read. Our network function fetches packets from the Tx cursor. If it is too slow processing a packet, the Rx cursor will eventually overtake the Tx cursor. When that happens there is a packet drop (a packet was overwritten before it was consumed).

Let's go back to the 440ns number. How much time is that? Kernel hacker Jesper Brouer discusses this issue in his excellent talk “Network stack challenges at increasing speed” (I also recommend LWN's summary of the talk: Improving Linux networking performance). Here is the cost of some common operations (costs vary depending on hardware, but the order of magnitude is similar across different hardware settings):

  • Spinlock (Lock/Unlock): 16ns.
  • L2 cache hit: 4.3ns.
  • L3 cache hit: 7.9ns.
  • Cache miss: 32ns.

Taking those numbers into account, 440ns doesn't seem like a lot of time. The cost of system calls is also prohibitive, so they should be minimized as much as possible.

Another important thing to notice is that the smaller the packet, the smaller the budget. On a 10G NIC, if we are receiving packets of 64 bytes on average (the minimum Ethernet frame size), then we are receiving a new packet every 59ns. In this scenario two straight cache misses would eat the whole budget.

In conclusion, at these NIC speeds the additional overhead added by the kernel's networking layer is not trivial; it is big enough to affect the execution of our network function. Since our budget gets reduced, packets are more likely to be dropped at higher speeds or with smaller packet sizes, limiting the overall performance of our network function.

NOTE: This is a general picture of the issue of doing high-performance networking in the Linux kernel. The kernel hackers are not ignorant of these problems and have been working on ways to fix them over the last few years. In that regard it is worth mentioning the addition of XDP (eXpress Data Path), a kernel abstraction to execute network functions as close to the hardware as possible. But that's a subject for another post.

By-passing the kernel

User-space networking needs to by-pass the kernel's networking layer so it can squeeze all the performance out of the underlying hardware. There are several strategies to do that: user-space drivers, PF_RING, Netmap, etc. (Cloudflare has an excellent article on kernel by-pass covering several of those strategies). Snabb chooses to handle the hardware directly, that is, to provide user-space drivers for the NICs it supports.

Snabb offers support mostly for Intel cards (although some Solarflare and Mellanox models are also supported). Implementing a driver, either in kernel-space or user-space, is not an easy task. It is fundamental to have access to the vendor's datasheet (generally a very large document) to know how to initialize the NIC, how to read packets from it, how to transfer data, etc. Intel provides such a datasheet. In fact, a few years ago Intel started a project with a similar goal: DPDK, an open-source project that implements drivers in user-space. Although it originally only provided drivers for Intel NICs, as the adoption of the project increased other vendors started to add drivers for their hardware.

Inside Snabb

Snabb was started in 2012 by free software hacker Luke Gorrie. Snabb provides direct access to high-performance NICs, but in addition it also provides an environment for building and running network functions.

Snabb is composed of several elements:

  • An Engine, that runs the network functions.
  • Libraries, that ease the development of network functions.
  • Apps, reusable software components that generally manipulate packets.
  • Programs, ready-to-use standalone network functions.

A network function in Snabb is a combination of apps connected together by links. Snabb's engine is in charge of feeding the app graph with packets and giving every app a chance to execute.

The engine processes the app graph in breaths. A breath consists of two steps:

  • Inhale: puts packets into the graph.
  • Process: every app has a chance to receive packets and manipulate them.

During the inhale phase the pull method of an app gets executed. Apps that implement this method act as packet generators within the app graph, placing packets on the app's links. Generally there is only one app of this kind per graph.

During the process phase the push method of an app gets executed. This gives every app a chance to read packets from its incoming links, do something with them and, usually, place them on its outgoing links.

Hands-on example

Let's build a network function that captures packets from a 10G NIC, filters them using a packet-filtering expression and writes the filtered packets to a pcap file. Such a network function would look like this:

Snabb basic filter

In Snabb code, the graph above could be expressed like this:

function run()
	local c = config.new()

	-- App definition.
	config.add(c, "nic", Intel82599, {
		pci = "0000:04:00.0"
	})
	config.add(c, "filter", PcapFilter, "src port 80")
	config.add(c, "pcap", Pcap.PcapWriter, "output.pcap")

	-- Link definition.
	config.link(c, "nic.tx        -> filter.input")
	config.link(c, "filter.output -> pcap.input")

	engine.configure(c)
	engine.main({duration=10})
end

A configuration is created describing the app graph of the network function. The configuration is passed down to Snabb which executes it for 10 seconds.

When Snabb's engine runs this network function it executes the pull method of each app to feed packets into the graph links (the inhale step). During the process step, the push method of each app is executed so apps have a chance to fetch packets from their incoming links, do something with them and, usually, place them on their outgoing links.

Here's what the real implementation of the PcapFilter.push method looks like:

function PcapFilter:push ()
	while not link.empty(self.input.rx) do
		local p = link.receive(self.input.rx)
		if self.accept_fn(p.data, p.length) then
			link.transmit(self.output.tx, p)
		else
			packet.free(p)
		end
	end
end

A packet in Snabb is a really simple data structure. Basically, it consists of a length field and an array of bytes of fixed size.

struct packet {
	uint16_t length;
	unsigned char data[10*1024];
};

A link is a ring-buffer of packets.

struct link {
	struct packet *packets[1024];
	// the next element to be read
	int read;
	// the next element to be written
	int write;
};
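
To make the ring-buffer semantics concrete, here is a rough sketch in C of the empty/full checks over such a structure. This is illustrative only, not Snabb's actual implementation (the helper names are made up):

#include <stdbool.h>

#define LINK_MAX_PACKETS 1024

struct packet;                     /* as defined above */

struct link_rb {                   /* same shape as the link struct above */
	struct packet *packets[LINK_MAX_PACKETS];
	int read;                  /* next element to be read */
	int write;                 /* next element to be written */
};

/* The link is empty when the reader has caught up with the writer. */
static bool link_empty(const struct link_rb *l)
{
	return l->read == l->write;
}

/* The link is full when writing one more packet would land on a slot that
 * has not been read yet; in the NIC ring buffer described earlier this is
 * the point where packets start being dropped (overwritten). */
static bool link_full(const struct link_rb *l)
{
	return (l->write + 1) % LINK_MAX_PACKETS == l->read;
}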

Every app has zero or more input links and zero or more output links. The links are created at runtime when the graph is defined. In the example above, the nic app has one outgoing link (nic.tx); the filter app has one incoming link (filter.input) and one outgoing link (filter.output); and the pcap app has one incoming link (pcap.input).

It might be surprising that packets and links are defined in C code instead of Lua. Snabb runs on top of LuaJIT, an ultra-fast virtual machine for executing Lua programs. LuaJIT implements an FFI (Foreign Function Interface) to interact with C data types and call C runtime functions or external libraries directly from Lua code. In Snabb most data structures are defined in C, which allows data to be packed more compactly and efficiently.

local ether_header_t = ffi.typeof [[
/* All values in network byte order.  */
struct {
   uint8_t  dhost[6];
   uint8_t  shost[6];
   uint16_t type;
} __attribute__((packed))
]]

Calling a C-runtime function is really easy too.

ffi.cdef[[
  void syslog(int priority, const char *format, ...);
]]
ffi.C.syslog(2, "error:...");

Wrapping up and last thoughts

In this article I've covered the basics of Snabb. I showed how to use Snabb to build network functions and explained why Snabb is a very convenient toolkit for writing this type of program. Snabb runs very fast since it by-passes the kernel, which makes it very useful for high-performance networking. In addition, Snabb is written in the high-level language Lua, which greatly lowers the barrier to entry for writing network functions.

However, there are more things in Snabb that I left out of this article. Snabb comes with a set of ready-to-run programs, as well as a vast collection of apps and libraries which can help speed up the construction of new network functions.

You don't need to own an Intel 10G card to start using Snabb today. Snabb can also be used over TAP interfaces. It won't be highly performant, but it's the best way to get started with Snabb.

In a future article I plan to cover a more elaborate example of a network function using TAP interfaces.

November 13, 2017 06:00 AM

November 08, 2017

Eleni Maria Stea

Fosscomm 2017

FOSSCOMM (Free and Open Source Software Communities Meeting) is a Greek conference aimed at free-software and open-source enthusiasts, developers, and communities. The event is organized and run entirely by volunteers (usually university students, communities and Linux User Groups) and takes place in a different city every year. Attendance is free and everyone is welcome to give a presentation or a workshop related to free and open source projects.

I always try to attend this meeting when the dates and the place are convenient, as it is a great opportunity to meet old friends and hang out with geeky people.

This year's Fosscomm 2017 (website is in Greek) was held at the Harokopio University of Athens during the weekend of 4-5 November 2017.

I grabbed the opportunity to go and give a talk about Mesa 3D, a project to which Igalia's graphics team has been making contributions and releases for the last 5 years.

My presentation was titled: “Hacking on Mesa 3D” and it was a short introductory talk about the OpenGL implementation, the OpenGL extension system, the GLSL compiler, the drivers and some of the development and debugging processes we use.

Slides and video (in Greek because that was the conference language):



To my surprise, my talk wasn’t the only one mentioning Igalia. 🙂 Dimitris Glynos, one of the Co-Founders of Census (and FOSSCOMM sponsor) gave the talk “FOSS is all we got: building a competitive IT skill set in Greece today” and mentioned us among other examples of companies that work successfully on open source projects.

Among the other talks I’ve attended, I particularly liked the “Linux Metrics” workshop by Effie Mouzeli and Giorgos Kargiotakis (during FOSSCOMM day #1), that was aiming to teach users and developers how to use metrics tools to detect performance issues. It was so successful that they’ve been asked to re-run it the following day.

Most of the other presentations and workshops, as well as the schedule, can be found here (for those who can understand Greek): https://www.fosscomm.hua.gr/. The FOSSCOMM organizers will soon edit the videos and upload them to a YouTube channel.

this is my post-FOSSCOMM cup collection

I’d like to thank the people who attended the presentation, the FOSSCOMM2017 organization team who did such a great job on preparing and hosting the event and of course Igalia that is giving me the opportunity to work on cool graphics stuff.

See you at FOSSCOMM 2018! 😉

 

by hikiko at November 08, 2017 07:30 AM

November 01, 2017

Martin Robinson

Small Things

Even between two highly-developed western countries, there are a lot of cultural differences. After moving, I experienced the sort of culture shock that the Internet warns you about. Thankfully, the passage of time means that grumbling noon-time stomachs gradually give way to curiously peckish 2:30pm lunches. Instead of sitting in dread, willing your useless, polite American hands to flag a waiter, you manage to order a tiny beer using only your eyeballs. Big differences fade into the background so much that maybe you start to keep a list of them, just to avoid the feeling that you are forgetting some original piece of yourself.

This new familiarity begins to expose the incredibly long tail of subtle differences that have been hanging out quietly in the background. Unnamed onomatopoeias have a completely different sound. People are making gestures with their hands while they speak, and those gestures actually mean something very clear. Your brain calmly catalogs these curiosities as they become too trivial to comment on.

If you are like me, you stare at the street, the stoplights, and the sidewalks. Suddenly, the endless, small-scale war being waged in the space between the double (and triple) parkers and the buildings becomes apparent. You see the rows of bollards silently holding back a tide of cars and delivery vans. Unspoken rules from your home country no longer apply here, after having taken them for granted for years.

I don’t want to blab on too long about mundane things, so I will just point to the example of curb cuts. In the US we use curb cuts to connect the roadway to private garages, driveways, and parking lots. Thousands of dollars are spent lovingly crafting each of these small cement altars to the passage of automobiles. The sidewalk itself kneels to the pavement, so that cars can smoothly and comfortably climb into pedestrian space. This is all, of course, at the expense of people walking and in wheelchairs who often have to travel across an uneven sidewalk and wait for cars as they appear and (hopefully) leave. A curb cut is a signal that at any moment a car may enter the sidewalk and that it has a right to be there.

Spanish cities sometimes use little ramps instead. They look cheap and their metal surface is usually painted a bright and gaudy yellow. Their angle is decidedly steeper compared to modern curb cuts in the US, which means it is not easy to drive onto the sidewalk quickly or comfortably. Additionally, they look like they can also be added and removed cheaply and without modifying the sidewalk at all. Even more bizarrely, they are installed on the sacred roadway itself, so the sidewalk remains level for the all people who might happen to be using it. Sometimes, they even extend so far out into the roadway that parallel parking would be difficult or impossible, which prevents the space from becoming a private parking spot.

These little ramps are a small detail of the city, but for me they send a clear message. They announce to cars that they are entering a segregated pedestrian space. This invitation is conditional on moving slowly and carefully and can be revoked at any time with a hydraulic wrench. Maybe they are common simply because they are a cheap leftover from a period when this was a poorer country. I have a feeling that as time goes on, they will slowly be replaced by compact curb cuts descending from nice, new sidewalks. Despite all this, I feel a little bit of sadness, because their economy and their imperfection made the sidewalk just that much nicer.

November 01, 2017 04:00 AM

October 30, 2017

Víctor Jáquez

GStreamer Conference 2017

This year, the GStreamer Conference happened in Prague, along with the traditional autumn Hackfest.

Prague is a beautiful city, though this year I couldn’t visit it as much as I wanted, since the Embedded Linux Conference Europe and the Open Source Summit also took place there, and Igalia, being a Linux Foundation sponsor, had a booth in the venue, where I talked about our work with WebKit, Snabb, and obviously, GStreamer.

But let's get back to the GStreamer Hackfest and Conference.

One of the things I like most about the GStreamer project is its community: the people involved in it, writing code and sharing their work with many others. They might appear a bit tough at first (or at least that's how they looked to me), but in reality they are all kind and talented people. And I'm proud to consider myself part of this community. Nonetheless, it has a diversity problem, as many other open source communities do.

GStreamer Conference 2017

During the Hackfest, Hyunjun and I met with Sree and talked about the plans for GStreamer-VAAPI, the new features in VA-API and libva, and how we could map them to GStreamer's design. We also talked about future developments in the msdk elements, merged one year ago into gst-plugins-bad. I also talked a bit with Nicolas Dufresne regarding kmssink and DMABuf.

At the Conference, which happened in the same venue as the hackfest, I talked with the authors of gstreamer-media-SDK. They are really energetic.

I delivered my usual talk about GStreamer-VAAPI. You can find the slides, as a web presentation, here. Also, as every year, our friends of Ubicast, recorded the talks, and made them available for streaming almost instantaneously:

My colleague Enrique talked in the Conference about the Media Source Extensions (MSE) on WebKit, and Hyunjun shared his experience with VA-API on Rust.

Also, in the conference venue, we showed a couple of demos. One of them was a MinnowBoard running WPE, rendering videos from YouTube using gstreamer-vaapi to decode video.

by vjaquez at October 30, 2017 04:24 PM

October 22, 2017

Frédéric Wang

Recent Browser Events

TL;DR

At Igalia, we attend many browser events. This is a quick summary of some recent conferences I participated in… or that gave me the opportunity to meet Igalians in Paris 😉.

Week 31: Paris - CSS WG F2F - W3C

My teammate Sergio attended the CSS WG F2F meeting as an observer. On Tuesday morning, I also made an appearance (but it was so brief that the people I met may not even have noticed me). Together with other browser vendors and WG members, Sergio gave an interview regarding the successful story of CSS Grid Layout. By the way, given our implementation work in WebKit and Blink, Igalia finally decided to join the CSS Working Group 😊. Of course, during that week I had dinner with Sergio and it was nice to chat with my colleague in a French restaurant in Montmartre.

Week 38: Tokyo - BlinkOn 8 - Google

Jacobo, Gyuyoung and I attended BlinkOn 8. I had nice discussions and listened to interesting talks about a wide range of topics (Layout NG, Accessibility, CSS, Fonts, Web Predictability & Standards, etc). It was a pleasure to finally meet in person some developers I had been in touch with during my projects on Ozone/Wayland and WebKit/iOS. For the lightning talks, we presented our activities on embedded Linux platforms and the Web Platform. Incidentally, it was great to see Igalia's work mentioned during the Next Generation Rendering Engine session. Obviously, I had the opportunity to visit places and taste Japanese food in Asakusa, Ueno and Roppongi 😋.

Week 40: A Coruña - Web Engines Hackfest - Igalia

I attended one of my favorite events, which gathers the whole browser community for three days of technical presentations, breakout sessions, hacking and Galician food. This year, we had many sponsors and attendees. It is good to see that the event is becoming more and more popular! It was long overdue, but I was finally able to make Brotli and WOFF2 installable as system libraries on Linux and usable by WebKitGTK+ 😊. I opened similar bugs in Gecko and the same could be done in Chromium. Among the things I enjoyed, I met Jonathan Kew in person and heard more about Antonio and Maksim's progress on Ozone/Wayland. As usual, it was nice to share time with colleagues, attend the assembly meeting, play football matches, have meals, visit Asturias… and tell one's story 😉.

Week 41: San Jose - WebKit Contributors Meeting - Apple

In the past months, I have mostly been working on WebKit at Igalia and I would have been happy to see my fellow WebKit developers. However, given the events in Japan and Spain, I was not willing to make another trip to the USA just after. Hence I had to miss the WebKit Contributors Meeting again this year 😞. Fortunately, my colleagues Alex, Michael and Žan were present. Igalia is an important contributor to WebKit and we will continue to send people and propose some talks next year.

Week 42: Paris - Monthly Speaker Series - Mozilla

This Wednesday, I attended a conference on Privacy as a Competitive Advantage at Mozilla's office. It was nice to hear about the increasing interest in privacy and to see the regulation made by the European Union in that direction. My colleague Philippe was visiting the office to work with some Mozilla developers on one of our projects, so I was also able to meet him in the conference room. Actually, Mozilla employees were kind enough to let me stay at the office after the conference… Hence I was able to work on Apple's Web Engine on a project sponsored by Google at the Mozilla office… probably something you can only do at Igalia 😉. Last but not least, Guillaume was also on holiday in Paris that week, so I let you imagine what happens when three French guys meet (hint: it involves food 😋).

October 22, 2017 10:00 PM

October 20, 2017

Adrián Pérez

Web Engines Hackfest, 2017 Edition

At the beginning of October I had the wonderful chance of attending the Web Engines Hackfest in A Coruña, hosted by Igalia. This year we were over 50 participants, which was great for associating even more faces with IRC nicknames, but more importantly it allowed hackers working at all levels of the Web stack to share a common space for a few days, making it possible to discuss complex topics and figure out the future of the projects which allow humanity to see pictures of cute kittens — among many other things.

Mandatory fluff (CC-BY-NC).

During the hackfest I worked mostly on three things:

  • Preparing the code of the WPE WebKit port to start making preview releases.

  • A patch set which adds WPE packages to Buildroot.

  • Enabling support for the CSS generic system font family.

Fun trivia: most of the WebKit contributors work from the United States, so the week of the Web Engines Hackfest is probably the only moment during the whole year when there is a sizeable peak of activity during European daytime.

Watching repository activity during the hackfest.

Towards WPE Releases

At Igalia we are making an important investment in the WPE WebKit port, which is specially targeted towards embedded devices. An important milestone for the project was reached last May when the code was moved to the main WebKit repository, and since then it has been receiving the usual stream of improvements and bug fixes. We are now approaching the moment where we feel that it is ready to start making releases, which is another major milestone.

Our plan for WPE is to synchronize with WebKitGTK+ and produce releases for both in parallel. This is important because both ports share a good amount of their code and base dependencies (GStreamer, GLib, libsoup), and our efforts to stabilize the GTK+ port before each release will benefit the WPE one as well, and vice versa. In the coming weeks we will be publishing the first official tarball starting off the WebKitGTK+ 2.18.x stable branch.

Wild WEBKIT PORT appeared!

Syncing the releases for both ports means that:

  • Both stable and unstable releases are done in sync with the GNOME release schedule. Unstable releases start at version X.Y.1, with Y being an odd number.

  • About one month before the release dates, we create a new release branch and from there on we work on stabilizing the code. At least one testing release with version X.Y.90 will be made. This is also what GNOME does, and we will mimic this to avoid confusion for downstream packagers.

  • The stable release will have version X.Y+1.0. Further maintenance releases happen from the release branch as needed. At the same time, a new cycle of unstable releases is started based on the code from the tip of the repository.

Believe it or not, preparing a codebase for its first releases involves quite a lot of work, and this is what took most of my coding time during the Web Engines Hackfest and also the following weeks: from small fixes for build failures all the way to making sure that public API headers (only the correct ones!) are installed and usable, that applications can be properly linked, and that release tarballs can actually be created. Exhausting? Well, do not forget that we need to set up a web server to host the tarballs, a small website, and the documentation. The latter has to be generated (there is still pending work in this regard), and the whole process of making a release scripted.

Still with me? Great. Now for a plot twist: we won't be making proper releases just yet.

APIs, ABIs, and Releases

There is one topic which I did not touch yet: API/ABI stability. Having done a release implies that the public API and ABI which are part of it are stable, and they are not subject to change.

Right after upstreaming WPE we switched over from the cross-port WebKit2 C API and added a new, GLib-based API to WPE. It is remarkably similar (if not the same in many cases) to the API exposed by WebKitGTK+, and this makes us confident that the new API is higher-level, more ergonomic, and better overall. At the same time, we would like third-party developers to give it a try (which is easier having releases) while retaining the possibility of getting feedback and improving the WPE GLib API before setting it in stone (which is not possible after a release).

It is for this reason that at least during the first WPE release cycle we will make preview releases, meaning that there might be API and ABI changes from one release to the next. As usual we will not be making breaking changes in between releases of the same stable series, i.e. code written for 2.18.0 will continue to build unchanged with any subsequent 2.18.X release.

At any rate, we do not expect the API to receive big changes because —as explained above— it mimics the one for WebKitGTK+, which has already proven itself both powerful enough for complex applications and convenient to use for the simpler ones. Due to this, I encourage developers to try out WPE as soon as we have the first preview release fresh out of the oven.

Packaging for Buildroot

At Igalia we routinely work with embedded devices, and often we make use of Buildroot for cross-compilation. Having actual releases of WPE will allow us to contribute a set of build definitions for the WPE WebKit port and its dependencies — something that I have already started working on.

Lately I have been taking care of keeping the WebKitGTK+ packaging for Buildroot up-to-date and it has been delightful to work with such a welcoming community. I am looking forward to having WPE supported there, and to keep maintaining the build definitions for both. This will allow making use of WPE with relative ease, while ensuring that Buildroot users will pick our updates promptly.

Generic System Font

Some applications, like GNOME Web (Epiphany), use a WebKitWebView to display widget-like controls which try to follow the design of the rest of the desktop. Unfortunately for GNOME applications this means Cantarell gets hardcoded in the style sheet —it is the default font after all— and this results in mismatched fonts when the user has chosen a different font for the interface (e.g. in Tweaks). You can see this in the following screen capture of Epiphany:

Web using hardcoded Cantarell and (on hover) -webkit-system-font.

Here I have configured the beautiful Inter UI font as the default for the desktop user interface. Now, if you roll your mouse over the image, you will see how much better it looks to use a consistent font. This change also affects the list of plugins and applications, error messages, and in general all the about: pages.

If you are running GNOME 3.26, this is already fixed using font: menu (part of the CSS spec since ye olde CSS 2.1) — but we can do better: Safari has had support since 2015 for a generic “system” font family, similar to sans-serif or cursive:

/* Using the new generic font family (nice!). */
body {
    font-family: -webkit-system-font;
}

/* Using CSS 2.1 font shorthands (not so nice). */
body {
    font: menu;       /* Pick ALL font attributes... */
    font-size: 12pt;  /* ...then reset some of them. */
    font-weight: 400;
}

During the hackfest I implemented the needed moving parts in WebKitGTK+ by querying the GtkSettings::gtk-font-name property. This can be used in HTML content shown in Epiphany as part of the UI, and to make the Web Inspector use the system font as well.
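
As an illustration of the idea (this is a sketch, not the actual WebKitGTK+ patch), the user's interface font can be read from GtkSettings like this:

#include <gtk/gtk.h>

static gchar *
get_system_ui_font_name (void)
{
    /* Returns e.g. "Cantarell 11" or "Inter UI 10"; caller frees with g_free(). */
    gchar *font_name = NULL;
    g_object_get (gtk_settings_get_default (),
                  "gtk-font-name", &font_name,
                  NULL);
    return font_name;
}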

Web Inspector using Cantarell, the default GNOME 3 font (full size).

I am convinced that users do notice and appreciate attention to detail, even if they do unconsciously, and therefore it is worthwhile to work on this kind of improvements. Plus, as a design enthusiast with a slight case of typographic OCD, I cannot stop myself from noticing inconsistent usage of fonts and my mind is now at ease knowing that opening the Web Inspector won't be such a jarring experience anymore.

Outro

But there's one more thing: On occasion we developers have to debug situations in which a process is seemingly stuck. One useful technique involves running the offending process under the control of a debugger (or, in an embedded device, under gdbserver and controlled remotely), interrupting its execution at intervals, and printing stack traces to try and figure out what is going on. Unfortunately, in some circumstances running a debugger can be difficult or impractical. Wouldn't it be grand if it was possible to interrupt the process without needing a debugger and request a stack trace? Enter “Out-Of-Band Stack Traces” (proof of concept):

  1. The process installs a signal handler using sigaction(2), with the SA_SIGINFO flag set.

  2. On reception of the signal, the kernel interrupts the process (even if it's in an infinite loop), and invokes the signal handler passing an extra pointer to a ucontext_t value, which contains a snapshot of the execution state of the thread that was on the CPU before the signal handler was invoked. This is true for many platforms, including Linux and most BSDs.

  3. The signal handler code can obtain the instruction and stack pointers from the ucontext_t value, and walk the stack to produce a stack trace of the code that was being executed. Jackpot! This is of course architecture dependent, but not difficult to get right (and well tested) for the most common ones like x86 and ARM.

The nice thing about this approach is that the code that obtains the stack trace is built into the program (no rebuilds needed), and it does not even require relaunching the process in a debugger — which can be crucial for analyzing situations which are hard to reproduce, or which do not happen when running inside a debugger. I am looking forward to having some time to integrate this properly into WebKitGTK+ and specially WPE, because it will be most useful in embedded devices.
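
Here is a minimal sketch of the technique for Linux on x86-64 (the signal choice is arbitrary, and a real implementation would walk the whole stack, for instance with libunwind, instead of only printing the interrupted program counter):

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <ucontext.h>
#include <unistd.h>

static void dump_handler(int sig, siginfo_t *info, void *ctx)
{
    ucontext_t *uc = ctx;
    (void) sig; (void) info;
    /* Note: fprintf is not async-signal-safe; fine for a proof of concept. */
    fprintf(stderr, "interrupted at pc=%#llx\n",
            (unsigned long long) uc->uc_mcontext.gregs[REG_RIP]);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = dump_handler;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);  /* "kill -USR1 <pid>" triggers a dump */

    for (;;)                        /* pretend to be a stuck process */
        pause();
}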

by aperez (adrian@perezdecastro.org) at October 20, 2017 11:30 PM

October 17, 2017

Enrique Ocaña

Attending the GStreamer Conference 2017

This weekend I’ll be in Node5 (Prague) presenting our Media Source Extensions platform implementation work in WebKit using GStreamer.

The Media Source Extensions HTML5 specification allows JavaScript to generate media streams for playback and lets the web page have more control on complex use cases such as adaptive streaming.

My plan for the talk is to start with a brief introduction about the motivation and basic usage of MSE. Next I’ll show a design overview of the WebKit implementation of the spec. Then we’ll go through the iterative evolution of the GStreamer platform-specific parts, as well as its implementation quirks and challenges faced during the development. The talk continues with a demo, some clues about the future work and a final round of questions.

Our recent MSE work has been on desktop WebKitGTK+ (the WebKit version powering Epiphany, aka GNOME Web), but we also have MSE working on WPE and optimized for a Raspberry Pi 2. We will be showing it at the Igalia booth, in case you want to see it working live.

I'll also be attending the GStreamer Hackfest the days before. There I plan to work on WebM support in MSE, focusing on any issues in the Matroska demuxer or the VP9/Opus/Vorbis decoders that break our use cases.

See you there!

UPDATE 2017-10-22:

The talk slides are available at https://eocanha.org/talks/gstconf2017/gstconf-2017-mse.pdf and the video is available at https://gstconf.ubicast.tv/videos/media-source-extension-on-webkit (the rest of the talks here).

by eocanha at October 17, 2017 10:48 AM

October 15, 2017

Javier Muñoz

Attending LibreCon 2017

This week I will be attending LibreCon 2017, one of the largest international events on open source technologies. It will be held on 19 and 20 October in Santiago de Compostela (Spain).

This year’s theme is the application of open source technologies in the industrial and primary sector, as well as the new opportunities that these technologies offer in areas like Cloud Computing, Big Data, Internet of Things (IoT) and the Sharing Economy.

I will be delivering one talk, under the sponsorship of my company Igalia, on Ceph Object Storage and its S3 API. I will introduce the Ceph architecture and the basics needed to understand how to build cloud storage products and services based on Ceph/RGW. I will also comment on the most useful and best-supported parts of the S3 API and the tooling that works with Ceph.

See you there!

by Javier at October 15, 2017 10:00 PM

October 02, 2017

Iago Toral

Working with lights and shadows – Part III: rendering the shadows

In the previous post in this series I introduced how to render the shadow map image, which is simply the depth information for the scene from the view point of the light. In this post I will cover how to use the shadow map to render shadows.

The general idea is that for each fragment we produce we compute the light space position of the fragment. In this space, the Z component tells us the depth of the fragment from the perspective of the light source. The next step requires comparing this value with the shadow map value for that same X,Y position. If the fragment's light space Z is larger than the value we read from the shadow map, then it means that this fragment is behind an object that is closer to the light, and therefore we can say that it is in the shadows; otherwise we know it receives direct light.

Changes in the shader code

Let’s have a look at the vertex shader changes required for this:

void main()
{
   vec4 pos = vec4(in_position.x, in_position.y, in_position.z, 1.0);
   out_world_pos = Model * pos;
   gl_Position = Projection * View * out_world_pos;

   [...]

   out_light_space_pos = LightViewProjection * out_world_pos;
} 

The vertex shader code above only shows the code relevant to the shadow mapping technique. Model is the model matrix with the spatial transforms for the vertex we are rendering, View and Projection represent the camera’s view and projection matrices and the LightViewProjection represents the product of the light’s view and projection matrices. The variables prefixed with ‘out’ represent vertex shader outputs to the fragment shader.

The code generates the world space position of the vertex (world_pos) and clip space position (gl_Position) as usual, but then also computes the light space position for the vertex (out_light_space_pos) by applying the View and Projection transforms of the light to the world position of the vertex, which gives us the position of the vertex in light space. This will be used in the fragment shader to sample the shadow map.

The fragment shader will need to:

  1. Apply perspective division to compute NDC coordinates from the interpolated light space position of the fragment. Notice that this process is slightly different between OpenGL and Vulkan, since Vulkan's NDC Z is expected to be in the range [0, 1] instead of OpenGL's [-1, 1].

  2. Transform the X,Y coordinates from NDC space [-1, 1] to texture space [0, 1].

  3. Sample the shadow map and compare the result with the light space Z position we computed for this fragment to decide if the fragment is shadowed.

    The implementation would look something like this:

    float
    compute_shadow_factor(vec4 light_space_pos, sampler2D shadow_map)
    {
       // Convert light space position to NDC
       vec3 light_space_ndc = light_space_pos.xyz /= light_space_pos.w;
    
       // If the fragment is outside the light's projection then it is outside
       // the light's influence, which means it is in the shadow (notice that
       // such sample would be outside the shadow map image)
       if (abs(light_space_ndc.x) > 1.0 ||
           abs(light_space_ndc.y) > 1.0 ||
           abs(light_space_ndc.z) > 1.0)
          return 0.0;
    
       // Translate from NDC to shadow map space (Vulkan's Z is already in [0..1])
       vec2 shadow_map_coord = light_space_ndc.xy * 0.5 + 0.5;
    
       // Check if the sample is in the light or in the shadow
       if (light_space_ndc.z > texture(shadow_map, shadow_map_coord.xy).x)
          return 0.0; // In the shadow
    
       // In the light
       return 1.0;
    }  
    

    The function returns 0.0 if the fragment is in the shadows and 1.0 otherwise. Note that the function also avoids sampling the shadow map for fragments that are outside the light's frustum (and therefore are not recorded in the shadow map texture): we know that any fragment in this situation is shadowed because it is obviously not visible from the light. This assumption is valid for spotlights and point lights, because in these cases the shadow map captures the entire influence area of the light source. For directional lights that affect the entire scene, however, we usually need to limit the light's frustum to the surroundings of the camera, and in that case we probably want to consider fragments outside the frustum as lit instead.

    Now all that remains in the shader code is to use this factor to eliminate the diffuse and specular components for fragments that are in the shadows. To achieve this we can simply multiply these components by the factor computed by this function.

    Changes in the program

    The list of changes in the main program is straightforward: we only need to update the pipeline layout and descriptors to attach the new resources required by the shaders, specifically the light's view projection matrix in the vertex shader (which could be bound as a push constant buffer or a uniform buffer, for example) and the shadow map sampler in the fragment shader.

    Binding the light’s ViewProjection matrix is no different from binding the other matrices we need in the shaders so I won’t cover it here. The shadow map sampler doesn’t really have any mysteries either, but since that is new let’s have a look at the code:

    ...
    VkSampler sampler;
    VkSamplerCreateInfo sampler_info = {};
    sampler_info.sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO;
    sampler_info.addressModeU = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE;
    sampler_info.addressModeV = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE;
    sampler_info.addressModeW = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE;
    sampler_info.anisotropyEnable = false;
    sampler_info.maxAnisotropy = 1.0f;
    sampler_info.borderColor = VK_BORDER_COLOR_INT_OPAQUE_BLACK;
    sampler_info.unnormalizedCoordinates = false;
    sampler_info.compareEnable = false;
    sampler_info.compareOp = VK_COMPARE_OP_ALWAYS;
    sampler_info.magFilter = VK_FILTER_LINEAR;
    sampler_info.minFilter = VK_FILTER_LINEAR;
    sampler_info.mipmapMode = VK_SAMPLER_MIPMAP_MODE_NEAREST;
    sampler_info.mipLodBias = 0.0f;
    sampler_info.minLod = 0.0f;
    sampler_info.maxLod = 100.0f;
    
    VkResult result =
       vkCreateSampler(device, &sampler_info, NULL, &sampler);
    ...
    

    This creates the sampler object that we will use to sample the shadow map image. The address mode fields are not very relevant since our shader ensures that we do not attempt to sample outside the shadow map. We use linear filtering, although that is not mandatory of course, and we select nearest for the mipmap filter because we don't have more than one mip level in the shadow map.

    Next we have to bind this sampler to the actual shadow map image. As usual in Vulkan, we do this with a descriptor update. For that we need to create a descriptor of type VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, and then do the update like this:

    VkDescriptorImageInfo image_info;
    image_info.sampler = sampler;
    image_info.imageView = shadow_map_view;
    image_info.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
    
    VkWriteDescriptorSet writes;
    writes.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
    writes.pNext = NULL;
    writes.dstSet = image_descriptor_set;
    writes.dstBinding = 0;
    writes.dstArrayElement = 0;
    writes.descriptorCount = 1;
    writes.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
    writes.pBufferInfo = NULL;
    writes.pImageInfo = &image_info;
    writes.pTexelBufferView = NULL;
    
    vkUpdateDescriptorSets(ctx->device, 1, &writes, 0, NULL);
    

    A combined image sampler brings together the texture image to sample from (a VkImageView of the image actually) and the description of the filtering we want to use to sample that image (a VkSampler). As with all descriptor sets, we need to indicate its binding point in the set (in our case it is 0 because we have a separate descriptor set layout for this that only contains one binding for the combined image sampler).

    Notice that we need to specify the layout of the image when it will be sampled from the shaders, which needs to be VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL.
    If you revisit the definition of our render pass for the shadow map image, you'll see that we had it automatically transition the shadow map to this layout at the end of the render pass, so we know the shadow map image will be in this layout immediately after it has been rendered, which means we don't need to add barriers to execute the layout transition manually.

    So that's it: with this we have all the pieces and our scene should be rendering shadows now. Unfortunately, we are not quite done yet; if you look at the results, you will notice a lot of dark noise on surfaces that are directly lit. This is an artifact of shadow mapping called self-shadowing or shadow acne. The next section explains how to get rid of it.

    Self-shadowing artifacts

    Eliminating self-shadowing

    Self-shadowing can happen for fragments on surfaces that are directly lit by a light source for which we are producing a shadow map. The reason for this is that for these fragments the Z coordinate in light space should exactly match the value we read from the shadow map for the same X,Y coordinates. In other words, for these fragments we expect:

    light_space_ndc.z == texture(shadow_map, shadow_map_coord.xy).x.
    

    However, due to precision errors that can be generated on both sides of that equation, we may end up with slightly different values on each side, and when the value we produce for light_space_ndc.z ends up being larger than what we read from the shadow map, even if only by a very small amount, the fragment will be marked as shadowed, leading to the result we see in that image.

    The usual way to fix this problem involves adding a small depth offset or bias to the depth values we store in the shadow map so we ensure that we always read a larger value from the shadow map for the fragment. Another way to think about this is to think that when we record the shadow map, we push every object in the scene slightly away from the light source. Unfortunately, this depth offset bias should not be a constant value, since the angle between the surface normals and the vectors from the light source to the fragments also affects the bias value that we should use to correct the self-shadowing.

    Thankfully, GPU hardware provides means to account for this. In Vulkan, when we define the rasterization state of the pipeline we use to create the shadow map, we can add the following:

    VkPipelineRasterizationStateCreateInfo rs;
    ...
    rs.depthBiasEnable = VK_TRUE;
    rs.depthBiasConstantFactor = 4.0f;
    rs.depthBiasSlopeFactor = 1.5f;
    

    Where depthBiasConstantFactor is a constant factor that is automatically added to all depth values produced and depthBiasSlopeFactor is a factor that is used to compute depth offsets also based on the angle. This provides us with the means we need without having to do any extra work in the shaders ourselves to offset the depth values correctly. In OpenGL the same functionality is available via glPolygonOffset().
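
    For reference, a roughly equivalent depth bias setup in OpenGL (the bias units are not strictly identical across APIs, so the values usually need to be tuned per scene) would be:

    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(1.5f /* slope factor */, 4.0f /* constant units */);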

    Notice that the bias values needed to obtain the best results can change for each scene. Also notice that values that are too big can lead to shadows that are “detached” from the objects that cast them, leading to very unrealistic results. This effect is also known as Peter Panning, and can be observed in this image:

    Peter Panning artifacts

    As we can see in the image, we no longer have self-shadowing, but now we have the opposite problem: the shadows cast by the red and blue blocks are visibly incorrect, as if they were being rendered further away from the light source than they should be.

    If the bias values are chosen carefully, then we should be able to get a good result, although sometimes we might need to accept some level of visible self-shadowing or visible Peter Panning:

    Correct shadowing

    The image above shows correct shadowing without any self-shadowing or visible Peter Panning. You may wonder why we can't see some of the shadows from the red light on the floor where the green light is more intense. The reason is that, even though it is not obvious because I don't actually render the objects projecting the lights, the green light is mostly pointing down, so its reflection on the floor (which has normals pointing upwards) is strong enough that the contribution from the red light to the floor pixels in this area is insignificant in comparison, making the shadows cast by the red light barely visible. You can still see some shadowing if you get close enough with the camera though, I promise 😉

    Shadow antialiasing

    The images above show aliasing around at the edges of the shadows. This happens because for each fragment we decide if it is shadowed or not as a boolean decision, and we use that result to fully shadow or fully light the pixel, leading to aliasing:

    Shadow aliasing

    Another thing contributing to the aliasing effect is that a single pixel in the shadow map image can possibly expand to multiple pixels in camera space. That can happen if the camera is looking at an area of the scene that is close to the camera, but far away from the light source for example. In that case, the resolution of that area of the scene in the shadow map is small, but it is large for the camera, meaning that we end up sampling the same pixel from the shadow map to shadow larger areas in the scene as seen by the camera.

    Increasing the resolution of the shadow map image will help with this, but it is not a very scalable solution and can quickly become prohibitive. Alternatively, we can implement something called Percentage-Closer Filtering to produce antialiased shadows. The technique is simple: instead of sampling just one texel from the shadow map, we take multiple samples in its neighborhood and average the results to produce shadow factors that do not need to be exactly 1 or 0, but can be somewhere in between, producing smoother transitions for shadowed pixels on the shadow edges. The more samples we take, the smoother the shadow edges get, but do note that extra samples per pixel also come with a performance cost.

    Smooth shadows with PCF

    This is how we can update our compute_shadow_factor() function to add PCF:

    float
    compute_shadow_factor(vec4 light_space_pos,
                          sampler2D shadow_map,
                          uint shadow_map_size,
                          uint pcf_size)
    {
       vec3 light_space_ndc = light_space_pos.xyz /= light_space_pos.w;
    
       if (abs(light_space_ndc.x) > 1.0 ||
           abs(light_space_ndc.y) > 1.0 ||
           abs(light_space_ndc.z) > 1.0)
          return 0.0;
    
       vec2 shadow_map_coord = light_space_ndc.xy * 0.5 + 0.5;
    
       // compute total number of samples to take from the shadow map
       int pcf_size_minus_1 = int(pcf_size - 1);
       float kernel_size = 2.0 * pcf_size_minus_1 + 1.0;
       float num_samples = kernel_size * kernel_size;
    
       // Counter for the shadow map samples not in the shadow
       float lighted_count = 0.0;
    
       // Take samples from the shadow map
       float shadow_map_texel_size = 1.0 / shadow_map_size;
       for (int x = -pcf_size_minus_1; x <= pcf_size_minus_1; x++)
       for (int y = -pcf_size_minus_1; y <= pcf_size_minus_1; y++) {
          // Compute coordinate for this PFC sample
          vec2 pcf_coord = shadow_map_coord + vec2(x, y) * shadow_map_texel_size;
    
          // Check if the sample is in light or in the shadow
          if (light_space_ndc.z <= texture(shadow_map, pcf_coord.xy).x)
             lighted_count += 1.0;
       }
    
       return lighted_count / num_samples;
    }
    

    We now have a loop where we go through the samples in the neighborhood of the texel and average their respective shadow factors. Notice that because we sample the shadow map in texture space [0, 1], we need to consider the size of the shadow map image to properly compute the coordinates for the texels in the neighborhood, so the application needs to provide this for every shadow map.

    Conclusion

    In this post we discussed how to use the shadow map image to produce shadows in the scene, as well as typical issues that can show up with the shadow mapping technique, such as self-shadowing and aliasing, and how to correct them. This will be the last post in this series. There is a lot more to cover about lighting and shadowing, such as Cascaded Shadow Maps (which I introduced briefly in this other post), but I think (or I hope) that this series provides enough material to give anyone interested in the technique a reference for how to implement it.

    by Iago Toral at October 02, 2017 09:42 AM

    September 30, 2017

    Samuel Iglesias

    II Google Devfest Asturias 2017

    Today I write in the language of Cervantes to tell you that last Wednesday I was invited to give a talk about Vulkan at the II Google DevFest Asturias, organized by GDG Asturias. It is worth noting that this event is part of the VII Semana de Impulso TIC organized by COIIPA and CITIPA, which is a great way to learn about what is being done in the ICT world in the Principality of Asturias.

    My talk focused on explaining which problems Vulkan aims to solve and which concepts this new API introduces for 3D graphics applications. I hope the talk is useful for people who want to get to know Vulkan and already have some background in graphics.

    The slides of the talk are available here.

    GDG Asturias

    September 30, 2017 10:33 AM

    September 19, 2017

    Asumu Takikawa

    IPFIX app for Snabb

    As you know if you’ve been following this blog, at Igalia we build network functions using the Snabb toolkit. When we’re not directly working on customer projects, we often invest time into building new features into Snabb.

    One of these recent investments has been building a basic IP flow export (IPFIX) app for Snabb. The app is now available in the v2017.08 Snabb release. The app’s documentation is up on the web here.

    As you can see from the commit log, Andy Wingo helped out a great deal in building the app (and was responsible for much of the performance engineering).

    What is IP flow export?

    IPFIX refers to a widely used set of tools that let you monitor IP flows in your network. An IP flow is a set of IP packets that share some common characteristics. Often these are described by a flow key composed of a standard 5-tuple of source and destination addresses, the protocol, and the source and destination ports (TCP, UDP, etc.).
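
    For illustration, such a flow key can be pictured as a small, fixed-size record. The sketch below shows one possible layout as a C struct; the field names and packing are just an illustration, not necessarily what the Snabb app uses internally.

    #include <stdint.h>

    /* Hypothetical IPv4 flow key: the 5-tuple that identifies a flow. */
    struct flow_key_ipv4 {
        uint32_t src_ip;    /* source IPv4 address */
        uint32_t dst_ip;    /* destination IPv4 address */
        uint16_t src_port;  /* source transport port */
        uint16_t dst_port;  /* destination transport port */
        uint8_t  protocol;  /* IP protocol (TCP = 6, UDP = 17, ...) */
    } __attribute__((packed));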

    Monitoring flows can be useful for traffic measurement and management, usage-based billing, and other use cases (many of which are spelled out in RFC 3917).

    You may have also heard of “Netflow”, which is the trade name used by Cisco for these tools. IPFIX is the standardized version (see RFC 5470 and related RFCs) that is backwards compatible with Netflow.

    (A great long-form overview of the topic is “Flow Monitoring Explained” by Hofstede et al., which was very helpful in designing the app.)

    The basic architecture of IPFIX consists of a flow metering process which monitors the traffic, an exporter that communicates the data collected by the meter, and finally a collector that aggregates the data. In practice, the metering process and exporter are often combined into one program (so I’ll just refer to it as an exporter or as a probe).

    Here’s some ASCII art from the RFC showing the structure:

                                 +----------------+     +----------------+
                                 |[*Application 1]| ... |[*Application n]|
                                 +--------+-------+     +-------+--------+
                                          ^                     ^
                                          |                     |
                                          + = = = = -+- = = = = +
                                                     ^
                                                     |
       +------------------------+            +-------+------------------+
       |IPFIX Exporter          |            | Collector(1)             |
       |[Exporting Process(es)] |<---------->| [Collecting Process(es)] |
       +------------------------+            +--------------------------+
               ....                                  ....
       +------------------------+           +---------------------------+
       |IPFIX Device(i)         |           | Collector(j)              |
       |[Observation Point(s)]  |<--------->| [Collecting Process(es)]  |
       |[Metering Process(es)]  |     +---->| [*Application(s)]         |
       |[Exporting Process(es)] |     |     +---------------------------+
       +------------------------+     .
              ....                    .              ....
       +------------------------+     |     +--------------------------+
       |IPFIX Device(m)         |     |     | Collector(n)             |
       |[Observation Point(s)]  |<----+---->| [Collecting Process(es)] |
       |[Metering Process(es)]  |           | [*Application(s)]        |
       |[Exporting Process(es)] |           +--------------------------+
       +------------------------+
    

    (what the RFC calls a “Device” contains some number of “metering processes”)

    The exporter might track information such as the total number of packets, payload sizes, and start/end times for a unique flow (these are called information elements and are standardized by the IANA).

    The exporter periodically sends its summarized data to a flow collector, which uses some kind of database to keep and track all of the flow information (unlike the exporter which may evict inactive flows). The collector can present this information to users in a variety of ways (e.g., a web UI).

    An exporter communicates with a collector using the IPFIX protocol, so that you can use an exporter with any off-the-shelf collector.

    Snabb app

    For Snabb, we implemented an IPFIX exporter as an app that can be integrated with other apps. For convenience, we provide a snabb ipfix probe commandline program that runs an exporter with a simple configuration.

    The flow keys and record fields to be collected are described using Lua tables, like the following:

    v4 = make_template_info {
       id     = 256,
       filter = "ip",
       keys   = { "sourceIPv4Address",
                  "destinationIPv4Address",
                  "protocolIdentifier",
                  "sourceTransportPort",
                  "destinationTransportPort" },
       values = { "flowStartMilliseconds",
                  "flowEndMilliseconds",
                  "packetDeltaCount",
                  "octetDeltaCount"}
    }
    

    This table describes a flow record that’s then used to dynamically generate the appropriate FFI data structures and configure the control flow of the app. The fields are described using the names from the IANA information element table.

    Since the exporter collects some information from all of the IP packets going through it, it has to be performant and use minimal resources.

    Making IPFIX perform well

    Performance was the most difficult part of implementing the IPFIX app. The core of the app is fairly simple. For each packet, it looks at the fields that define a flow (currently we just support the usual 5-tuple) and uses this as a flow key to index a ctable.

    Since ctables are implemented as a hashtable, this means that the hashing can be a key bottleneck for the app. This isn’t really surprising since hashing is a key operation that is often optimized and also delegated to hardware in networking.

    It turns out that the hashing algorithm used by ctable performed well with keys of certain sizes (4 or 8 bytes), but not with larger keys such as flow keys. We tried out several hash algorithms (e.g., FNV hashing) and got some incremental improvements. In the end, Andy was able to get better performance out of implementing SipHash using DynASM.
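
    To give a flavor of what hashing a flow key involves, here is a sketch of FNV-1a (one of the algorithms we experimented with) applied to the raw bytes of a packed key. This is only an illustration of the idea; the app ultimately moved to a SipHash implementation generated with DynASM.

    #include <stddef.h>
    #include <stdint.h>

    /* FNV-1a over an arbitrary byte buffer, e.g. a packed flow key. */
    static uint32_t fnv1a_hash(const uint8_t *data, size_t len)
    {
        uint32_t hash = 2166136261u;      /* FNV offset basis */
        for (size_t i = 0; i < len; i++) {
            hash ^= data[i];
            hash *= 16777619u;            /* FNV prime */
        }
        return hash;
    }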

    Andy also implemented a bunch of other performance improvements, such as separating out the IPv4 and IPv6 data paths and re-organizing the multiple original apps into a single app that contains several mini-queues and work loops. In the end, these improvements got the app to line-rate performance in our synthetic benchmark for most packet sizes (performance on 64-byte packet sizes needs more work).

    In terms of real-world performance, we still need to do more work but are making progress thanks to our early adopters. Alexander Gall from SWITCH has been providing us with valuable feedback from running and hacking on the app with production data.

    Future improvements

    At this point, we have an IPFIX app that can be integrated into other Snabb solutions or used standalone on commodity Intel hardware. There’s still a lot of work that can be done on it though. One idea for improvement on the performance side is to parallelize it using the RSS feature on 10G network cards. RSS (receive-side scaling) lets you use multiple processes running in parallel on several receive queues on a single NIC.

    Conveniently, it turns out that at Igalia we’ve also been working on improving Snabb’s network drivers to make it easier to use RSS. There’s a good chance that we can parallelize IPFIX very easily, since there’s minimal coordination that’s needed between IPFIX instances.

    There are a number of other improvements we could make too. Probably the most useful is to add support for more information elements, and to make the observed IEs more configurable.

    Another area for improvement is the app configuration. Snabb has support for app configuration with YANG schemas and the IETF has a schema for IPFIX that we could use.

    Another limitation is that the IPFIX app currently just exports over UDP to a collector. The RFCs technically require support for SCTP as well (adding that could be a lot of work; maybe we would offload it to an existing userspace library).

    Final thoughts

    Working on IPFIX was pretty fun overall. One of the rewarding aspects of working on the IPFIX app is that it’s relatively easy to test it and see that it’s doing something. For example, you can hook it up to Wireshark and manually inspect the IPFIX packets to see what’s going on.

    You can also plug it into an off-the-shelf flow collector like nfdump and see some useful output. In fact, we have a test written using nix-shell that will spawn a shell in a new Nix environment with nfdump installed and test the app with it. Nix can be very nice for this kind of test environment setup.

    In any case, please feel free to try out the app. We would appreciate any feedback or bug reports!

    by Asumu Takikawa at September 19, 2017 01:23 AM

    September 14, 2017

    Jacobo Aragunde

    Attending BlinkOn 8

    Next week I will be in Tokyo to attend BlinkOn 8! It will be a great opportunity to meet the Chromium community and share what we are doing.

    Godzilla at Shinjuku

    I will give a lightning talk about the challenges of making Chromium run on embedded platforms. I hope to spark the curiosity of the audience in this complex field!

    EDIT: some pictures from the event:

    by Jacobo Aragunde Pérez at September 14, 2017 05:28 PM

    September 09, 2017

    Carlos García Campos

    WebDriver support in WebKitGTK+ 2.18

    WebDriver is an automation API to control a web browser. It allows creating automated tests for web applications independently of the browser and platform. WebKitGTK+ 2.18, which will be released next week, includes an initial implementation of the WebDriver specification.

    WebDriver in WebKitGTK+

    There’s a new process (WebKitWebDriver) that works as the server, processing the clients’ requests to spawn and control the web browser. The WebKitGTK+ driver is not tied to any specific browser; it can be used with any WebKitGTK+ based browser, but it uses MiniBrowser as the default. The driver uses the same remote controlling protocol used by the remote inspector to communicate with and control the web browser instance. The implementation is not complete yet, but it’s enough for what many users need.

    The clients

    The web application tests are the clients of the WebDriver server. The Selenium project provides APIs for different languages (Java, Python, Ruby, etc.) to write the tests. Python is the only language supported by WebKitGTK+ for now. It’s not yet upstream, but we hope it will be integrated soon. In the meantime you can use our fork on GitHub. Let’s see an example to understand how it works and what we can do.

    from selenium import webdriver
    
    # Create a WebKitGTK driver instance. It spawns WebKitWebDriver 
    # process automatically that will launch MiniBrowser.
    wkgtk = webdriver.WebKitGTK()
    
    # Let's load the WebKitGTK+ website.
    wkgtk.get("https://www.webkitgtk.org")
    
    # Find the GNOME link.
    gnome = wkgtk.find_element_by_partial_link_text("GNOME")
    
    # Click on the link. 
    gnome.click()
    
    # Find the search form. 
    search = wkgtk.find_element_by_id("searchform")
    
    # Find the first input element in the search form.
    text_field = search.find_element_by_tag_name("input")
    
    # Type epiphany in the search field and submit.
    text_field.send_keys("epiphany")
    text_field.submit()
    
    # Let's count the links in the contents div to check we got results.
    contents = wkgtk.find_element_by_class_name("content")
    links = contents.find_elements_by_tag_name("a")
    assert len(links) > 0
    
    # Quit the driver. The session is closed so MiniBrowser 
    # will be closed and then WebKitWebDriver process finishes.
    wkgtk.quit()
    

    Note that this is just an example to show how to write a test and what kind of things you can do; there are better ways to achieve the same results, and it depends on the current source of public websites, so it might not work in the future.

    Web browsers / applications

    As I said before, WebKitWebDriver process supports any WebKitGTK+ based browser, but that doesn’t mean all browsers can automatically be controlled by automation (that would be scary). WebKitGTK+ 2.18 also provides new API for applications to support automation.

    • First of all the application has to explicitly enable automation using webkit_web_context_set_automation_allowed(). It’s important to know that the WebKitGTK+ API doesn’t allow enabling automation in several WebKitWebContexts at the same time. The driver will spawn the application when a new session is requested, so the application should enable automation at startup. It’s recommended that applications add a new command line option to enable automation, and only enable it when provided.
    • After launching the application the driver will request the browser to create a new automation session. The signal “automation-started” will be emitted in the context to notify the application that a new session has been created. If automation is not allowed in the context, the session won’t be created and the signal won’t be emitted either.
    • A WebKitAutomationSession object is passed as parameter to the “automation-started” signal. This can be used to provide information about the application (name and version) to the driver that will match them with what the client requires accepting or rejecting the session request.
    • The WebKitAutomationSession will emit the signal “create-web-view” every time the driver needs to create a new web view. The application can then create a new window or tab containing the new web view that should be returned by the signal. This signal will always be emitted even if the browser already has an initial web view open; in that case it’s recommended to return the existing empty web view.
    • Web views are also automation aware: similar to ephemeral web views, web views that allow automation should be created with the constructor property “is-controlled-by-automation” enabled (see the sketch after this list).
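
    Putting these points together, a minimal sketch of the browser side could look like the following. This is not code from any existing browser; the signal and callback signatures are my reading of the description above and should be checked against the WebKitGTK+ 2.18 documentation.

    #include <gtk/gtk.h>
    #include <webkit2/webkit2.h>

    /* Called when the driver needs a new web view for the automation session. */
    static WebKitWebView *
    create_web_view_cb (WebKitAutomationSession *session, gpointer user_data)
    {
        /* Web views controlled by automation must be created with the
         * "is-controlled-by-automation" construct property enabled. */
        WebKitWebView *view = WEBKIT_WEB_VIEW (g_object_new (WEBKIT_TYPE_WEB_VIEW,
            "is-controlled-by-automation", TRUE, NULL));

        GtkWidget *window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
        gtk_window_set_title (GTK_WINDOW (window), "Automation mode");
        gtk_container_add (GTK_CONTAINER (window), GTK_WIDGET (view));
        gtk_widget_show_all (window);
        return view;
    }

    /* Emitted on the context when the driver creates a new automation session. */
    static void
    automation_started_cb (WebKitWebContext *context,
                           WebKitAutomationSession *session, gpointer user_data)
    {
        g_signal_connect (session, "create-web-view",
                          G_CALLBACK (create_web_view_cb), NULL);
    }

    /* Call this at startup, only when automation was requested, e.g. via a
     * dedicated command line option. */
    static void
    enable_automation (void)
    {
        WebKitWebContext *context = webkit_web_context_get_default ();
        webkit_web_context_set_automation_allowed (context, TRUE);
        g_signal_connect (context, "automation-started",
                          G_CALLBACK (automation_started_cb), NULL);
    }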

    This is the new API that applications need to implement to support WebDriver. It’s designed to be as safe as possible, but there are many things that can’t be controlled by WebKitGTK+, so we have several recommendations for applications that want to support automation:

    • Add a way to enable automation in your application at startup, like a command line option, that is disabled by default. Never allow automation in a normal application instance.
    • Enabling automation is not the only thing the application should do, so add an automation mode to your application.
    • Add visual feedback when in automation mode, like changing the theme, the window title or whatever that makes clear that a window or instance of the application is controllable by automation.
    • Add a message to explain that the window is being controlled by automation and the user is not expected to use it.
    • Use ephemeral web views in automation mode.
    • Use a temporary user profile in automation mode; do not allow automation to change the history, bookmarks, etc. of an existing user.
    • Do not load any homepage in automation mode, just keep an empty web view (about:blank) that can be used when a new web view is requested by automation.

    The WebKitGTK client driver

    Applications need to implement the new automation API to support WebDriver, but the WebKitWebDriver process doesn’t know how to launch the browsers. That information should be provided by the client using the WebKitGTKOptions object. The driver constructor can receive an instance of a WebKitGTKOptions object, with the browser information and other options. Let’s see how it works with an example to launch epiphany:

    from selenium import webdriver
    from selenium.webdriver import WebKitGTKOptions
    
    options = WebKitGTKOptions()
    options.browser_executable_path = "/usr/bin/epiphany"
    options.add_browser_argument("--automation-mode")
    epiphany = webdriver.WebKitGTK(browser_options=options)
    

    Again, this is just an example; Epiphany doesn’t even support WebDriver yet. Browsers or applications could create their own drivers on top of the WebKitGTK+ one to make it more convenient to use.

    from selenium import webdriver
    epiphany = webdriver.Epiphany()
    

    Plans

    During the next release cycle, we plan to do the following tasks:

    • Complete the implementation: add support for all commands in the spec and complete the ones that are partially supported now.
    • Add support for running the WPT WebDriver tests in the WebKit bots.
    • Add a WebKitGTK driver implementation for other languages in Selenium.
    • Add support for automation in Epiphany.
    • Add WebDriver support to WPE/dyz.

    by carlos garcia campos at September 09, 2017 05:33 PM

    September 06, 2017

    Frédéric Wang

    Review of Igalia's Web Platform activities (H1 2017)

    Introduction

    For many years Igalia has been committed to the improvement of the Web Platform and has dedicated efforts to it in all open-source Web Engines (Chromium, WebKit, Servo, Gecko) and JavaScript implementations (V8, SpiderMonkey, ChakraCore, JSC). We have been working on the implementation and standardization of some important technologies (CSS Grid/Flexbox, ECMAScript, WebRTC, WebVR, ARIA, MathML, etc). This blog post contains a review of these activities performed during the first half (and a bit more) of 2017.

    Projects

    CSS

    A few years ago Bloomberg and Igalia started a collaboration to implement a new layout model for the Web Platform. Bloomberg had complex layout requirements and what the Web provided was not enough and caused performance issues. CSS Grid Layout seemed to be the right choice, a feature that would provide such complex designs with more flexibility than the currently available methods.

    We’ve been implementing CSS Grid Layout in Blink and WebKit, initially behind some flags as an experimental feature. This year, after some coordination effort to ensure interoperability (talking to the different parties involved like browser vendors, the CSS Working Group and the web authors community), it has been shipped by default in Chrome 58 and Safari 10.1. This is a huge step for the layout on the web, and modern websites will benefit from this new model and enjoy all the features provided by CSS Grid Layout spec.

    Since CSS Grid Layout shares the same alignment properties as the CSS Flexible Box feature, a new spec has been defined to generalize alignment for all the layout models. We started implementing this new spec as part of our work on Grid, with Grid being the first layout model to support it.

    Finally, we worked on other minor CSS features in Blink such as caret-color or :focus-within and also several interoperability issues related to Editing and Selection.

    MathML

    MathML is a W3C recommendation to represent mathematical formulae that has been included in many other standards such as ISO/IEC, HTML5, ebook and office formats. There are many tools available to handle it, including various assistive technologies as well as generators from the popular LaTeX typesetting system.

    After the improvements we performed in WebKit’s MathML implementation, we have regularly been in contact with Google to see how we can implement MathML in Chromium. Early this year, we had several meetings with Google’s layout team to discuss this in further detail. We agreed that MathML is an important feature to consider for users and that the right approach would be to rely on the new LayoutNG model currently being implemented. We created a prototype for a small LayoutNG-based MathML implementation as a proof-of-concept and as a basis for future technical discussions. We are going to follow up on this after the end of Q3, once Chromium’s layout team has made more progress on LayoutNG.

    Servo

    Servo is Mozilla’s next-generation web content engine based on Rust, a language that guarantees memory safety. Servo relies on a Rust project called WebRender which replaces the typical rasterizer and compositor duo in the web browser stack. WebRender makes extensive use of GPU batching to achieve very exciting performance improvements in common web pages. Mozilla has decided to make WebRender part of the Quantum Render project.

    We’ve had the opportunity to collaborate with Mozilla for a few years now, focusing on the graphics stack. Our work has focused on bringing full support for CSS stacking and clipping to WebRender, so that it will be available in both Servo and Gecko. This has involved creating a data structure similar to what WebKit calls the “scroll tree” in WebRender. The scroll tree divides the scene into independently scrolled elements, clipped elements, and various transformation spaces defined by CSS transforms. The tree allows WebRender to handle page interaction independently of page layout, allowing maximum performance and responsiveness.

    WebRTC

    WebRTC is a collection of communications protocols and APIs that enable real-time communication over peer-to-peer connections. Typical use cases include video conferencing, file transfer, chat, or desktop sharing. Igalia has been working on the WebRTC implementation in WebKit and this development is currently sponsored by Metrological.

    This year we have continued the implementation effort in WebKit for the WebKitGTK and WebKit WPE ports, as well as the maintenance of two test servers for WebRTC: Ericsson’s p2p and Google’s apprtc. Finally, a lot of progress has been done to add support for Jitsi using the existing OpenWebRTC backend.

    Since OpenWebRTC development is not an active project anymore, and given that libwebrtc is gaining traction in both Blink and the WebRTC implementation of WebKit for Apple software, we are taking the first steps to replace the original OpenWebRTC-based WebRTC implementation in WebKitGTK+ with a new one based on libwebrtc. Hopefully, this way we will share more code between platforms and get more robust support of WebRTC for the end users. GStreamer integration in this new implementation is an issue we will have to study, as it’s not built into libwebrtc. libwebrtc offers many services, but not every WebRTC implementation uses all of them. This seems to be the case for the Apple WebRTC implementation, and it may become our case too if we need tighter integration with GStreamer or hardware decoding.

    WebVR

    WebVR is an API that provides support for virtual reality devices in Web engines. Implementations and devices are currently actively developed by browser vendors and it looks like it is going to be a huge thing. Igalia has started to investigate this topic to see how we can join that effort. This year, we have been in discussions with Mozilla, Google and Apple to see how we could help in the implementation of WebVR on Linux. We decided to start experimenting with an implementation within WebKitGTK+. We announced our intention on the webkit-dev mailing list and got encouraging feedback from Apple and the WebKit community.

    ARIA

    ARIA defines a way to make Web content and Web applications more accessible to people with disabilities. Igalia strengthened its ongoing commitment to the W3C: Joanmarie Diggs joined Richard Schwerdtfeger as a co-Chair of the W3C’s ARIA working group, and became editor of the Core Accessibility API Mappings, Digital Publishing Accessibility API Mappings (https://w3c.github.io/aria/dpub-aam/dpub-aam.html), and Accessible Name and Description: Computation and API Mappings specifications. Her main focus over the past six months has been to get ARIA 1.1 transitioned to Proposed Recommendation through a combination of implementation and bugfixing in WebKit and Gecko, creation of automated testing tools to verify platform accessibility API exposure in GNU/Linux and macOS, and working with fellow Working Group members to ensure the platform mappings stated in the various “AAM” specs are complete and accurate. We will provide more information about these activities after ARIA 1.1 and the related AAM specs are further along on their respective REC tracks.

    Web Platform Predictability for WebKit

    The AMP Project has recently sponsored Igalia to improve WebKit’s implementation of the Web platform. We have worked on many issues, the main ones being:

    • Frame sandboxing: Implementing sandbox values to allow trusted third-party resources to open unsandboxed popups or restrict unsafe operations of malicious ones.
    • Frame scrolling on iOS: Addressing issues with scrollable nodes; trying to move to a more standard and interoperable approach with scrollable iframes.
    • Root scroller: Finding a solution to the old interoperability issue about how to scroll the main frame; considering a new rootScroller API.

    This project aligns with Web Platform Predictability which aims at making the Web more predictable for developers by improving interoperability, ensuring version compatibility and reducing footguns. It has been a good opportunity to collaborate with Google and Apple on improving the Web. You can find further details in this blog post.

    JavaScript

    Igalia has been involved in design, standardization and implementation of several JavaScript features in collaboration with Bloomberg and Mozilla.

    In implementation, Bloomberg has been sponsoring implementation of modern JavaScript features in V8, SpiderMonkey, JSC and ChakraCore, in collaboration with the open source community:

    • Implementation of many ES6 features in V8, such as generators, destructuring binding and arrow functions
    • Async/await and async iterators and generators in V8 and some work in JSC
    • Optimizing SpiderMonkey generators
    • Ongoing implementation of BigInt in SpiderMonkey and class field declarations in JSC

    On the design/standardization side, Igalia is active in TC39 with Bloomberg’s support.

    In partnership with Mozilla, Igalia has been involved in various JavaScript standard library features for internationalization: specification, implementation in V8, code reviews in other JavaScript engines, as well as work with the underlying ICU library.

    Other activities

    Preparation of Web Engines Hackfest 2017

    Igalia has been organizing and hosting the Web Engines Hackfest since 2009. This event under an unconference format has been a great opportunity for Web Engines developers to meet, discuss and work together on the web platform and on web engines in general. We announced the 2017 edition and many developers already confirmed their attendance. We would like to thank our sponsors for supporting this event and we are looking forward to seeing you in October!

    Coding Experience

    Emilio Cobos has completed his coding experience program on the implementation of web standards. He has been working on the implementation of “display: contents” in Blink, but some work is pending due to unresolved CSS WG issues. He also started the corresponding work in WebKit, although that implementation is still very partial. It has been a pleasure to mentor a skilled hacker like Emilio and we wish him the best for his future projects!

    New Igalians

    During this semester we have been glad to welcome new igalians who will help us to pursue Web platform developments:

    • Daniel Ehrenberg joined Igalia in January. He is an active contributor to the V8 JavaScript engine and has been representing Igalia at the ECMAScript TC39 meetings.
    • Alicia Boya joined Igalia in March. She has experience in many areas of computing, including web development, computer graphics, networks, security, and performance-oriented software design, which we believe will be valuable for our Web platform activities.
    • Ms2ger joined Igalia in July. He is a well-known hacker of the Mozilla community and has wide experience in both Gecko and Servo. He has noticeably worked in DOM implementation and web platform test automation.

    Conclusion

    Igalia has been involved in a wide range of Web Platform technologies, ranging from JavaScript and layout engines to accessibility and multimedia features. Efforts have been made in all parts of the process:

    • Participation in standardization bodies (W3C, TC39).
    • Elaboration of conformance tests (web-platform-tests, test262).
    • Implementation and bug fixes in all open source web engines.
    • Discussion with users, browser vendors and other companies.

    Although some of this work has been sponsored by Google or Mozilla, it is important to highlight how external companies (other than browser vendors) can make good contributions to the Web Platform, playing an important role in its evolution. Alan Stearns already pointed out the responsibility of the Web Platform users on the evolution of CSS while Rachel Andrew emphasized how any company or web author can effectively contribute to the W3C in many ways.

    As mentioned in this blog post, Bloomberg is an important contributor to several open source projects and they’ve been a key player in the development of CSS Grid Layout and JavaScript. Similarly, Metrological’s support has been instrumental for the implementation of WebRTC in WebKit. We believe others could follow their example and we are looking forward to seeing more companies sponsoring Web Platform developments!

    September 06, 2017 10:00 PM

    August 31, 2017

    Xabier Rodríguez Calvar

    Some rough numbers on WebKit code

    My wife asked me for some rough LOC numbers on the WebKit project and I think I could share them with you here as well. They come from r221232. As I’ll take into account some generated code it is relevant to mention that I built WebKitGTK+ with the default CMake options.

    The first thing I did was running sloccount Source, and I got the following numbers:

    cpp: 2526061 (70.57%)
    ansic: 396906 (11.09%)
    asm: 207284 (5.79%)
    javascript: 175059 (4.89%)
    java: 74458 (2.08%)
    perl: 73331 (2.05%)
    objc: 44422 (1.24%)
    python: 38862 (1.09%)
    cs: 13011 (0.36%)
    ruby: 11605 (0.32%)
    xml: 11396 (0.32%)
    sh: 3747 (0.10%)
    yacc: 2167 (0.06%)
    lex: 1007 (0.03%)
    lisp: 89 (0.00%)
    php: 10 (0.00%)

    These numbers do not include IDL code, so I did some grepping to get that number myself, which gave me 19632 IDL lines:

    $ find Source/ -name "*.idl" | xargs cat | grep -ve "^[[:space:]]*\/\*" -ve "^[[:space:]]*\*" -ve "^[[:space:]]*$" -ve "^[[:space:]]*\[$" -ve "^[[:space:]]*};$" | wc -l
    19632

    The interesting part of the IDL files is that they are used to generate code so those 19632 IDL lines expand to:

    ansic: 699140 (65.25%)
    cpp: 368720 (34.41%)
    python: 1492 (0.14%)
    xml: 1040 (0.10%)
    javascript: 883 (0.08%)
    asm: 169 (0.02%)
    perl: 11 (0.00%)

    Let’s have a look now at the LayoutTests (they test the functionality of WebCore + the platform). Tests are composed mainly of HTML files, so if you run sloccount LayoutTests you get:

    javascript: 401159 (76.74%)
    python: 87231 (16.69%)
    xml: 22978 (4.40%)
    php: 4784 (0.92%)
    ansic: 3661 (0.70%)
    perl: 2726 (0.52%)
    sh: 199 (0.04%)

    It’s quite interesting to see that sloccount does not consider HTML, which is quite relevant when you’re testing a web engine, so again we have to count it manually (thanks to Carlos López who helped me to properly grep here, as some binary lines were giving me a headache when getting the numbers):

    $ find LayoutTests/ -name "*.html" -print0 | xargs -0 cat | strings | grep -Pv "^[[:space:]]*$" | wc -l
    2205690

    You can see 2205690 “meaningful lines” that combine HTML + the other languages that you can see above. I can’t subtract here to get just the HTML lines because the numbers above take into account files with a different extension than HTML, though many of them do include other languages, especially JavaScript.

    But the LayoutTests do not include only pure WebKit tests. There are some imported ones, so it might be interesting to run the same procedure under LayoutTests/imported to see which ones are imported and not written directly into the WebKit project. I emphasize that because they can be written by WebKit developers in other repositories; actually I can present myself and Youenn Fablet as an example, as we wrote some tests that were finally moved into the specification and included back later when imported. So again, sloccount LayoutTests/imported:

    python: 84803 (59.99%)
    javascript: 51794 (36.64%)
    ansic: 3661 (2.59%)
    php: 575 (0.41%)
    xml: 250 (0.18%)
    sh: 199 (0.14%)
    perl: 86 (0.06%)

    The same procedure to count HTML + other stuff lines inside that directory gives a number of 295490:

    $ find LayoutTests/imported/ -name "*.html" -print0 | xargs -0 cat | strings | grep -Pv "^[[:space:]]*$" | wc -l
    295490

    There are also some other tests that we can talk about, for example the JSTests. I’ll just give the numbers summed up per language, plus the manually counted HTML code (if you made it here, you know the drill already):

    javascript: 1713200 (98.64%)
    xml: 20665 (1.19%)
    perl: 2449 (0.14%)
    python: 421 (0.02%)
    ruby: 56 (0.00%)
    sh: 38 (0.00%)
    HTML+stuff: 997

    ManualTests:

    javascript: 297 (41.02%)
    ansic: 187 (25.83%)
    java: 118 (16.30%)
    xml: 103 (14.23%)
    php: 10 (1.38%)
    perl: 9 (1.24%)
    HTML+stuff: 16026

    PerformanceTests:

    javascript: 950916 (83.12%)
    cpp: 147194 (12.87%)
    ansic: 38540 (3.37%)
    asm: 5466 (0.48%)
    sh: 872 (0.08%)
    ruby: 419 (0.04%)
    perl: 348 (0.03%)
    python: 325 (0.03%)
    xml: 5 (0.00%)
    HTML+stuff: 238002

    TestsWebKitAPI:

    cpp: 44753 (99.45%)
    ansic: 163 (0.36%)
    objc: 76 (0.17%)
    xml: 7 (0.02%)
    javascript: 1 (0.00%)
    HTML+stuff: 3887

    And this is all. Remember that these are just some rough statistics, not a “scientific” paper.

    Update:

    In her expert opinion, in the WebKit project we are devoting around 50% of the total LOC to testing, which makes it a software engineering “textbook” project regarding testing and I think we can be proud of it!

    by calvaris at August 31, 2017 09:03 AM

    August 29, 2017

    Frédéric Wang

    The AMP Project and Igalia working together to improve WebKit and the Web Platform

    TL;DR

    The AMP Project and Igalia have recently been collaborating to improve WebKit’s implementation of the Web platform. Both teams are committed to make the Web better and we expect that all developers and users will benefit from this effort. In this blog post, we review some of the bug fixes and features currently being considered:

    • Frame sandboxing: Implementing sandbox values to allow trusted third-party resources to open unsandboxed popups or restrict unsafe operations of malicious ones.

    • Frame scrolling on iOS: Trying to move to a more standard and interoperable approach via iframe elements; addressing miscellaneous issues with scrollable nodes (e.g. visual artifacts while scrolling, view not scrolled when using “Find Text”…).

    • Root scroller: Finding a solution to the old interoperability issue about how to scroll the main frame; considering a new rootScroller API.

    Some demo pages for frame sandboxing and scrolling are also available if you wish to test features discussed in this blog post.

    Introduction

    AMP is an open-source project to enable websites and ads that are consistently fast, beautiful and high-performing across devices and distribution platforms. Several interoperability bugs and missing features in WebKit have caused problems to AMP users and to Web developers in general. Although it is possible to add platform-specific workarounds to AMP, the best way to help the Web Platform community is to directly fix these issues in WebKit, so that everybody can benefit from these improvements.

    Igalia is a consulting company with a team dedicated to Web Platform developments in all open-source Web Engines (Chromium, WebKit, Servo, Gecko) working in the implementation and standardization of miscellaneous technologies (CSS Grid/flexbox, ECMAScript, WebRTC, WebVR, ARIA, MathML, etc). Given this expertise, the AMP Project sponsored Igalia so that they can lead these developments in WebKit. It is worth noting that this project aligns with the Web Predictability effort supported by both Google and Igalia, which aims at making the Web more predictable for developers. In particular, the following aspects are considered:

    • Interoperability: Effort is made to write Web Platform Tests (WPT), to follow Web standards and ensure consistent behaviors between web engines or operating systems.
    • Compatibility: Changes are carefully analyzed using telemetry techniques or user feedback in order to avoid breaking compatibility with previous versions of WebKit.
    • Reducing footguns: Removals of non-standard features (e.g. CSS vendor prefixes) are attempted while new features are carefully introduced.

    Below we provide further description of the WebKit improvements, showing concretely how the above principles are followed.

    Frame sandboxing

    A sandbox attribute can be specified on the iframe element in order to enable a set of restrictions on any content it hosts. These conditions can be relaxed by specifying a list of values such as allow-scripts (to allow javascript execution in the frame) or allow-popups (to allow the frame to open popups). By default, the same restrictions apply to a popup opened by a sandboxed frame.

    iframe sandboxing
    Figure 1: Example of sandboxed frames (Can they navigate their top frame or open popups? Are such popups also sandboxed?)

    However, sometimes this behavior is not wanted. Consider for example the case of an advertisement inside a sandboxed frame. If a popup is opened from this frame then it is likely that a non-sandboxed context is desired on the landing page. In order to handle this use case, a new allow-popups-to-escape-sandbox value has been introduced. The value is now supported in Safari Technology Preview 34.

    While performing that work, it was noticed that some WPT tests for the sandbox attribute were still failing. It turns out that WebKit does not really follow the rules to allow navigation. More specifically, navigating a top context is never allowed when that context corresponds to an opened popup. We have made some changes to WebKit so that it behaves more closely to the specification. This is integrated into Safari Technology Preview 35 and you can for example try this W3C test. Note that this test requires changing the preferences to allow popups.

    It is worth noting that web engines may slightly depart from the specification regarding the previously mentioned rules. In particular, WebKit checks a same-origin condition to be sure that one frame is allowed to navigate another one. WebKit has always contained a special case to ignore this condition when a sandboxed frame with the allow-top-navigation flag tries to navigate its top frame. This feature, sometimes known as “frame busting,” has been used by third-party resources to perform malicious auto-redirecting. As a consequence, Chromium developers proposed to restrict frame busting to the case where the navigation is triggered by a user gesture.

    According to Chromium’s telemetry, frame busting without a user gesture is very rare. But when experimenting with the behavior change of allow-top-navigation, several regressions were reported. Hence it was instead decided to introduce the allow-top-navigation-by-user-activation flag in order to provide this improved safety context while still preserving backward compatibility. We implemented this feature in WebKit and it is now available in Safari Technology Preview 37.

    Finally, another proposed security improvement is to use an allow-modals flag to explicitly allow sandboxed frames to display modal dialogs (with alert, prompt, etc). That is, the default behavior for sandboxed frames will be to forbid such modal dialogs. Again, such a change of behavior must be done with care. Experiments in Chromium showed that the usage of modal dialogs in sandboxed frames is very low and no users complained. Hence we implemented that behavior in WebKit and the feature should arrive in Safari Technology Preview soon.

    Check out the frame sandboxing demos if you want to test the new allow-popups-to-escape-sandbox, allow-top-navigation-by-user-activation and allow-modals flags.

    Frame scrolling on iOS

    Apple’s UI choice was to (almost) always “flatten” (expand) frames so that users do not need to scroll them. The rationale for this is that it avoids being trapped inside a hierarchy of nested frames. Changing that behavior is likely to cause a big backward compatibility issue on iOS, so for now we proposed a less radical solution: add a heuristic to support the case of “fullscreen” iframes used by the AMP Project. Note that such exceptions already exist in WebKit, e.g. to avoid making offscreen content visible.

    We thus added the following heuristic into WebKit Nightly: do not flatten out-of-flow iframes (e.g. position: absolute) that have viewport units (e.g. vw and vh). This includes the case of the “fullscreen” iframe previously mentioned. For now it is still under a developer flag so that WebKit developers can control when they want to enable it. Of course, if this is successful we might consider more advanced heuristics.

    The fact that frames are never scrollable in iOS is an obvious interoperability issue. As a workaround, it is possible to emulate such “scrollable nodes” behavior using overflow: scroll nodes with the -webkit-overflow-scrolling: touch property set. This is not really ideal for our Web Predictability goal as we would like to get rid of browser vendor prefixes. Also, in practice such workarounds lead to even more problems in AMP as explained in these blog posts. That’s why implementing scrolling of frames is one of the main goals of this project and significant steps have already been made in that direction.

    Class Hierarchy
    Figure 2: C++ classes involved in frame scrolling

    The (relatively complex) class hierarchy involved in frame scrolling is summarized in Figure 2. The frame flattening heuristic mentioned above is handled in the WebCore::RenderIFrame class (in purple). The WebCore::ScrollingTreeFrameScrollingNodeIOS and WebCore::ScrollingTreeOverflowScrollingNodeIOS classes from the scrolling tree (in blue) are used to scroll, respectively, the main frame and overflow nodes on iOS. Scrolling of non-main frames will obviously have some code to share with the former, but it will also have some parts in common with the latter. For example, passing an extra UIScrollView layer is needed instead of relying on the one contained in the WKWebView of the main frame. An important step is thus to introduce a special class for scrolling inner frames that would share some logic from the two other classes and some refactoring to ensure optimal code reuse. Similar refactoring has been done for scrolling node states (in red) to move the scrolling layer parameter into WebCore::ScrollingStateNode instead of having separate members for WebCore::ScrollingStateOverflowScrollingNode and WebCore::ScrollingStateFrameScrollingNode.

    The scrolling coordinator classes (in green) are also important, for example to handle hit testing. At the moment, this is not really implemented for overflow nodes but it might be important to have it for scrollable frames. Again, one sees that some logic is shared for asynchronous scrolling on macOS (WebCore::ScrollingCoordinatorMac) and iOS (WebCore::ScrollingCoordinatorIOS) in ancestor classes. Indeed, our effort to make frames scrollable on iOS is also opening the possibility of asynchronous scrolling of frames on macOS, something that is currently not implemented.

    Figure 4: Video of this demo page on WebKit iOS with experimental patches to make frame scrollables (2017/07/10)

    Finally, some more work is necessary in the render classes (purple) to ensure that the layer hierarchies are correctly built. Patches have been uploaded and you can view the result on the video of Figure 4. Notice that this work has not been reviewed yet and there are known bugs, for example with overlapping elements (hit testing not implemented) or position: fixed elements.

    Various other scrolling bugs were reported, analyzed and sometimes fixed by Apple. The switch from overflow nodes to scrollable iframes is unlikely to address them. For example, the “Find Text” operation in iOS has advanced features done by the UI process (highlight, smart magnification) but the scrolling operation needed only works for the main frame. It looks like this could be fixed by unifying a bit the scrolling code path with macOS. There are also several jump and flickering bugs with position: fixed nodes. Finally, Apple fixed inconsistent scrolling inertia used for the main frame and the one used for inner scrollable nodes by making the former the same as the latter.

    Root Scroller

    The CSSOM View specification extends the DOM element with some scrolling properties. That specification indicates that the element to consider for scrolling the main view is document.body in quirks mode while it is document.documentElement in no-quirks mode. This is the behavior that has always been followed by browsers like Firefox or Internet Explorer. However, WebKit-based browsers always treat document.body as the root scroller. This interoperability issue has been a big problem for web developers. One convenient workaround was to introduce the document.scrollingElement property, which returns the element to use for scrolling the main view (document.body or document.documentElement) and was recently implemented in WebKit. Use this test page to verify whether your browser supports the document.scrollingElement property and which DOM element is used to scroll the main view in no-quirks mode.

    Nevertheless, this does not solve the issue with existing web pages. Chromium’s Web Platform Predictability team has made a huge communication effort with Web authors and developers which has drastically reduced the use of document.body in no-quirks mode. For instance, Chromium’s telemetry on Figure 3 indicates that the percentage of document.body.scrollTop in no-quirks pages has gone from 18% down to 0.0003% during the past three years. Hence the Chromium team is now considering shipping the standard behavior.

    UseCounter for ScrollTopBodyNotQuirksMode
    Figure 3: Use of document.body.scrollTop in no-quirks mode over time (Chromium's UseCounter)

    In WebKit, the issue has been known for a long time and an old attempt to fix it was reverted for causing regressions. For now, we imported the CSSOM View tests and just marked the one related to the scrolling element as failing. An analysis of the situation has been left on WebKit’s bug; depending on how things evolve on Chromium’s side, we could continue the discussion and implementation work in WebKit.

    Related to that work, a new API is being proposed to set the root scroller to an arbitrary scrolling element, giving more flexibility to authors of Web applications. Today, this is unfortunately not possible without losing some of the special features of the main view (e.g. on iOS, Safari’s URL bar is hidden when scrolling the main view to maximize the screen space). Such API is currently being experimented in Chromium and we plan to investigate whether this can be implemented in WebKit too.

    Conclusion

    In the past months, the AMP Project and Igalia have worked on analyzing some interoperability issues and fixing them in WebKit. Many improvements for frame sandboxing are going to be available soon. Significant progress has also been made for frame scrolling on iOS and collaboration continues with Apple reviewers to ensure that the work will be integrated in future versions of WebKit. Improvements to “root scrolling” are also being considered, although they are pending on the evolution of the issues on Chromium’s side. All these efforts are expected to be useful for WebKit users and the Web platform in general.

    Igalia Logo
    AMP Logo

    Last but not least, I would like to thank Apple engineers Simon Fraser, Chris Dumez, and Youenn Fablet for their reviews and help, as well as Google and the AMP team for supporting that project.

    August 29, 2017 10:00 PM

    August 24, 2017

    Eleni Maria Stea

    A terrain rendering approach (part 1)

    There are several methods to create and display a terrain in real time. In this post, I will explain the approach I followed in the demo I’m writing for my work at Igalia. Some work is still in progress.

    The terrain had to meet the following requirements:

    • its size should be arbitrary
    • parts outside the viewer’s field of view should be culled

    Parameters that describe the terrain

    For reasons that will become obvious when I explain the terrain generation and drawing, I decided to use a heightfield made of tiles, and I used the following parameters to generate it:

    /* parameters needed in terrain generation */
    
    struct TerrainParams {
        float xsz; /* terrain size in x axis */
        float ysz; /* terrain size in y axis */
        float max_height; /* max height of the heightfield */
        int xtiles; /* number of tiles in x axis */
        int ytiles; /* number of tiles in y axis */
        int tile_usub;
        int tile_vsub;
        int num_octaves; /* Perlin noise sums */
        float noise_freq; /* Perlin noise scaling factor */
        Image coarse_heightmap; /* mask for low detail heightmap */
    };

    Let’s explain them a little bit:

    Imagine the terrain as an xsz * ysz grid of tiles where each tile is subdivided into usub * vsub smaller parts. Each grid point can have an “arbitrary” height (we ‘ll see later how we calculate it) that cannot exceed the maximum height: max_height. The number of tiles in each terrain axis is xtiles for the x-axis and ytiles for the y-axis (which is practically the z-axis of our 3-D space). The variables tile_usub and tile_vsub give the number of subdivisions of each tile.

    wireframe terrain
    Image 1: a wireframe terrain

    Note that in general, I use the u, v notation in normalized spaces and the x, y for the world space.

    The variables num_octaves and noise_freq and the coarse_heightmap image are used to calculate the heights in different terrain points and will be explained later.

    Generating the geometry, applying the textures

    We’ve already seen that the terrain is a grid of xsz * ysz with xtiles * ytiles tiles, which are subdivided by tile_usub in the x-axis and by tile_vsub in the z-axis. In order to generate a height at every point of this grid, I needed to calculate uniformly distributed random values, like those of Perlin Noise (PN). But I also needed some higher distortions for “mountains” and “hills” here and there that cannot be simulated with PN. For them I used a function that calculates the sum of some Perlin Noise frequencies and results in a fractal-like heightfield. The number of PN octaves in the sum and the frequency (num_octaves, noise_freq) can be customized depending on how much distortion we want.
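
    The fractal sum itself (the gph::fbm call used later in get_height) comes from a utility library. A typical implementation adds octaves of noise while halving the amplitude and doubling the frequency at each step, roughly like the sketch below, where noise2() stands for any 2D Perlin-style noise function returning values in [-1, 1] (both the name and the exact weighting are assumptions, not the demo’s actual code):

    float noise2(float u, float v); /* assumed: 2D Perlin-style noise in [-1, 1] */

    float fbm(float u, float v, int num_octaves)
    {
        float sum = 0.0f, amplitude = 1.0f, frequency = 1.0f, norm = 0.0f;
        for (int i = 0; i < num_octaves; i++) {
            sum += amplitude * noise2(u * frequency, v * frequency);
            norm += amplitude;
            amplitude *= 0.5f; /* each octave contributes half as much... */
            frequency *= 2.0f; /* ...at twice the frequency */
        }
        return sum / norm;     /* keep the result roughly in [-1, 1] */
    }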

    Having a terrain with spread-out, random-looking mountains is nice, but it would be much better if we could further customize its appearance to make it more suitable for our scene. For example, I wanted to place a number of objects in the middle of the terrain, so I wanted it to look flatter in the center. Also, I wanted to put some high mountains at the edges to hide some skybox parts (for example mountains and buildings that were part of the skybox texture and looked ugly). In order to modify the terrain shape, I used a grayscale image, the coarse_heightmap, as a mask and multiplied the terrain height values with the mask intensities (I’ll explain this better later). The mask was like the image below:

    terrain coarse heightmap
    Image 2: terrain coarse heightmap

    Since the heightmap is black at the center (=> the intensity values there are close to 0), the height in the middle will be low and the terrain will look flat, whereas at the edges, where the intensity takes its maximum values, the height values will stay close to their original height. If you take a more careful look at Image 1 and Image 2 you will understand better the relationship between the terrain heights and the mask intensities.

    This is the function that generates each tile’s heightfield, calculating the height at each point as a sum of Perlin noise frequencies:

    float Terrain::get_height(float u, float v) const
    {
        float sn = gph::fbm(u * params.noise_freq, 
                   v * params.noise_freq,
                   params.num_octaves);
        sn = sn * 0.5 + 0.5;
    
        if(params.coarse_heightmap.pixels) {
            Vec4 texel = params.coarse_heightmap.lookup_linear(u, v,
                         1.0 / params.tile_usub, 
                         1.0 / params.tile_vsub);
            sn *= texel.x;
        }
        return sn;
    }

    Note that the function performs the calculations in u-v space. I found it convenient to write a similar function for the world space to use it in other places.

    Now, let’s see how we use the coarse_heightmap on top of that:

    The coarse_heightmap image (Image 2) has a very low resolution of 128x128, which means that if we just look up the closest pixel value for each terrain point, and the terrain is big, many terrain points will map to the same pixel and we’ll start seeing aliasing. To avoid this artifact, I used bilinear interpolation among the pixel’s neighboring pixels, taking into account the distance of the terrain point from each pixel of the neighborhood.
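
    The lookup_linear helper used in get_height belongs to the demo’s image class. A simplified bilinear lookup over a single-channel 8-bit image could look roughly like this sketch (the pixel layout and the function signature are assumptions; the real helper also takes the sample spacing into account):

    /* Bilinear lookup of a grayscale image at normalized coordinates (u, v). */
    float lookup_linear(const unsigned char *pixels, int width, int height,
                        float u, float v)
    {
        /* map (u, v) in [0, 1] to continuous pixel coordinates */
        float x = u * (width - 1);
        float y = v * (height - 1);
        int x0 = (int)x, y0 = (int)y;
        int x1 = x0 + 1 < width  ? x0 + 1 : x0;
        int y1 = y0 + 1 < height ? y0 + 1 : y0;
        float tx = x - x0, ty = y - y0;

        /* fetch the four neighbors and blend horizontally, then vertically */
        float p00 = pixels[y0 * width + x0] / 255.0f;
        float p10 = pixels[y0 * width + x1] / 255.0f;
        float p01 = pixels[y1 * width + x0] / 255.0f;
        float p11 = pixels[y1 * width + x1] / 255.0f;
        float top    = p00 + (p10 - p00) * tx;
        float bottom = p01 + (p11 - p01) * tx;
        return top + (bottom - top) * ty;
    }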

    terrain before optimisations
    Image 3: terrain and skybox

    The following video is a preview of this early stage of the demo where you can see the terrain from different views (as well as a cow called Spot :P):

    Improvements:

    fog:

    A good trick to make the terrain edges appear more distant than they really are, and to get a more realistic horizon, is to add some fog that is denser at the more distant terrain points and less dense at the points that are close to the viewer. The fog can be simulated by interpolating the terrain color at each pixel with a color that looks like the sky (I used a light blue here). The interpolation factor must be a function of the distance between the point and the viewer. I used the following function:

    f = exp(-density * distance)

    (taken from the glFog OpenGL man page). I hard-coded the density to a value that looked good and used -pos.z as the distance, where pos is the vertex position in view space.
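
    Put together, the per-fragment fog blend amounts to something like the sketch below, written here in plain C for clarity (in the demo this happens in the fragment shader, and the density and fog color are values I tuned by eye, so treat them as placeholders):

    #include <math.h>

    /* Blend a lit terrain color towards a sky-like fog color based on the
     * view-space distance of the fragment. Illustrative values only. */
    void apply_fog(float color[3], float distance)
    {
        const float fog_color[3] = { 0.65f, 0.75f, 0.9f }; /* light blue */
        const float density = 0.02f;
        float f = expf(-density * distance); /* 1 near the viewer, -> 0 far away */
        for (int i = 0; i < 3; i++)
            color[i] = fog_color[i] + (color[i] - fog_color[i]) * f;
    }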

    Image 4 shows the result of this operation:

    terrain with fog
    Image 4: the terrain with the fog and a cow 😉
    Image-based lighting:

    The idea behind IBL is that instead of using point lights with a fixed position in our 3D world, we can calculate the lighting as the radiance coming from each skybox pixel. For this purpose, we need an irradiance map. I avoided the overhead of calculating one by using a nice tool I found on GitHub, cmft, to pre-calculate it from the images of my skybox. Then, I used the normal direction at each terrain point (in world space coordinates) to sample the map, and I calculated the color inside the pixel shader:

    vec4 itexel = textureCube(dstex, normalize(world_normal));
    vec3 object_color = diffuse.xyz * texel.xyz * itexel.xyz;

    As you can guess, world_normal is the normal in world space (after the modelling transformation) and object_color is the color we get if we multiply the diffuse color from the material with the texel value (from the terrain texture) and the irradiance map texel value (itexel). The result of this operation can be seen in this video:

    TODO LIST

    Although the terrain looks better with these additions, I think that it can be further improved if I add the following things:

    1. Multiple textures for different parts of the terrain
    2. (Maybe) specular color calculated by the irradiance map (since atm I have support for diffuse only)
    Performance optimisations:

    I also want to optimize its performance by adding:

    1. View frustum culling: the idea is that we only draw the terrain tiles that are visible (part of the view frustum).
    2. Tessellation shaders: use TC and TE shaders to improve the terrain tessellation.

    Some of the above TODOs are in progress, some are still on my wishlist, but all of them are too long to be analyzed in this post anyway.

    I will post about these additions as soon as I finish them, so stay tuned! 😉

    by hikiko at August 24, 2017 07:59 AM

    August 20, 2017

    Hyunjun Ko

    Support GstContext for VA-API elements

    Since I started working on gstreamer-vaapi, one of the things that has disappointed me is that vaapisink is not very popular, even though it should be the best choice on a machine with VA-API installed. There are some understandable reasons for this, and one of them is probably that it doesn’t provide a convenient way for application developers to integrate it.

    Until now, we provided a way to set the X11 window handle with gst_video_overlay_set_window_handle, which tells the overlay to display the video output in a specific window. But this is not enough, since the VA and X11 display handles are managed internally inside the gstreamer-vaapi elements, which means that users can’t handle them by themselves.

    In short, there was no way to share a display handle created by the application. This also caused some additional problems, such as the following:

    • If users want to handle multiple displays separately, it is not possible. bug 754820
    • If users run multiple decoding pipelines with vaapisink, performance drops critically, since there are locks in each vaapisink sharing the same VADisplay. bug 747946

    Recently we have merged a series of patches to provide a way to set external VA Display and X11 Display from application via GstContext. GstContext provides a way of sharing not only between elements but also with the application using queries and messages. (For more details, see https://developer.gnome.org/gstreamer/stable/gstreamer-GstContext.html)

    With these patches, an application can set its own VA Display and X11 Display on VA-API elements as follows:

    • Create a VADisplay instance with vaGetDisplay; it doesn’t need to be initialized at startup or terminated at shutdown.
    • Call gst_element_set_context with a context on which each display instance has been set.

    Example: sharing a VADisplay and an X11 display via the bus callback. This is almost the same as other examples using GstContext.

    static GstBusSyncReply
    bus_sync_handler (GstBus * bus, GstMessage * msg, gpointer data)
    {
      switch (GST_MESSAGE_TYPE (msg)) {
        case GST_MESSAGE_NEED_CONTEXT:{
          const gchar *context_type;
          GstContext *context;
          GstStructure *s;
          VADisplay va_display;
          Display *x11_display;
    
          gst_message_parse_context_type (msg, &context_type);
          gst_println ("Got need context %s from %s", context_type,
              GST_MESSAGE_SRC_NAME (msg));
    
          if (g_strcmp0 (context_type, "gst.vaapi.app.Display") != 0)
            break;
    
          x11_display = /* Get X11 Display somehow */
          va_display = vaGetDisplay (x11_display);
    
          context = gst_context_new ("gst.vaapi.app.Display", TRUE);
          s = gst_context_writable_structure (context);
          gst_structure_set (s, "va-display", G_TYPE_POINTER, va_display, NULL);
          gst_structure_set (s, "x11-display", G_TYPE_POINTER, x11_display, NULL);
    
          gst_element_set_context (GST_ELEMENT (GST_MESSAGE_SRC (msg)), context);
          gst_context_unref (context);
          break;
        }   
        default:
          break;
      }
    
      return GST_BUS_PASS;
    }

    Also you can find the entire example code here.

    Furthermore, we know we need to support Wayland for this feature. See bug 705821. There are already some pending patches, but they need to be rebased and modified based on the current approach. I’ll be working on this in the near future.

    We really want to test this feature more, especially in practical cases, before the next release. I’d appreciate it if someone reported any bugs or issues, and I promise I’ll look into them carefully.

    Thanks for reading!

    August 20, 2017 03:00 PM

    August 09, 2017

    Michael Catanzaro

    On Firefox Sync

    Epiphany 3.26 is, unfortunately, not going to be packed with cool new features like 3.24 was. We’ve just been too busy working on improving WebKit this cycle. But there is one cool new thing: Firefox Sync support. You can sync bookmarks, history, passwords, and open tabs with other Epiphany instances as well as with both desktop and mobile Firefox. This is already enabled in 3.25.90. Just go to the Sync tab in Preferences and sign in or create your Firefox account there. Please test it out and report bugs now, so we can quash problems you find before 3.26.0 rather than after.

    Some thank yous are in order:

    • Thanks to Gabriel Ivascu, for writing all the code.
    • Thanks to Google and Igalia for sponsoring Gabriel’s work.
    • Thanks to Mozilla. This project would never have been possible if Mozilla had not carefully written its terms of service to allow such use.

    Go forth and sync!

    by Michael Catanzaro at August 09, 2017 07:57 PM

    August 06, 2017

    Michael Catanzaro

    Endgame for WebKit Woes

    In my original blog post On WebKit Security Updates, I identified three separate problems affecting WebKit users on Linux:

    • Distributions were not providing updates for WebKitGTK+. This was the main focus of that post.
    • Distributions were shipping an insecure compatibility package for the old, unmaintained WebKitGTK+ 2.4 (“WebKit1”).
    • Distributions were shipping QtWebKit, which was also unmaintained and insecure.

    Let’s review these problems one at a time.

    Distributions Are Updating WebKitGTK+

    Nowadays, most major community distributions are providing regular WebKitGTK+ updates, so this is no longer a problem for the vast majority of Linux users. If you’re using a supported version of Ubuntu (except Ubuntu 14.04), Fedora, or most other mainstream distributions, then you are good to go.

    My main concern here is still Debian, but there are reasons to be optimistic. It’s too soon to say what Debian’s policy will be going forward, but I am encouraged that it broke freeze just before the Stretch release to update from WebKitGTK+ 2.14 to 2.16.3. Debian is slow and conservative and so has not yet updated to 2.16.6, which is sad because 2.16.3 is affected by a bug that causes crashes on a huge number of websites, but my understanding is it is likely to be updated in the near future. I’m not sure if Debian will update to 2.18 or not. We’ll have to wait and see.

    openSUSE is another holdout. The latest stable version of openSUSE Leap, 42.3, is currently shipping WebKitGTK+ 2.12.5. That is disappointing.

    Most other major distributions seem to be current.

    Distributions Are Removing WebKitGTK+ 2.4

    WebKitGTK+ 2.4 (often informally referred to as “WebKit1”) was the next problem. Tons of desktop applications depended on this old, insecure version of WebKitGTK+, and due to large API changes, upgrading applications was not going to be easy. But this transition is going much smoother and much faster than I expected. Several distributions, including Debian, Fedora, and Arch, have recently removed their compatibility packages. There will be no WebKitGTK+ 2.4 in Debian 10 (Buster) or Fedora 27 (scheduled for release this October). Most noteworthy applications have either ported to modern WebKitGTK+, or have configure flags to disable use of WebKitGTK+. In some cases, such as GnuCash in Fedora, WebKitGTK+ 2.4 is being bundled as part of the application build process. But more often, applications that have not yet ported simply no longer work or have been removed from these distributions.

    Soon, users will no longer need to worry that a huge amount of WebKitGTK+ applications are not receiving security updates. That leaves one more problem….

    QtWebKit is Back

    Upstream QtWebKit has not been receiving security updates for the past four years or thereabouts, since it was abandoned by the Qt project. That is still the status quo for most distributions, but Arch and Fedora have recently switched to Konstantin Tokarev’s fork of QtWebKit, which is based on WebKitGTK+ 2.12. (Thank you Konstantin!) If you are using any supported version of Fedora, you should already have been switched to this fork. I am hopeful that the fork will be rebased on WebKitGTK+ 2.16 or 2.18 in the near future, to bring it current on security updates, but in the meantime, being a year and a half behind is an awful lot better than being four years behind. Now that Arch and Fedora have led the way, other distributions should find little trouble in making the switch to Konstantin’s QtWebKit. It would be a disservice to users to continue shipping the upstream version.

    So That’s Cool

    Things are better. Some distributions, notably Arch and Fedora, have resolved all of the above problems (or will in the very near future). Yay!

    by Michael Catanzaro at August 06, 2017 09:47 PM

    Modifying hidden settings in Epiphany 3.24

    We’re just one short month away from releasing Epiphany 3.26, but this is not a post about that. Turns out there is some confusion about how to edit hidden settings in Epiphany 3.24. Many users previously relied on the dconf-editor tool to tweak hidden settings like the user agent or minimum font size, but this no longer works in 3.24. What gives?

    The problem is that these settings can now be configured separately for your main browsing instance and for each web app. This gives you a lot more flexibility, but it does make it harder to change the settings because dconf-editor will not work anymore. The technical problem is that dconf-editor does not support relocatable settings schemas: settings definitions that are reused in many different places. So you will unfortunately have to use the command line to change these settings now. For example:

    # Old command, *this no longer works*
    $ gsettings set org.gnome.Epiphany.web user-agent 'Mozilla/5.0'

    # Replacement command
    $ gsettings set org.gnome.Epiphany.web:/org/gnome/epiphany/web/ user-agent 'Mozilla/5.0'

    Changing a global setting like this will also affect newly-created web apps, but not existing web apps.

    by Michael Catanzaro at August 06, 2017 06:13 PM

    August 03, 2017

    Eleni Maria Stea

    Debugging graphics code using replacement shaders (Linux, Mesa)

    Sometimes, when working with the mesa drivers, modifying or replacing a shader might be extremely useful for debugging. Mesa allows users to replace their shaders at runtime without having to change the original code by providing these environment variables:

    MESA_SHADER_READ_PATH and MESA_SHADER_DUMP_PATH

    Example usage:

    In the following example we are going to use these two environment variables with a small OpenGL program called demo.

    Step 1:

    We create a directory (tmp) to store the shaders and two more directories read and dump inside it:

    tmp/
    ├── dump
    └── read

    It’s necessary that these dump and read directories exist before running the program that will be debugged (the demo in our case).

    Step 2: We export the environment variables:

    export MESA_SHADER_READ_PATH=tmp/read
    export MESA_SHADER_DUMP_PATH=tmp/dump

    The first one sets the directory where the mesa driver will look for replacement shaders, whereas the second one sets the directory where the shaders will be dumped.

    Step 3: We run the program once to dump its original shaders inside the tmp/dump directory:

    ./demo

    Step 4: We copy the shaders from the dump directory to the read directory and then we modify the ones in the read directory:

    cp tmp/dump/* tmp/read/

    It is important not to change the filenames of the shader files. After this step both directories should contain some shaders with long names similar to these:

    tmp/
    ├── dump
    │   ├── FS_41bfd6998229f924b0bc409fafc85043d0819adc.glsl
    │   ├── FS_efc090363ee2378fbae150e66f53a891e072e983.glsl
    │   ├── VS_17c27d658ec6d02901f45c88b67111bd4ee955cb.glsl
    │   └── VS_9668281d927970b6ff023d45da67b38fc930dafe.glsl
    └── read
    ├── FS_41bfd6998229f924b0bc409fafc85043d0819adc.glsl
    ├── FS_efc090363ee2378fbae150e66f53a891e072e983.glsl
    ├── VS_17c27d658ec6d02901f45c88b67111bd4ee955cb.glsl
    └── VS_9668281d927970b6ff023d45da67b38fc930dafe.glsl

    As you can guess the VS_*.glsl are the program’s vertex shaders and the FS_*.glsl the fragment ones.

    The reason we see two VS_*.glsl (vertex shaders) and two FS_*.glsl (fragment shaders) in the dump directory is that the demo program was originally using two vertex and two fragment shaders for rendering.

    We could also see dumped shader names that start with GS_, TC_ or TE_ (for Geometry, Tessellation Control and Tessellation Evaluation shaders) if the program was using such shaders.

    Now, every shader in the read directory can be safely modified. I will only change one of the fragment shaders for simplicity.

    The FS_41bfd6998229f924b0bc409fafc85043d0819adc.glsl is the dump of my original fragment shader, named sky.f.glsl, which was calculating the colors of the skybox pixels using this code:

    #version 450
    uniform samplerCube stex;
    in vec3 normal;
    out vec4 color;
    void main()
    {
        vec4 texel = textureCube(stex, normalize(normal));
    
        color.rgb = texel.rgb;
        color.a = 1.0;
    }

    I used the sky.f.glsl fragment shader with some code that draws a green quad and the result was:

    quad1

    If we modify the tmp/read/FS_41bfd6998229f924b0bc409fafc85043d0819adc.glsl replacement shader to simply return blue like that:

    #version 450
    uniform samplerCube stex;
    in vec3 normal;
    out vec4 color;
    void main()
    {
        color = vec4(0.0, 0.0, 1.0, 1.0);
    }

    and run ./demo again, the result will be a blue sky, as expected:

    Debugging graphics code using replacement shaders (Linux, Mesa)

    We could safely play with any replacement shader in the read directory and then simply delete it. The demo program’s code would remain the same.

    Let’s try a more interesting example with a more complex program, like blender.

    We create the same directory tree, we export the variables and then we run blender (select the material and choose GLSL at the rendering options) to dump its default shaders and end up with a directory tree similar to this one:

    tmp
    ├── dump
    │   ├── FS_e025add3a93498ca49ba96c38260c36138430d54.glsl
    │   └── VS_c5310c724728053b7bf1e0b1055546f530afa9ca.glsl
    └── read
    ├── FS_e025add3a93498ca49ba96c38260c36138430d54.glsl
    └── VS_c5310c724728053b7bf1e0b1055546f530afa9ca.glsl

    On the screen we see something like this:

    that is the default blender scene.

    We can then open the file tmp/read/FS_e025add3a93498ca49ba96c38260c36138430d54.glsl and search for the main function. We will modify the output color by adding, at the end of the function, this line that sets the output color to pink:

    gl_FragColor = vec4(0.84, 0.16, 0.63, 1.0);

    The code will look like this:

    [...]
    	shade_add(vec4(tmp73, 1.0), tmp63, tmp76);
    	mtex_alpha_to_col(tmp76, cons78, tmp79);
    	shade_mist_factor(varposition, unf81, unf82, unf83, unf84, unf85, tmp86);
    	mix_blend(tmp86, tmp79, unf89, tmp90);
    	shade_alpha_opaque(tmp90, tmp92);
    	linearrgb_to_srgb(tmp92, tmp94);
    
    	gl_FragColor = tmp94;
    	gl_FragColor = vec4(0.84, 0.16, 0.63, 1.0);

    If we exit and run blender again, we’ll see that selecting the material makes the cube pink:

    This is because the blender shader that calculates the material color is replaced by our read/FS_e025add3a93498ca49ba96c38260c36138430d54.glsl shader, where we explicitly set the output color (gl_FragColor) to pink (0.84, 0.16, 0.63, 1.0).

    Note: Mesa documentation mentions that we need to compile the mesa driver using the --with-sha1 option for the environment variables to take effect. This option doesn’t seem to exist anymore, but fortunately the trick works without it, and it seems that the only thing we need to pay attention to is keeping the replacement shader filenames in the read directory unchanged.

    by hikiko at August 03, 2017 05:06 PM

    Gyuyoung Kim

    How to maintain your code downstream?

    Nowadays most of my projects use opensource software or are built on top of it. If you only need to use it as-is, that is an easy situation to be in. However, we usually have products that need to keep up with an opensource project over the years. If you have to hack a lot of modules inside the opensource code, rebasing your source base against the latest opensource code after months or years can be a nightmare. In this article I would like to share some of my experiences on how to maintain downstream patches when working with opensource.

    1. Try to contribute your patch to the opensource project as much as possible

      • I think this is the best way to reduce the heavy maintenance burden of your downstream source code. If the opensource project you’re using is being developed fast, you will often face many conflicts during a rebase, because it’s likely that the original code or architecture has changed in the meantime. To avoid these conflicts, it is best to contribute your patches to the opensource project as much as possible.
      • The documents below are good examples of how to contribute your code to an opensource project.
    2. Make your downstream port

      • We know that #1 is the best way to reduce downstream patches, but it is often hard to follow, because downstream patches are frequently too hacky or unstable. Even if you submit a downstream patch upstream, you might get many review comments or objections from the opensource maintainers. In that case, what else can you do? In my experience, it was important to separate our downstream implementation from the original code. For example, we can add new TriangleFoo.h/cpp files instead of modifying the original Triangle.h/cpp files, and then use them through a small modification of the build scripts.
        1. Figure class
         class Figure {
          public:
              virtual int calculateSize();
         };
         
        2. Triangle class
         class Triangle : public Figure {
          public:
              virtual int calculateSize() override;
         };
        
         int Triangle::calculateSize() {
             return width * height / 2;
         }
        
        3. TriangleFoo class
         class TriangleFoo : public Figure {
          public:
              virtual int calculateSize() override;
         };
         
         int TriangleFoo::calculateSize() {
             return new_width * height / 2;
         }
        
        4. Build script. In this example we use cmake,
         list(APPEND Figure_SOURCES
             Figure.cpp
             Triangle.cpp
             TriangleFoo.cpp
         )
         
         list(REMOVE_ITEM Figure_SOURCES
             Triangle.cpp
         )

        We can avoid conflicts in the Triangle.h/cpp files during the next rebase against the latest opensource code. However, we still need to modify TriangleFoo.h/cpp if Figure.h changes, for example when a new parameter is added or a return type is changed.

    3. Use #if ~ #endif guard

      • When we only need to modify a few lines or just change some logic inside a function, we can use an #if ~ #else ~ #endif guard. The guard helps us see which code was added or modified by us. Besides, it can help us check easily whether a downstream patch introduced a side effect, simply by turning it off. In my previous projects, most issues came from downstream patches, because they lacked code review, missed test cases, or were too hacky with respect to the original architecture. In such cases, you can check for the issue just by turning the guard off. However, if you use #if ~ #endif guards in many places, they can mess your code up, so I’d recommend using them only when you really need to. For example:
        int Triangle::calculateSize()
        {
        #if defined(DOWNSTREAM_ENABLED)
            return new_width * height / 2;
        #else
            return width * height / 2;
        #endif
        }
    4. Try to make a patch per feature

      • As you may have experienced before, it is very hard to implement a feature in a single commit. Even if you succeed in implementing a new feature in one commit perfectly, you may have to touch the implementation again in order to fix a bug or to meet new requirements. In that case your git history gets messier and messier, which makes it difficult to rebase on the latest opensource code. To avoid this, you may have manually merged the original implementation with the fixup commits added later, but there are two useful git commands for this case – git commit --fixup and git rebase --autosquash.
          • git commit --fixup : automatically marks your commit as a fix of a previous commit and constructs a commit message for use with git rebase --autosquash.
          • git rebase -i --autosquash : automatically organizes the merging of these fixup commits with their associated normal commits.

      • Example
        There is a good article that explains this method [1]. If you want to understand it further, it is worth visiting the URL. Let’s assume that we have 3 commits in our local repository.
    $ git log --oneline
      new commit1 (7ae79f6)
      new commit2 (9e4c1de)
      new commit3 (480ee07)
      previous commit (19c8abf)

    But if we just noticed that we forgot to add a comment in commit2, it’s time to use the --fixup option.

    $ git add [modified file]
    $ git commit --fixup [new commit2's commit-id]
      (i.e. git commit --fixup 9e4c1de)

    Then you can clean up your branch before merging it, using the --autosquash option.

    $ git rebase -i --autosquash [previous commit id]
      (i.e. git rebase -i --autosquash 19c8abf)

     

    Reference
    [1] http://fle.github.io/git-tip-keep-your-branch-clean-with-fixup-and-autosquash.html

    by gyuyoung at August 03, 2017 01:28 AM

    August 01, 2017

    Gyuyoung Kim

    Hello world!

    Welcome to Igalia Blogs. This is your first post. Edit or delete it, then start blogging!

    by gyuyoung at August 01, 2017 11:30 AM

    July 30, 2017

    Iago Toral

    Working with lights and shadows – Part II: The shadow map

    In the previous post we talked about the Phong lighting model as a means to represent light in a scene. Once we have light, we can think about implementing shadows, which are the parts of the scene that are not directly exposed to light sources. Shadow mapping is a well known technique used to render shadows in a scene from one or multiple light sources. In this post we will start discussing how to implement this, specifically, how to render the shadow map image, and the next post will cover how to use the shadow map to render shadows in the scene.

    Note: although the code samples in this post are for Vulkan, it should be easy for the reader to replicate the implementation in OpenGL. Also, my OpenGL terrain renderer demo implements shadow mapping and can also be used as a source code reference for OpenGL.

    Algorithm overview

    Shadow mapping involves two passes. The first pass renders the scene from the point of view of the light with depth testing enabled and records depth information for each fragment. The resulting depth image (the shadow map) contains depth information for the fragments that are visible from the light source, and that are therefore occluders for any other fragment behind them from the point of view of the light. In other words, these represent the only fragments in the scene that receive direct light; every other fragment is in the shade. In the second pass we render the scene normally to the render target from the point of view of the camera, and for each fragment we compute the distance to the light source and compare it against the depth information recorded in the previous pass to decide if the fragment is behind a light occluder or not. If it is, then we remove the diffuse and specular components of the fragment, making it look shadowed.

    In this post I will cover the first pass: generation of the shadow map.

    Producing the shadow map image

    Note: those looking for OpenGL code can have a look at this file ter-shadow-renderer.cpp from my OpenGL terrain renderer demo, which contains the shadow map renderer that generates the shadow map for the sun light in that demo.

    Creating a depth image suitable for shadow mapping

    The shadow map is a regular depth image where we will record depth information for fragments in light space. This image will be rendered into and sampled from. In Vulkan we can create it like this:

    ...
    VkImageCreateInfo image_info = {};
    image_info.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
    image_info.pNext = NULL;
    image_info.imageType = VK_IMAGE_TYPE_2D;
    image_info.format = VK_FORMAT_D32_SFLOAT;
    image_info.extent.width = SHADOW_MAP_WIDTH;
    image_info.extent.height = SHADOW_MAP_HEIGHT;
    image_info.extent.depth = 1;
    image_info.mipLevels = 1;
    image_info.arrayLayers = 1;
    image_info.samples = VK_SAMPLE_COUNT_1_BIT;
    image_info.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
    image_info.usage = VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT |
                       VK_IMAGE_USAGE_SAMPLED_BIT;
    image_info.queueFamilyIndexCount = 0;
    image_info.pQueueFamilyIndices = NULL;
    image_info.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
    image_info.flags = 0;
    
    VkImage image;
    vkCreateImage(device, &image_info, NULL, &image);
    ...
    

    The code above creates a 2D image with a 32-bit float depth format. The shadow map’s width and height determine the resolution of the depth image: larger sizes produce higher quality shadows but of course this comes with an additional computing cost, so you will probably need to balance quality and performance for your particular target. In the first pass of the algorithm we need to render to this depth image, so we include the VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT usage flag, while in the second pass we will sample the shadow map from the fragment shader to decide if each fragment is in the shade or not, so we also include the VK_IMAGE_USAGE_SAMPLED_BIT.

    One more tip: when we allocate and bind memory for the image, we probably want to request device local memory too (VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) for optimal performance, since we won’t need to map the shadow map memory in the host for anything.
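
    As a rough sketch of that allocation, assuming a hypothetical find_memory_type() helper that scans VkPhysicalDeviceMemoryProperties for a memory type compatible with the image and the requested property flags (physical_device is the VkPhysicalDevice in use):

    ...
    VkMemoryRequirements mem_reqs;
    vkGetImageMemoryRequirements(device, image, &mem_reqs);
    
    VkMemoryAllocateInfo alloc_info = {};
    alloc_info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
    alloc_info.pNext = NULL;
    alloc_info.allocationSize = mem_reqs.size;
    alloc_info.memoryTypeIndex =
       find_memory_type(physical_device, mem_reqs.memoryTypeBits,
                        VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);
    
    VkDeviceMemory image_mem;
    vkAllocateMemory(device, &alloc_info, NULL, &image_mem);
    vkBindImageMemory(device, image, image_mem, 0);
    ...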

    Since we are going to render to this image in the first pass of the process we also need to create a suitable image view that we can use to create a framebuffer. There are no special requirements here, we just create a view with the same format as the image and with a depth aspect:

    ...
    VkImageViewCreateInfo view_info = {};
    view_info.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
    view_info.pNext = NULL;
    view_info.image = image;
    view_info.format = VK_FORMAT_D32_SFLOAT;
    view_info.components.r = VK_COMPONENT_SWIZZLE_R;
    view_info.components.g = VK_COMPONENT_SWIZZLE_G;
    view_info.components.b = VK_COMPONENT_SWIZZLE_B;
    view_info.components.a = VK_COMPONENT_SWIZZLE_A;
    view_info.subresourceRange.aspectMask = VK_IMAGE_ASPECT_DEPTH_BIT;
    view_info.subresourceRange.baseMipLevel = 0;
    view_info.subresourceRange.levelCount = 1;
    view_info.subresourceRange.baseArrayLayer = 0;
    view_info.subresourceRange.layerCount = 1;
    view_info.viewType = VK_IMAGE_VIEW_TYPE_2D;
    view_info.flags = 0;
    
    VkImageView shadow_map_view;
    vkCreateImageView(device, &view_info, NULL, &shadow_map_view);
    ...
    

    Rendering the shadow map

    In order to generate the shadow map image we need to render the scene from the point of view of the light, so first, we need to compute the corresponding View and Projection matrices. How we calculate these matrices depends on the type of light we are using. As described in the previous post, we can consider 3 types of lights: spotlights, positional lights and directional lights.

    Spotlights are the easiest for shadow mapping, since with these we use regular perspective projection.

    Positional lights work similarly to spotlights in the sense that they also use perspective projection, however, because these are omnidirectional, they see the entire scene around them. This means that we need to render a shadow map that contains scene objects in all directions around the light. We can do this by using a cube texture for the shadow map instead of a regular 2D texture and rendering the scene 6 times, adjusting the View matrix to capture scene objects in front of the light, behind it, to its left, to its right, above and below. In this case we want to use a field of view of 90º with the projection matrix so that the set of 6 images captures the full scene around the light source without gaps or overlaps.

    Finally, we have directional lights. In the previous post I mentioned that these lights model light sources whose rays are parallel, and because of this they cast regular shadows (that is, shadows that are not perspective projected). Thus, to render shadow maps for directional lights we want to use orthographic projection instead of perspective projection.
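
    For reference, a minimal sketch of what the projection for a directional light could look like, reusing the same Vulkan clip matrix shown below for the spotlight; the orthographic extents are assumptions and should be chosen to cover the region of the scene that can cast visible shadows:

    glm::mat4 light_projection = clip *
          glm::ortho(-50.0f, 50.0f,   /* left, right (assumed extents) */
                     -50.0f, 50.0f,   /* bottom, top (assumed extents) */
                     LIGHT_NEAR, LIGHT_FAR);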

    Projected shadow from a point light source
    Regular shadow from a directional light source

    In this post I will focus on creating a shadow map for a spotlight source only. I might write follow up posts in the future covering other light sources, but for the time being, you can have a look at my OpenGL terrain renderer demo if you are interested in directional lights.

    So, for a spotlight source, we just define a regular perspective projection, like this:

    glm::mat4 clip = glm::mat4(1.0f, 0.0f, 0.0f, 0.0f,
                               0.0f,-1.0f, 0.0f, 0.0f,
                               0.0f, 0.0f, 0.5f, 0.0f,
                               0.0f, 0.0f, 0.5f, 1.0f);
    
    glm::mat4 light_projection = clip *
          glm::perspective(glm::radians(45.0f),
                           (float) SHADOW_MAP_WIDTH / SHADOW_MAP_HEIGHT,
                           LIGHT_NEAR, LIGHT_FAR);
    

    The code above generates a regular perspective projection with a field of view of 45º. We should adjust the light’s near and far planes to make them as tight as possible to reduce artifacts when we use the shadow map to render the shadows in the scene (I will go deeper into this in a later post). In order to do this we should consider that the near plane can be increased to reflect the closest that an object can be to the light (that might depend on the scene, of course) and the far plane can be decreased to match the light’s area of influence (determined by its attenuation factors, as explained in the previous post).

    The clip matrix is not specific to shadow mapping, it just makes it so that the resulting projection considers the particularities of how the Vulkan coordinate system is defined (the Y axis is inverted, the Z range is halved).

    As usual, the projection matrix provides us with a projection frustum, but we still need to point that frustum in the direction in which our spotlight is facing, so we also need to compute the view matrix transform of our spotlight. One way to define the direction in which our spotlight is facing is by specifying the rotation angles of the spotlight on each axis, similarly to what we would do to compute the view matrix of our camera:

    glm::mat4
    compute_view_matrix_for_rotation(glm::vec3 origin, glm::vec3 rot)
    {
       glm::mat4 mat(1.0);
       float rx = DEG_TO_RAD(rot.x);
       float ry = DEG_TO_RAD(rot.y);
       float rz = DEG_TO_RAD(rot.z);
       mat = glm::rotate(mat, -rx, glm::vec3(1, 0, 0));
       mat = glm::rotate(mat, -ry, glm::vec3(0, 1, 0));
       mat = glm::rotate(mat, -rz, glm::vec3(0, 0, 1));
       mat = glm::translate(mat, -origin);
       return mat;
    }
    

    Here, origin is the position of the light source in world space, and rot contains the rotation angles of the light source on each axis, which define the direction in which the spotlight is facing.

    Now that we have the View and Projection matrices that define our light space, we can go on and render the shadow map. For this we need to render the scene as we normally would, but instead of using our camera’s View and Projection matrices, we use the light’s. Let’s have a look at the shadow map rendering code:

    Render pass

    static VkRenderPass
    create_shadow_map_render_pass(VkDevice device)
    {
       VkAttachmentDescription attachments[2];
    
       // Depth attachment (shadow map)
       attachments[0].format = VK_FORMAT_D32_SFLOAT;
       attachments[0].samples = VK_SAMPLE_COUNT_1_BIT;
       attachments[0].loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR;
       attachments[0].storeOp = VK_ATTACHMENT_STORE_OP_STORE;
       attachments[0].stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
       attachments[0].stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
       attachments[0].initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
       attachments[0].finalLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
       attachments[0].flags = 0;
    
       // Attachment references from subpasses
       VkAttachmentReference depth_ref;
       depth_ref.attachment = 0;
       depth_ref.layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
    
       // Subpass 0: shadow map rendering
       VkSubpassDescription subpass[1];
       subpass[0].pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS;
       subpass[0].flags = 0;
       subpass[0].inputAttachmentCount = 0;
       subpass[0].pInputAttachments = NULL;
       subpass[0].colorAttachmentCount = 0;
       subpass[0].pColorAttachments = NULL;
       subpass[0].pResolveAttachments = NULL;
       subpass[0].pDepthStencilAttachment = &depth_ref;
       subpass[0].preserveAttachmentCount = 0;
       subpass[0].pPreserveAttachments = NULL;
    
       // Create render pass
       VkRenderPassCreateInfo rp_info;
       rp_info.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
       rp_info.pNext = NULL;
       rp_info.attachmentCount = 1;
       rp_info.pAttachments = attachments;
       rp_info.subpassCount = 1;
       rp_info.pSubpasses = subpass;
       rp_info.dependencyCount = 0;
       rp_info.pDependencies = NULL;
       rp_info.flags = 0;
    
       VkRenderPass render_pass;
       VK_CHECK(vkCreateRenderPass(device, &rp_info, NULL, &render_pass));
    
       return render_pass;
    }
    

    The render pass is simple enough: we only have one attachment with the depth image and one subpass that renders to the shadow map target. We will start the render pass by clearing the shadow map and by the time we are done we want to store it and transition it to layout VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL so we can sample from it later when we render the scene with shadows. Notice that because we only care about depth information, the render pass doesn’t include any color attachments.

    Framebuffer

    Every rendering job needs a target framebuffer, so we need to create one for our shadow map. For this we will use the image view we created from the shadow map image. We link this framebuffer target to the shadow map render pass description we have just defined:

    VkFramebufferCreateInfo fb_info;
    fb_info.sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO;
    fb_info.pNext = NULL;
    fb_info.renderPass = shadow_map_render_pass;
    fb_info.attachmentCount = 1;
    fb_info.pAttachments = &shadow_map_view;
    fb_info.width = SHADOW_MAP_WIDTH;
    fb_info.height = SHADOW_MAP_HEIGHT;
    fb_info.layers = 1;
    fb_info.flags = 0;
    
    VkFramebuffer shadow_map_fb;
    vkCreateFramebuffer(device, &fb_info, NULL, &shadow_map_fb);
    

    Pipeline description

    The pipeline we use to render the shadow map also has some particularities:

    Because we only care about recording depth information, we can typically skip any vertex attributes other than the positions of the vertices in the scene:

    ...
    VkVertexInputBindingDescription vi_binding[1];
    VkVertexInputAttributeDescription vi_attribs[1];
    
    // Vertex attribute binding 0, location 0: position
    vi_binding[0].binding = 0;
    vi_binding[0].inputRate = VK_VERTEX_INPUT_RATE_VERTEX;
    vi_binding[0].stride = 2 * sizeof(glm::vec3);
    
    vi_attribs[0].binding = 0;
    vi_attribs[0].location = 0;
    vi_attribs[0].format = VK_FORMAT_R32G32B32_SFLOAT;
    vi_attribs[0].offset = 0;
    
    VkPipelineVertexInputStateCreateInfo vi;
    vi.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO;
    vi.pNext = NULL;
    vi.flags = 0;
    vi.vertexBindingDescriptionCount = 1;
    vi.pVertexBindingDescriptions = vi_binding;
    vi.vertexAttributeDescriptionCount = 1;
    vi.pVertexAttributeDescriptions = vi_attribs;
    ...
    pipeline_info.pVertexInputState = &vi;
    ...
    

    The code above defines a single vertex attribute for the position, but assumes that we read this from a vertex buffer that packs interleaved positions and normals for each vertex (each being a vec3) so we use the binding’s stride to jump over the normal values in the buffer. This is because in this particular example, we have a single vertex buffer that we reuse for both shadow map rendering and normal scene rendering (which requires vertex normals for lighting computations).

    Again, because we do not produce color data, we can skip the fragment shader, and our vertex shader is a simple passthrough instead of the normal vertex shader we use with the scene:

    ....
    VkPipelineShaderStageCreateInfo shader_stages[1];
    shader_stages[0].sType =
       VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
    shader_stages[0].pNext = NULL;
    shader_stages[0].pSpecializationInfo = NULL;
    shader_stages[0].flags = 0;
    shader_stages[0].stage = VK_SHADER_STAGE_VERTEX_BIT;
    shader_stages[0].pName = "main";
    shader_stages[0].module = create_shader_module("shadowmap.vert.spv", ...);
    ...
    pipeline_info.pStages = shader_stages;
    pipeline_info.stageCount = 1;
    ...
    

    This is what the shadow map vertex shader (shadowmap.vert) looks like in GLSL:

    #version 400
    
    #extension GL_ARB_separate_shader_objects : enable
    #extension GL_ARB_shading_language_420pack : enable
    
    layout(std140, set = 0, binding = 0) uniform vp_ubo {
        mat4 ViewProjection;
    } VP;
    
    layout(std140, set = 0, binding = 1) uniform m_ubo {
         mat4 Model[16];
    } M;
    
    layout(location = 0) in vec3 in_position;
    
    void main()
    {
       vec4 pos = vec4(in_position.x, in_position.y, in_position.z, 1.0);
       vec4 world_pos = M.Model[gl_InstanceIndex] * pos;
       gl_Position = VP.ViewProjection * world_pos;
    }
    
    

    The shader takes the ViewProjection matrix of the light (we have already multiplied both together in the host) and a UBO with the Model matrices of each object in the scene as external resources (we use instanced rendering in this particular example) as well as a single vec3 input attribute with the vertex position. The only job of the vertex shader is to compute the position of the vertex in the transformed space (the light space, since we are passing the ViewProjection matrix of the light), nothing else is done here.

    Command buffer

    The command buffer is pretty similar to the one we use with the scene, only that we render to the shadow map image instead of the usual render target. In the shadow map render pass description we have indicated that we will clear it, so we need to include a depth clear value. We also need to make sure that we set the viewport and scissor to match the shadow map dimensions:

    ...
    VkClearValue clear_values[1];
    clear_values[0].depthStencil.depth = 1.0f;
    clear_values[0].depthStencil.stencil = 0;
    
    VkRenderPassBeginInfo rp_begin;
    rp_begin.sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO;
    rp_begin.pNext = NULL;
    rp_begin.renderPass = shadow_map_render_pass;
    rp_begin.framebuffer = shadow_map_framebuffer;
    rp_begin.renderArea.offset.x = 0;
    rp_begin.renderArea.offset.y = 0;
    rp_begin.renderArea.extent.width = SHADOW_MAP_WIDTH;
    rp_begin.renderArea.extent.height = SHADOW_MAP_HEIGHT;
    rp_begin.clearValueCount = 1;
    rp_begin.pClearValues = clear_values;
    
    vkCmdBeginRenderPass(shadow_map_cmd_buf,
                         &rp_begin,
                         VK_SUBPASS_CONTENTS_INLINE);
    
    VkViewport viewport;
    viewport.height = SHADOW_MAP_HEIGHT;
    viewport.width = SHADOW_MAP_WIDTH;
    viewport.minDepth = 0.0f;
    viewport.maxDepth = 1.0f;
    viewport.x = 0;
    viewport.y = 0;
    vkCmdSetViewport(shadow_map_cmd_buf, 0, 1, &viewport);
    
    VkRect2D scissor;
    scissor.extent.width = SHADOW_MAP_WIDTH;
    scissor.extent.height = SHADOW_MAP_HEIGHT;
    scissor.offset.x = 0;
    scissor.offset.y = 0;
    vkCmdSetScissor(shadow_map_cmd_buf, 0, 1, &scissor);
    ...
    

    Next, we bind the shadow map pipeline we created above, bind the vertex buffer and descriptor sets as usual and draw the scene geometry.

    ...
    vkCmdBindPipeline(shadow_map_cmd_buf,
                      VK_PIPELINE_BIND_POINT_GRAPHICS,
                      shadow_map_pipeline);
    
    const VkDeviceSize offsets[1] = { 0 };
    vkCmdBindVertexBuffers(shadow_map_cmd_buf, 0, 1, vertex_buf, offsets);
    
    vkCmdBindDescriptorSets(shadow_map_cmd_buf,
                            VK_PIPELINE_BIND_POINT_GRAPHICS,
                            shadow_map_pipeline_layout,
                            0, 1,
                            shadow_map_descriptor_set,
                            0, NULL);
    
    vkCmdDraw(shadow_map_cmd_buf, ...);
    
    vkCmdEndRenderPass(shadow_map_cmd_buf);
    ...
    

    Notice that the shadow map pipeline layout will also be different from the one used with the scene. Specifically, during scene rendering we will at least need to bind the shadow map for sampling, and we will probably also bind additional resources to access light information, surface materials, etc. that we don’t need when rendering the shadow map, where we only need the View and Projection matrices of the light plus the UBO with the model matrices of the objects in the scene.

    We are almost there, now we only need to submit the command buffer for execution to render the shadow map:

    ...
    VkPipelineStageFlags shadow_map_wait_stages = 0;
    VkSubmitInfo submit_info = { };
    submit_info.pNext = NULL;
    submit_info.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit_info.waitSemaphoreCount = 0;
    submit_info.pWaitSemaphores = NULL;
    submit_info.signalSemaphoreCount = 1;
    submit_info.pSignalSemaphores = &signal_sem;
    submit_info.pWaitDstStageMask = 0;
    submit_info.commandBufferCount = 1;
    submit_info.pCommandBuffers = &shadow_map_cmd_buf;
    
    vkQueueSubmit(queue, 1, &submit_info, NULL);
    ...
    

    Because the next pass of the algorithm will need to sample the shadow map during the final scene rendering, we use a semaphore to ensure that we complete this work before we start using it in the next pass of the algorithm.
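
    A minimal sketch of how the scene rendering submission could wait on that semaphore before its fragment shader stage samples the shadow map (scene_cmd_buf is a hypothetical command buffer for the scene pass):

    ...
    VkPipelineStageFlags wait_stage = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
    
    VkSubmitInfo scene_submit = {};
    scene_submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    scene_submit.pNext = NULL;
    scene_submit.waitSemaphoreCount = 1;
    scene_submit.pWaitSemaphores = &signal_sem;      /* signaled by the shadow map job */
    scene_submit.pWaitDstStageMask = &wait_stage;
    scene_submit.commandBufferCount = 1;
    scene_submit.pCommandBuffers = &scene_cmd_buf;   /* hypothetical scene command buffer */
    scene_submit.signalSemaphoreCount = 0;
    scene_submit.pSignalSemaphores = NULL;
    
    vkQueueSubmit(queue, 1, &scene_submit, NULL);
    ...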

    In most scenarios we will want to render the shadow map on every frame to account for dynamic objects that move in the area of effect of the light, or even moving lights. However, if we can ensure that no objects have altered their positions inside the area of effect of the light and that the light’s description (position/direction) hasn’t changed, we may not need to regenerate the shadow map and can save some precious rendering time.

    Visualizing the shadow map

    After executing the shadow map rendering job our shadow map image contains the depth information of the scene from the point of view of the light. Before we go on and start using this as input to produce shadows in our scene, we should probably try to visualize the shadow map to verify that it is correct. For this we just need to submit a follow-up job that takes the shadow map image as a texture input and renders it to a quad on the screen. There is one caveat though: when we use perspective projection, Z values in the depth buffer are not linear. Instead, precision is larger at distances closer to the near plane and drops as we get closer to the far plane, in order to improve accuracy in areas closer to the observer and avoid Z-fighting artifacts. This means that we probably want to linearize our shadow map values when we sample from the texture so that we can actually see things, otherwise most things that are not close enough to the light source will be barely visible:

    #version 400
    
    #extension GL_ARB_separate_shader_objects : enable
    #extension GL_ARB_shading_language_420pack : enable
    
    layout(std140, set = 0, binding = 0) uniform mvp_ubo {
        mat4 mvp;
    } MVP;
    
    layout(location = 0) in vec2 in_pos;
    layout(location = 1) in vec2 in_uv;
    
    layout(location = 0) out vec2 out_uv;
    
    void main()
    {
       gl_Position = MVP.mvp * vec4(in_pos.x, in_pos.y, 0.0, 1.0);
       out_uv = in_uv;
    }
    
    #version 400
    
    #extension GL_ARB_separate_shader_objects : enable
    #extension GL_ARB_shading_language_420pack : enable
    
    layout (set = 1, binding = 0) uniform sampler2D image;
    
    layout(location = 0) in vec2 in_uv;
    
    layout(location = 0) out vec4 out_color;
    
    void main()
    {
       float depth = texture(image, in_uv).r;
       out_color = vec4(1.0 - (1.0 - depth) * 100.0);
    }
    

    We can use the vertex and fragment shaders above to render the contents of the shadow map image onto a quad. The vertex shader takes the quad’s vertex positions and texture coordinates as attributes and passes them to the fragment shader, while the fragment shader samples the shadow map at the provided texture coordinates and then “linearizes” the depth value so that we can see better. The code in the shader doesn’t properly linearize the depth values we read from the shadow map (that requires passing the Z-near and Z-far values used in the projection), but for debugging purposes this works well enough for me. If you use different Z clipping planes you may need to alter the ‘100.0’ value to get good results (or you might as well do a proper conversion considering your actual Z-near and Z-far values).
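
    For completeness, here is a sketch of a proper linearization, assuming the depth value read from the shadow map is in the [0, 1] range produced by the perspective projection and clip matrix above, and that z_near/z_far are the clipping planes used for the light’s projection; the same expression can be used directly in the fragment shader (dividing the result by z_far gives a value in [0, 1] that is easier to display):

    float linearize_depth(float depth, float z_near, float z_far)
    {
        /* invert the perspective depth mapping: returns the distance to the
         * light in view-space units, so near and far occluders become comparable */
        return (z_near * z_far) / (z_far - depth * (z_far - z_near));
    }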

    Visualizing the shadow map

    The image shows the shadow map on top of the scene. Darker colors represent smaller depth values, so these are fragments closer to the light source. Notice that we are not rendering the floor geometry to the shadow map since it can’t cast shadows on any other objects in the scene.

    Conclusions

    In this post we have described the shadow mapping technique as a combination of two passes: the first pass renders a depth image (the shadow map) with the scene geometry from the point of view of the light source. To achieve this, we need a passthrough vertex shader that only transforms the scene vertex positions (using the view and projection transforms from the light) and we can skip the fragment shader completely since we do not care for color output. The second pass, which we will cover in the next post, takes the shadow map as input and uses it to render shadows in the final scene.

    by Iago Toral at July 30, 2017 09:49 PM

    Philippe Normand

    The GNOME-Shell Gajim extension maintenance

    Back in January 2011 I wrote a GNOME-Shell extension allowing Gajim users to carry on with their chats using the Empathy infrastructure and UI present in the Shell. For some time the extension was also part of the official gnome-shell-extensions module and then I had to move it to Github as a standalone extension. Sadly I stopped using Gajim a few years ago and my interest in maintaining this extension has decreased quite a lot.

    I don’t know if this extension is actively used by anyone beyond the few bugs reported in Github, so this is a call for help. If anyone still uses this extension and wants it supported in future versions of GNOME-Shell, please send me a mail so I can transfer ownership of the Github repository and see what I can do for the extensions.gnome.org page as well.

    (Huh, also. Hi blogosphere again! My last post was in 2014 it seems :))

    by Philippe Normand at July 30, 2017 01:53 PM

    July 28, 2017

    Eleni Maria Stea

    Creating cube map images from HDR panoramas on GNU/Linux

    As part of my work for Igalia I wanted to do some environment mapping. I was able to find plenty of high quality .hdr images online but I couldn’t find any (OSS) tool to convert them to cubemap images. Then, Nuclear (John Tsiombikas) gave me the solution: he wrote a minimal tool that does the job quickly and produces high quality cube maps.

    So, here’s a short “how to” create cubemaps on Linux using his “cubemapper” program in combination with other OSS tools:

    Prerequisites:

    Install pfstools and pfsview

    Install the cubemapper dependencies:

    1- libimago

    git clone https://github.com/jtsiomb/libimago.git
    make
    sudo make install
    

    2- libgmath

    git clone https://github.com/jtsiomb/gph-math.git
    make
    sudo make install

    Get/Install Cubemapper:

    Get the cubemapper code from here: cubemapper-0.1.tar.gz

    tar xzvf cubemapper-0.1.tar.gz
    cd cubemapper-0.1/
    make
    sudo make install
    

    Create the cubemaps:

    Before we begin, we can check our hdr images using pfsview like this:

    pfsin foobar_in.hdr | pfsview

    Sometimes the image is too big and we might need to resize it (if it’s really really big pfsview might crash).

    Resize can be done by running:

     pfsin foobar_in.hdr | pfssize --maxy 2048 | pfsout foobar_out.hdr

    (You can replace 2048 or maxy with another value)

    After resizing to something more reasonable / suitable for our app, we can use the cubemapper to create the cubemap images:

    cubemapper foobar_out.hdr

    With this command we should see something like this:

    Pressing c will save the cubemap images in the current directory.

    We can now show a cubemap made from the images we just saved, just to make sure that there aren’t any artifacts, by pressing space:

    Exiting the program, we can see that the current directory contains 6 new .hdr files:

    cubemap_px.hdr, cubemap_py.hdr cubemap_pz.hdr,
    cubemap_nx.hdr, cubemap_ny.hdr, cubemap_nz.hdr

    (one for each cubemap direction).

    Creating cube map images from HDR panoramas on GNU/Linux

    These 6 images can now be used as textures for cube mapping with OpenGL.
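
    As an illustration, here is a minimal sketch of how the six faces could be uploaded to an OpenGL cube map texture; load_hdr_rgb() is a hypothetical loader returning float RGB pixels (it could be implemented with libimago, for instance), and the face order simply follows the GL_TEXTURE_CUBE_MAP_POSITIVE_X .. NEGATIVE_Z enum sequence:

    #include <GL/glew.h>   /* or any other header/loader that provides GL 3.x enums */
    #include <stdlib.h>
    
    /* hypothetical helper: loads an .hdr file and returns malloc'ed float RGB data */
    float *load_hdr_rgb(const char *fname, int *width, int *height);
    
    GLuint create_cubemap_texture(void)
    {
        static const char *faces[6] = {
            "cubemap_px.hdr", "cubemap_nx.hdr",
            "cubemap_py.hdr", "cubemap_ny.hdr",
            "cubemap_pz.hdr", "cubemap_nz.hdr"
        };
    
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_CUBE_MAP, tex);
    
        for (int i = 0; i < 6; i++) {
            int w, h;
            float *pixels = load_hdr_rgb(faces[i], &w, &h);
            /* the cube face targets are consecutive enum values, so we can offset them */
            glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB16F,
                         w, h, 0, GL_RGB, GL_FLOAT, pixels);
            free(pixels);
        }
    
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
        return tex;
    }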

    Cubemapper works also with other types of images (e.g. jpg, png).

    Note: The initial .hdr panorama I used on this post is from: http://noemotionhdrs.net/hdrday.html

    by hikiko at July 28, 2017 01:07 PM