
Programming Servo: three years, 100 commits

source link: https://medium.com/programming-servo/programming-servo-three-years-100-commits-a3cbfb06ff23

So, after a bit more than three years, and having reached the arbitrary number of 100 commits, I think it’s time to survey the experience so far.

A bit of background first: in the fall of 2016, I wrote my first line of Rust, by way of my first commit into Servo: https://github.com/servo/servo/pull/13412, fixing an issue labelled as “Easy”.

At that time, I was in my fifth year of programming Web applications in Python, JavaScript, and HTML, and I don’t think I really knew what a mutex was.

Also, even though I liked to read books about “patterns”, and I considered my Django apps “pretty well structured”, I really had no idea what “programming in the large” was.

After that initial commit, I spent a few more months doing other fairly easy things (although they were hard enough at the time), and it was only in February 2017 that I had my first taste of an issue labelled “Hard”: “Implement structured clone callbacks” https://github.com/servo/servo/pull/15519.

As a self-taught engineer who once had to independently figure out what a “variable” and a “class” were, I remember very well that working on that structured clone stuff felt about as hard as learning to program all over again (by the way, since that stuff was pretty much all unsafe, it might have been a better idea to pick a different “Hard” issue at the time, but hey, I made it through).

It was also weirdly captivating and addictive, and I have been hooked ever since. Over time, I figured out what I like the most, which I would describe as “working with event-loops”, especially coordinating among them, and using them to build fairly convoluted concurrent and/or multi-process workflows.

Here are some things I’ve noticed along the way:

Community works

Why did I go from a few easy commits to becoming a reviewer on the project three years later?

The answer is community.

If you look at the Servo contributor graph, you see that over the years the project has not only attracted more than 1000 contributors, it also takes a full 22 commits to get into the top 100 (for comparison: it takes 102 commits to reach the top 100 for Rust itself, so there is still some way to go for Servo to reach that level of engagement). To me, this means that the project’s engagement with the community has been both broad and deep. People tend to stick around once they get started.

What’s so great about Servo’s community? It’s hard to put your finger on it, but one thing I’ve noticed is that the “reviewers”, while they sometimes really do seem to know everything, never make it feel like they know everything. In other words, there is plenty of room for discussion, and people will take your arguments seriously.

I also should mention the help I got from Paul Rouget, who initially introduced me to the project and provided invaluable support along the way. Coming from web development, I was initially skeptical that I could tackle working on the underlying engine, but you convinced me otherwise. Thanks Paul!

Incremental progress works

How can you continuously improve your skills as a programmer? It’s simple: always work on something that is slightly harder than the last thing you worked on.

“CV time” doesn’t count. You can work for 10 years and yet make little progress. It’s not the amount of time you spend programming that counts, it’s how much progress you make by doing things that are incrementally harder.

So, every time I picked up an issue in Servo, I made a point of choosing one that appeared harder than the last.

It goes a little like this: at the start of almost every issue, there is an initial period where you feel lost and have no idea how to go about it. Then, as you read the spec, survey existing code, or perhaps look into how something was done in Gecko, you slowly build a mental model of the solution and start writing code incrementally towards it (of course, your perception of the problem then changes further along the way).

After you’ve finished one such issue, you might go back and do some smaller stuff still related to it, such as fixing a few bugs that have come up since. That can be fairly relaxing, and is probably a necessary recovery phase. But don’t fool yourself: you’re not making progress until you actually pick up the next project and get that feeling of being at a loss again.

Obviously, the above is nothing new. See, for example, https://norvig.com/21-days.html.

The native thread is dead, long live the native thread!

Here comes my favorite one.

I’m always surprised when I read somewhere the prediction that “one day, nobody will directly use threads anymore”. Or the advice that one should “use async/await by default” as an approach to concurrency. Let’s say my experience on Servo has been very different, and I’ll try to explain why.

In my opinion, there is one reason Servo is still tractable despite its size, and it is the simple approach to concurrency used to model individual components: a native thread, running its own event-loop, handling one message at a time.

And “simple” doesn’t necessarily mean “contains little code”. In fact, script is an example of a large component in Servo (a cargo check alone takes about 5 minutes). Yet the answer to the question “how does script run?” can be summed up as “one task at a time”.

So, despite the huge amount of code that potentially runs as part of “one task”, if you want to understand “how the component runs”, you just need to add one println! at the top of the event-loop. Doing so will tell you exactly what “events” are being handled, one at a time, by the component.

The component, like most others in Servo, runs something like the algorithm below:

  1. Block on receiving on a channel, or on a select of channels (thank you, Crossbeam),
  2. Handle the message received, (almost) without any concurrency coming into play (except non-blocking sends on channels),
  3. Go back to 1.

So, while step 2 can get incredibly complicated, in trying to understand what goes on you will have one enormous benefit: the code is single-threaded/sequential in nature.

So what’s the problem with async and tasks? The problem is that using those breaks down that simple model.

Perhaps some example code is due.

Let’s first take a look at the event-loop of script in Servo:


https://github.com/servo/servo/blob/95614f57f147699f15a8f103c7def1cdfcdc7d1f/components/script/script_thread.rs#L1415

As you can see, there is a single “yield” point, where the thread might block if no message is available. The actual event handling that follows the receiving of a message is purely sequential.

Ok, ok, I admit there are a few more points where script might block, as can be seen for example below:

https://github.com/servo/servo/blob/95614f57f147699f15a8f103c7def1cdfcdc7d1f/components/script/dom/document.rs#L1833

This is referred to as “blocking the event-loop”, and avoided if possible.

Now let’s take a look at an async example, this time from Facebook’s Libra:


https://github.com/libra/libra/blob/005ac1cdf9e266a40940dd18a43225b411389ff1/consensus/src/chained_bft/chained_bft_smr.rs#L94

We can further look into one of those async method calls, for example process_proposal_msg, where we find further yield points:

https://github.com/libra/libra/blob/005ac1cdf9e266a40940dd18a43225b411389ff1/consensus/src/chained_bft/event_processor.rs#L187

So, while there is a resemblance to the select used in the script event-loop, the similarities end there.

The big difference is what happens after the select wakes up. In Servo, the handling of the message received from the select is sequential. You’re talking about single-threaded code running one statement after the other, without yielding (and that’s hard enough, believe me).

In the Libra example, the code inside the select is itself async, which is another way of saying it is concurrent (even if you’re using a single-threaded runtime). Can you describe “how this component runs” in a few simple steps, like I’ve done above for Servo? Let’s try:

  1. Await on the select(so far so good).
  2. Handle the result of the select, awaiting a number of nested async computations.
  3. Back to 1.

In theory it’s fine, but what happens when something goes wrong? Trying to debug the async code at step 2 is going to be a lot harder than the equivalent sequential code in Servo. Why? I think we can assume that sequential code is easier to understand and debug than concurrent code (whether it runs on a thread-pool or a single thread).

In Servo’s approach, each component is internally sequential (at least the main event-loop of a component is, while other parallel computations can be spawned by code running on that event-loop; see for example Fetch). The component communicates with other components running in parallel using message-passing (preferably without blocking). Those message-passing workflows can indeed be somewhat hard to debug, but at least you can rely on the internal logic of (the main event-loop of) each component being single-threaded.

Looking at https://github.com/libra/libra/issues/2152 and https://github.com/libra/libra/issues/1399, it seems the Libra devs are also moving to something more like a “single thread/single event-loop per component” model. It also appears that the code is not as async as it looks.

Does the Servo approach simply fit what Servo is trying to do, and could other types of system be modeled fully using async/await?

I think there isn’t one particular thing that “Servo is trying to do”. There is really a bit of everything, from networking, to graphics, to running code in a VM. And that’s the challenge of a large system: it’s going to consist of various parts, you’re going to have to keep them isolated from each other, and they will all have different runtime “needs” (and the needs of each component are different from the needs of the system as a whole).

So while a given component, let’s say your networking component, might internally own an async runtime and spawn internal async computations, as part of a larger system I would still model the component with a thread as the outer layer (and I would probably argue for a single-threaded async runtime, running inside that thread).

I would not try to model the components of a larger system individually as tasks (spawning other tasks?), run multiple components on the same async runtime, or try to communicate between them using futures. Why? Because that would force an async model of computation upon each component, which is unlikely to be a good fit for all of them, and it would also represent a loss of isolation in terms of the runtime characteristics of each component (despite the fact that most “async runtimes” come with a flavor of “spawn a long-running computation” API, that’s not the same thing as spawning a thread representing a component that “usually doesn’t block, but sometimes must”).

Actually, one further complication in Servo is the existence of, and the need for, process boundaries. Those are partly required as a mitigation for Spectre, and partly for increased robustness of the system (when a tab crashes, the browser as a whole doesn’t). If anything, I think these will become increasingly prevalent in other types of system too.

For other relevant discussions of things like “the (fallacy of the) cost of context-switching”, and an excellent overview of concurrency applied to large programs, I refer to https://www.chromium.org/developers/lock-and-condition-variable (the title doesn’t do it justice; scroll down about half-way for a few real gems of paragraphs).

In conclusion

Thanks for reading, and here’s to another (or your first) 100 commits in Servo. Happy new year!

