Why Rust is actually good for your car.

source link: https://medium.com/volvo-cars-engineering/why-volvo-thinks-you-should-have-rust-in-your-car-4320bd639e09


This post is an interview with one of our embedded Rust pioneers, Julius Gustavsson, who is a Technical Expert and System Architect at Volvo Cars Corporation.

JF = Johannes Foufas (interviewer), JG = Julius Gustavsson (interviewee).

JF: How did you discover Rust?

JG: I think this was back in 2014 and by then I had been doing C and C++ for 15 years. I was working in a new team at a large Swedish tech company. They had quite an advanced code style, and they were proud of their code base. It looked really nice. You know, I was excited to start working there but then, lo and behold, I was debugging the same kind of obscure memory issues as before.

A thought just struck me, is this as good as it gets?

Is this how my career is going to be?

Am I going to spend the rest of my life doing this?

And interestingly enough, I think it was that same week, or the week after, that I saw a Reddit post saying they were closing in on the final Rust 1.0 release. And apparently it was still living up to the original claims I had come across when I first discovered Rust, somewhere around 2009.


Julius to the left, Johannes to the right

From then on, I followed the Rust project from the sidelines. When I came to Volvo Cars a few years later, I was already sold on it, and I thought it would be useful for Volvo Cars because it embodies the same ideology you want when developing safety-critical software: you really want quality up front.

JF: That’s what we struggled with the most, the obscure and difficult-to-find errors. They don’t happen that often, but when they happen it’s a pain.

JG: Yeah, definitely.

JF: Did you code Rust on the side then, privately or?

JG: Nothing major. But when I started at Volvo, my first project was to make an Android integration with our Signal Broker, what is now BeamyBroker, on our prototype version of the Core Computer. It was a kind of Hardware Abstraction Layer (HAL) towards the broker, which itself is written in Elixir. I did that using Rust and async Futures. That was also a big confirmation that this was indeed something useful. Everything just worked from the start. I mean, once you get it to build, it almost always works when you run it. Of course, it does not solve any logic errors in your implementation. But if your logic checks out, it just magically works as soon as it compiles. You might of course struggle sometimes to get it to compile, especially if you are trying to do things that the compiler sees as erroneous.

JF: From there you went more serious?

JG: Then I pitched to the managers, “If we want to do a serious Rust effort within the company, then I want to take part in that and become an employee”, since I was a consultant at that point.

So that’s how my employment started at Volvo, and that’s when I met Niko (Nikolaos Korkakakis), who had the same ambition. So we teamed up and started to work on the Low Power node of the Core Computer. It was basically a fluke that we had this node that no one was really paying attention to; everyone was busy with the other nodes. And it happened to run on the architecture that was best supported in Rust’s embedded bare-metal space at the time. It was also not a safety-critical component, so we did not have to worry about safety certifications.

JF: Yeah, that’s always a hassle.

JG: We didn’t have to worry about the safety-critical stuff.

But at the same time, it must be extremely reliable, because if it doesn’t work, the car will not start.

Also, since the functionality scope was limited, we could be a small team for the first project.

JF: Was anything lacking for you?
Did you need anything from the Open Source community or did you develop it all by yourselves?

JG: No, we did not have everything; some parts were missing. We run on an Atmel/Microchip ATSAME chip, and there is an Open Source project for it.
In Rust, you have this hardware abstraction layer, similar to an MCAL in AUTOSAR.

So that project had been started recently, but it only covered a few of the peripherals, and many of the ones we needed were not yet in place. This was the case for drivers like CAN, which we use a lot in the car industry. Because of this, we have been active in that project.
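The HAL pattern mentioned above can be sketched in plain Rust: peripherals sit behind traits, and drivers are written against the trait rather than a concrete register block. This is a minimal host-runnable illustration in the spirit of the embedded-hal crate; the names here are invented for the example, not the actual atsamd-hal API.

```rust
/// A trait in the spirit of embedded-hal's `OutputPin` (simplified sketch).
trait OutputPin {
    fn set_high(&mut self);
    fn set_low(&mut self);
}

/// A driver written against the trait works with any pin implementation,
/// whether it is real GPIO on target or a mock on the host.
struct Blinker<P: OutputPin> {
    pin: P,
    on: bool,
}

impl<P: OutputPin> Blinker<P> {
    fn toggle(&mut self) {
        if self.on {
            self.pin.set_low();
        } else {
            self.pin.set_high();
        }
        self.on = !self.on;
    }
}

/// On the host, a mock pin lets us exercise the driver logic in a plain test.
struct MockPin {
    state: bool,
}

impl OutputPin for MockPin {
    fn set_high(&mut self) {
        self.state = true;
    }
    fn set_low(&mut self) {
        self.state = false;
    }
}

fn main() {
    let mut blinker = Blinker { pin: MockPin { state: false }, on: false };
    blinker.toggle();
    println!("pin high: {}", blinker.pin.state); // pin high: true
}
```

The same driver code then runs unchanged against a real pin type provided by a target HAL crate; only the type parameter differs.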

JF: Did you team up with anyone then?

JG: Yes, we met Grepit at a Rust conference. Grepit was founded in 2014 as a spin-off from Luleå University of Technology. They are the authors of cortex-m-rtic (https://crates.io/crates/cortex-m-rtic), a concurrency framework for building real-time systems.

With it, you can achieve real-time behaviour in a system, but only that. It does not provide any higher-level abstractions or services that many other RTOSes would typically give you; you get those instead from the many open source components that are available. So yes, we teamed up with Grepit to get us up and running, and to develop drivers and push them upstream to the projects. Some implementations were also lacking and had to be developed from scratch, but that has not hindered us much. There are quite a lot of neat tools out there that you can use.

JF: Now you have experience from the C++ world. What benefits do you see straight off? I mean now, when you have something working in Rust?

JG: I would say the benefits I saw from the start were that you didn’t have to think about race conditions, memory corruption, and memory safety in general. You know, just writing correct and robust code from the start. So that was basically my first impression, but now I have also come to realize that there are many other aspects; you get just as big benefits from the side effects of that first one. But first let’s back up and briefly describe how memory safety is achieved in Rust, which is quite unique. It is based on static analysis of the lifetime of data within the program, making sure that any references to that data never outlive the data itself. You also never allow more than one mutable reference at any given time; alternatively, you can have multiple read-only references to the data, but you may never mix the two. By enforcing this statically at compile time, you get memory safety for free, because the compiler knows when each lifetime ends and injects the clean-up code at that location.
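The borrowing rules described above fit in a few lines of code. This is a minimal sketch, not Volvo code; the commented-out line shows what the compiler would reject.

```rust
// Sketch of the borrow rules: any number of shared references,
// or exactly one mutable reference, but never both at once.
fn borrow_demo() -> Vec<i32> {
    let mut data = vec![1, 2, 3]; // `data` owns the heap allocation

    let a = &data; // shared (read-only) borrow...
    let b = &data; // ...any number may coexist
    let _ = a.len() + b.len(); // last use: the borrows `a` and `b` end here

    let m = &mut data; // exclusive mutable borrow; no shared refs may overlap it
    m.push(4);
    // A line like `let c = &data;` here, while `m` is still live, would be
    // rejected at compile time (error E0502: cannot borrow as immutable
    // because it is also borrowed as mutable).

    data // ownership moves to the caller
} // had `data` not been moved out, the compiler would insert the drop (free) here

fn main() {
    println!("{:?}", borrow_demo()); // [1, 2, 3, 4]
}
```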

JF: Unless you use the unsafe keyword?

JG: Well, the unsafe parts give you some extra flexibility to shoot yourself in the foot. But still, it’s not like all bets are off. A lot of invariants are still upheld, even in unsafe code; for example, the lifetime rules still apply. The difference is that you are allowed to use raw pointers, which basically erase all lifetime information, or to bypass things the compiler would otherwise not allow you to do. This also includes things the compiler can’t reason about on its own and must instead leave to the human, and that is always done in a clearly marked unsafe block, so that you can audit it specifically.
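A small illustration of that point, written for this article: the raw-pointer dereference must sit in a marked `unsafe` block, while the surrounding references keep all their usual guarantees.

```rust
// Reads element `i` of a slice through a raw pointer. Raw pointers carry
// no lifetime or bounds information, so dereferencing one is only allowed
// inside an explicitly marked `unsafe` block.
fn read_raw(v: &[i32], i: usize) -> i32 {
    assert!(i < v.len()); // the invariant the unsafe block relies on
    let p = v.as_ptr(); // creating the raw pointer is safe; using it is not
    // SAFETY: `p` points into a live slice and `i` is in bounds (checked above).
    unsafe { *p.add(i) }
}

fn main() {
    let v = [10, 20, 30];
    println!("{}", read_raw(&v, 1)); // 20
}
```

The `// SAFETY:` comment convention makes the human-checked invariant explicit at exactly the spot an auditor would look.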

But it’s not like, as I think many assume, that just because it’s unsafe you’re back to C. That’s not the case. You still have a lot of safety measures, even though they are more relaxed compared to the safe subset. The nice thing about having this memory model, the lifetime and ownership model, enforced by the compiler is that everyone is on the same page, which in turn means that you can import and use third-party components in a much easier and more straightforward way. Since Rust comes with a built-in toolchain that takes care of building and that fetches and resolves dependencies, it is also much easier and safer to add new dependencies.

You no longer have to check:

Does it build?

Is it my build system that makes it fail?

You do not have to make changes to your build system to make it build and link. And even if you get it that far:

does the library have the same assumptions about memory and ownership as I do?
Who’s going to free this memory?
If I must create a buffer, who’s going to delete it, all that stuff!

By not having to worry about that, you can feel much more at ease when you use third party components.

JF: So there is a rule for how that is done?

JG: Yes.

JF: How about packages? Do you just take anything, or do you have reasoning for what you use?

JG: We try to keep our dependencies to a minimum, because we are creating products that will live for a long time. So all the dependencies we pull in are ones we deem maintainable by ourselves, in case we need to make bug fixes or whatever. There is also a plugin for Cargo, cargo-audit, that audits your dependency tree against a vulnerability database and reports any vulnerabilities or other issues that may turn up during the lifetime of the product.

One other benefit that I did not realize in the beginning is that onboarding new people is much easier, because a new person is free to play with the code base, to try to improve it, change it or refactor it, and the code is not going to compile until all the invariants are upheld again. This means that you can refactor without any fear, and that new people can start writing code without being minutely reviewed for fear of breaking the many unwritten invariants that only a few people know about. And I’m quite confident that this will also result in fewer warranty issues over time, because you get higher quality up front.

JF: Yes, you do not need to run so many tools on the product to know if it’s safe or not. We run a whole array of tools on C and C++ code to try to find these hard-to-find errors.

JG: Yes, you do not need that to the same extent. There are some runtime characteristics and things like that that you may still need to check. It also depends on how formally verified you want the code to be: do you want to statically ensure that these things will never happen?

There are some runtime behaviors that you always will need to have some other tool to help you with.

JF: Are there tools like this available?

JG: They are becoming more and more available. I’m not sure there are any out of the box that can do everything we need, but we are experimenting with a few. For example Miri, which basically interprets your code in a virtual machine to find any unsoundness in your code base.

JF: Is there not also another tool made by some university people?

JG: Yes, there is for example KLEE, a dynamic symbolic execution engine built on top of the LLVM compiler infrastructure. It has been adapted towards Rust. This tool can give you definitive answers on whether something can panic in a certain situation or not. We haven’t started using it ourselves, but it’s something we want to explore. There is also the Kani verifier that looks promising.

JF: How about profilers, does that come with the language or?

JG: Unfortunately not on embedded targets, or at least not yet. In a recent release of Rust, there’s an instrumented code-coverage flag that you can give to the compiler, so that you get extra instructions; this will afterwards show you which code paths you have actually executed. We need to do some work to apply this on an embedded target. I know it works on desktop, because there it can generate those files on the fly, but on an embedded target you don’t have file systems or files, so you need to take care of that yourself by writing to some internal buffer. For profiling you can use the standard desktop tools, but I am not sure how well that translates to the target. When we are testing, we try to isolate all hardware-independent logic into its own crates, because then we can run it on the host and use the built-in test support that the language provides. We can run the regular test infrastructure on those crates, but when we do integration testing on actual target hardware, we use integration test tools.

JF: So you can rely a lot on x86?

JG: Yes, we build the code for the host and test it there. And that is part of the strength of Rust: cross-platform support is fairly seamless, just a part of the toolchain, which is a completely different experience compared to C or C++.

JF: How about testing then?
I mean, are there any test frameworks supplied with Rust?

JG: There is a built-in unit test framework. Any function in the code can be run as a test just by annotating it with a specific attribute. You can intersperse tests within regular code, and when you build for tests, those are run. The threshold to do unit testing is almost nonexistent; it’s built in, which is quite amazing. You can easily write benchmark tests to see how your function improves. The same test framework can also be used to write integration tests, which are easy to run on your desktop, but when you’re doing it on target it is not as seamless. There is this new Rust project called probe-rs that allows us to communicate with the target hardware in a fairly seamless way. It offers a GDB type of interface as a library, so you can write your test application for the target: on the host it just loads probe-rs as a library, and then you can interact, send GDB-like commands, or do low-level hardware manipulation over the wire as part of the test.
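In practice, the built-in framework looks like this; the helper function is an illustrative example invented here, and `cargo test` finds and runs everything marked `#[test]`.

```rust
/// Saturating percentage helper (illustrative, not production code).
pub fn percent_of(value: u32, pct: u32) -> u32 {
    value.saturating_mul(pct) / 100
}

fn main() {
    println!("{}", percent_of(200, 50)); // 100
}

// Tests live right next to the code. `cargo test` builds and runs them;
// a normal `cargo build` compiles the whole module out via `#[cfg(test)]`.
#[cfg(test)]
mod tests {
    use super::percent_of;

    #[test]
    fn halves() {
        assert_eq!(percent_of(200, 50), 100);
    }

    #[test]
    fn saturates_instead_of_overflowing() {
        // u32::MAX * 300 saturates to u32::MAX before the division.
        assert_eq!(percent_of(u32::MAX, 300), u32::MAX / 100);
    }
}
```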

JF: What do you foresee for the future?

JG: We have quite ambitious plans going forward.

We want to expand Rust here at Volvo Cars

to enable it on more nodes. To do that, we need compiler support for certain hardware targets and OS support for others. There is no point in replacing already developed and well-tested code, but code developed from scratch should definitely be written in Rust, if at all feasible. That is not to say that Rust is a panacea: it still has some rough edges, and it requires trade-offs that may not always be the best course of action. But overall, I think Rust has huge potential to let us produce higher-quality code up front at a lower cost, which in turn would reduce our warranty costs, so it’s a win-win for the bottom line.

JF: But Rust could co-exist with a code base based on C?

JG: It can co-exist at almost arbitrary granularity, at module level or at function level, depending on what you’re doing. You could, for instance, rewrite parts that need cyber security, parts that are vulnerable. There is zero overhead between C and Rust: C can call into Rust and vice versa. Even for C++ there are ways, although you have to go through the C interface; there are some nice crates that can generate the extra boilerplate so that C++ and Rust can communicate seamlessly.
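The C interop described above can be sketched in a few lines: a Rust function exported with the C ABI, callable from C with no runtime glue. The function name is illustrative.

```rust
// `extern "C"` gives the function the platform's C calling convention;
// `#[no_mangle]` keeps the symbol name predictable for the linker.
// A C caller would simply declare it:
//     int32_t add_i32(int32_t a, int32_t b);
#[no_mangle]
pub extern "C" fn add_i32(a: i32, b: i32) -> i32 {
    // Wrapping add: C-style two's-complement behaviour on overflow,
    // instead of Rust's debug-mode overflow panic.
    a.wrapping_add(b)
}

// The reverse direction uses an `extern "C"` block to declare C symbols,
// which are then invoked from an `unsafe` context on the Rust side.

fn main() {
    // From Rust it is an ordinary, zero-overhead call.
    println!("{}", add_i32(2, 3)); // 5
}
```

For C++, crates such as cxx or bindgen generate the boilerplate around this same C-ABI boundary.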

There is also a lot of uptake in the safety critical industry, and we try to support these efforts as much as possible.

For example, there’s the Ferrocene project, which aims to ASIL-D certify the compiler. Ferrocene will be an ISO 26262 qualified version of the existing open-source compiler, rustc.

AUTOSAR and SAE have also started Rust working groups.

