
Building a Simple Self-Driving Car Simulator | Pointers Gone Wild

source link: https://pointersgonewild.com/2018/05/21/building-a-simple-self-driving-car-simulator/

As part of my new job, I’ve been working with Professor Liam Paull and his students on building a simulator for Duckietown. This is a university course being taught at ETH Zurich, the University of Montreal, TTIC/UChicago and other institutions, where students learn about self-driving cars and robotics by building their own small-scale model which can drive around in the miniature Duckietown universe, complete with turns, intersections, traffic signs, and other moving vehicles.

The course has, so far, focused on traditional robotics methods, using computer vision and PID controllers to drive the robots. However, students and professors are becoming increasingly interested in deep learning (and reinforcement learning in particular). Since reinforcement learning needs a lot of data, it's much more practical to do it in simulation than with physical robots. This has been one of the main motivations for building a simulator.
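The classical pipeline mentioned above can be sketched as a PID loop acting on the robot's lane-position error. A minimal sketch follows; the gains and values are illustrative, not taken from the actual Duckietown course code:

```python
class PID:
    """Minimal PID controller: output = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        # Accumulate the integral term and estimate the derivative
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = PID(kp=1.0, ki=0.1, kd=0.05)
# Lane offset (in meters) in, steering command out, once per control tick
steer = pid.update(error=0.3, dt=0.1)
```

In the physical robots, the error signal would come from the computer-vision lane detector; in a simulator the same loop can be driven by the rendered camera frames.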

The simulator I’m building works as an OpenAI Gym environment, which makes it easy to use for reinforcement learning. It’s written in pure Python and uses OpenGL (via pyglet) to produce graphics. I chose Python because it is the language most widely used in the deep learning community, and I wanted students to be able to modify the simulator easily. The simulator performs many forms of domain randomization: it randomly varies colors, the camera angle and field of view, and so on. This feature is meant to help neural networks learn to deal with variability, so that trained networks will hopefully work not just in simulation but in the real world as well.
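The Gym interface plus per-episode domain randomization can be sketched as follows. The class, attribute names, and randomization ranges here are illustrative stand-ins, not the simulator's actual API:

```python
import random

class DuckieSimSketch:
    """Sketch of a Gym-style environment that resamples visual
    parameters on every reset (domain randomization)."""

    def reset(self):
        # Resample colors, camera field of view, and camera tilt each episode
        self.light_color = [random.uniform(0.5, 1.0) for _ in range(3)]
        self.cam_fov_deg = random.uniform(51, 117)
        self.cam_angle = random.uniform(-0.2, 0.2)
        return self._render_obs()

    def step(self, action):
        steering, throttle = action
        obs = self._render_obs()
        reward, done, info = 0.0, False, {}
        return obs, reward, done, info

    def _render_obs(self):
        # Placeholder for the OpenGL camera render
        return [0.0]

env = DuckieSimSketch()
obs = env.reset()
obs, reward, done, info = env.step((0.0, 0.5))
```

Because the randomized parameters change only at `reset`, each episode presents one consistent world, while the training set as a whole spans the full range of appearances.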

Because Duckietown is a toy universe that is essentially two-dimensional, I’ve designed a YAML map format that can easily be hand-edited to create new custom environments. The format describes a set of tiles and where to place 3D models relative to those tiles. I’ve also made some simplifying assumptions with regard to physics (the agent essentially moves along a 2D plane).
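A tile-based map of the kind described above might look something like the fragment below. The key names and tile types here are hypothetical, meant only to convey the idea of a grid of tiles plus objects placed relative to them:

```yaml
# Hypothetical map fragment -- keys and tile names are illustrative
tiles:
- [curve_left, straight, curve_right]
- [straight,   grass,    straight]
- [curve_left, straight, curve_right]

objects:
- kind: duckie         # which 3D model to place
  tile: [1, 0]         # grid coordinates of the tile it sits on
  offset: [0.2, -0.1]  # position relative to that tile
  rotate: 90           # yaw in degrees
```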

While working on this simulator, I’m also experimenting with sim-to-real transfer, that is, getting policies trained in simulation to work with a real robot. This is a work in progress, but we are getting close to having it work reliably. The video below shows some promising results:

If you’re interested in playing with this, the source code is available on GitHub. Feedback and contributions are welcome. If you run into any issues, please report them, as I’d like to make the simulator as easy to use as possible.

