
LiDAR Basics: The Coordinate System

source link: https://www.tuicool.com/articles/hit/rENN7vA

It’s critical that we familiarize ourselves with the LiDAR coordinate system in order to process the point clouds generated by a sensor mounted on an autonomous car. In this post, we will look at the coordinate system used by LiDARs.

LiDAR — Light Detection and Ranging — is used to find the precise distance of objects relative to the sensor.

[Figure: A LiDAR calculates the distance to an object using time of flight.]

When a laser pulse is emitted, its firing time and direction are registered. The pulse travels through the air until it hits an obstacle, which reflects some of its energy back. The sensor registers the time of acquisition and the power received. After each scan, the sensor returns the spherical coordinates of the obstacle, computed from the time of flight, along with the received power (as reflectance).
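The range part of this is a simple calculation: halve the round-trip travel time and multiply by the speed of light. A minimal sketch (the function name and constant are illustrative, not from any LiDAR SDK):

```python
# Speed of light; the value in air is close enough to the vacuum value here.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_to_distance(round_trip_time_s: float) -> float:
    """One-way distance (metres) from a round-trip time of flight.

    The pulse travels to the obstacle and back, so we halve the trip.
    """
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A 2 microsecond round trip corresponds to roughly 300 m.
print(tof_to_distance(2e-6))
```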

As the LiDAR sensor returns readings in spherical coordinates, let’s brush up on the spherical coordinate system.

Spherical Coordinate System

In a spherical coordinate system, a point is defined by a distance and two angles. To represent the two angles, we use the azimuth (θ) and polar angle (ϕ) convention. Thus a point is defined by (r, θ, ϕ).

[Figure: The spherical coordinate system — radius r, azimuth θ, polar angle ϕ.]

As you can see from the above diagram, the azimuth angle θ is measured in the X-Y plane from the X-axis, and the polar angle ϕ is measured in the Z-Y plane from the Z-axis.

From the above diagram, we get the following equations for converting Cartesian coordinates to spherical coordinates:

r = √(x² + y² + z²)
θ = arctan(y / x)
ϕ = arccos(z / r)

We can derive Cartesian coordinates from spherical coordinates using the equations below:

x = r sin ϕ cos θ
y = r sin ϕ sin θ
z = r cos ϕ
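These two conversions can be sketched in a few lines of Python. This is a minimal illustration (function names are my own), assuming all angles are in radians:

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """(r, azimuth θ from the X-axis, polar angle ϕ from the Z-axis) -> (x, y, z)."""
    x = r * math.sin(phi) * math.cos(theta)
    y = r * math.sin(phi) * math.sin(theta)
    z = r * math.cos(phi)
    return x, y, z

def cartesian_to_spherical(x, y, z):
    """(x, y, z) -> (r, θ, ϕ), angles in radians."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.atan2(y, x)  # atan2 handles all four quadrants, unlike atan(y/x)
    phi = math.acos(z / r)    # undefined at the origin (r == 0)
    return r, theta, phi

# Round trip: the point (r=2, θ=0.5, ϕ=1.0) survives the conversion.
r, theta, phi = cartesian_to_spherical(*spherical_to_cartesian(2.0, 0.5, 1.0))
```

Note that the code uses `atan2` rather than a plain `arctan(y / x)`, so points in all four quadrants map to the correct azimuth.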

LiDAR coordinate system

LiDAR returns readings in spherical coordinates. As you can see from the diagram below, the convention differs slightly from the one discussed above.

In the sensor coordinate system, a point is defined by (radius r, elevation ω, azimuth α). The elevation angle ω is measured in the Z-Y plane from the Y-axis. The azimuth angle α is measured in the X-Y plane from the Y-axis.

The azimuth angle depends on the position at which a laser is fired and is registered at the time of firing. The elevation angle of each laser emitter is fixed in the sensor. The radius is calculated from the time taken by the beam to return.

[Figure: The LiDAR sensor coordinate system — radius r, elevation ω, azimuth α.]

Cartesian coordinates can be derived from the following equations.

x = r cos ω sin α
y = r cos ω cos α
z = r sin ω

The Cartesian coordinate system is easier to manipulate, so most of the time we need to convert the sensor’s spherical coordinates to Cartesian coordinates using the above equations.

In practice, LiDAR drivers usually perform this conversion for us. For example, Velodyne provides a ROS package, velodyne_pointcloud, for converting raw sensor readings to Cartesian point clouds.

[Figure: A car coordinate system with a LiDAR mounted on it.]

The diagram above shows the Cartesian coordinate system of a sensor mounted on a car.

In this post, we learned how a LiDAR calculates distance and the coordinate systems involved.

References for further reading:

  • Oliver Cameron’s blog post on how the LiDAR sensor is changing self-driving cars is a good read.
  • Read more about the spherical coordinate system on Wikipedia.
  • The Velodyne VLP-16 manual is a good starting point to learn more about how LiDAR works.
