What Is Lidar-Camera Calibration?
Lidar-camera calibration establishes correspondences between 3-D lidar points and 2-D camera data so that you can fuse the outputs of the two sensors.
Lidar sensors and cameras are widely used together for 3-D scene reconstruction in applications such as autonomous driving, robotics, and navigation. While a lidar sensor captures the 3-D structural information of an environment, a camera captures the color, texture, and appearance information. The lidar sensor and camera each capture data with respect to their own coordinate system.
Lidar-camera calibration converts the data from a lidar sensor and a camera into a common coordinate system. This enables you to fuse the data from both sensors and accurately identify objects in a scene. This figure shows the fused data.

Lidar-camera calibration consists of intrinsic calibration and extrinsic calibration.
Intrinsic calibration — Estimate the internal parameters of the lidar sensor and camera.

Manufacturers calibrate the intrinsic parameters of their lidar sensors in advance.
You can use the estimateCameraParameters function to estimate the intrinsic parameters of the camera, such as focal length, lens distortion, and skew.

You can also interactively estimate the camera parameters by using the Camera Calibrator app.
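This is a minimal sketch of estimating camera intrinsics from checkerboard images; the file names and square size are placeholders, and the exact workflow you use may differ.

    % Detect checkerboard corners in the calibration images.
    % (File names are placeholders; substitute your own images.)
    imageFileNames = {'calib01.png', 'calib02.png', 'calib03.png'};
    [imagePoints, boardSize] = detectCheckerboardPoints(imageFileNames);

    % Generate the corresponding world coordinates of the checkerboard corners.
    squareSize = 25; % square size, in millimeters (placeholder value)
    worldPoints = generateCheckerboardPoints(boardSize, squareSize);

    % Estimate the camera parameters, including focal length, distortion, and skew.
    imageSize = size(imread(imageFileNames{1}));
    params = estimateCameraParameters(imagePoints, worldPoints, ...
        'ImageSize', imageSize(1:2));
    intrinsics = params.Intrinsics; % cameraIntrinsics object, used in later steps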
Extrinsic calibration — Estimate the external parameters of the lidar sensor and camera, such as their location and orientation, to establish the relative rotation and translation between the sensors.
Extrinsic Calibration of Lidar and Camera
The extrinsic calibration of a lidar sensor and camera estimates a rigid transformation between them that establishes a geometric relationship between their coordinate systems. This process uses standard calibration objects, such as planar boards with checkerboard patterns.
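Specifically, the calibration estimates a rotation matrix R and a translation vector t such that a point p_lidar in the lidar coordinate system maps into the camera coordinate system as p_camera = R * p_lidar + t.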
This diagram shows the extrinsic calibration process for a lidar sensor and camera using a checkerboard.
The programmatic workflow for extrinsic calibration consists of these steps. Alternatively, you can use the Lidar Camera Calibrator app to perform lidar-camera calibration interactively.
Extract the 3-D information of the checkerboard from both the camera and the lidar sensor.
To extract the 3-D checkerboard corners, in world coordinates, from the camera data, use the estimateCheckerboardCorners3d function.

To extract the checkerboard plane from the lidar point cloud data, use the detectRectangularPlanePoints function.

Use the checkerboard corners and planes to estimate the rigid transformation matrix, which consists of the rotation R and translation t. You can estimate the rigid transformation by using the estimateLidarCameraTransform function, which returns the transformation as a rigidtform3d object. The sketch after these steps illustrates this workflow.
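This is a minimal sketch of the programmatic workflow, assuming intrinsics is the cameraIntrinsics object from the intrinsic calibration step and that the file names and square size are placeholders. Exact signatures can vary between releases; for example, some releases specify the camera intrinsics as a name-value argument instead.

    % Calibration data (placeholder file names; substitute your own).
    imageFileNames = {'calib01.png', 'calib02.png'};
    ptCloudFileNames = {'calib01.pcd', 'calib02.pcd'};
    squareSize = 81; % checkerboard square size, in millimeters (placeholder)

    % Extract the 3-D checkerboard corners, in world coordinates, from the images.
    [imageCorners3d, boardDims] = estimateCheckerboardCorners3d(imageFileNames, ...
        intrinsics, squareSize);

    % Detect the checkerboard plane in each lidar point cloud.
    lidarCheckerboardPlanes = detectRectangularPlanePoints(ptCloudFileNames, boardDims);

    % Estimate the rigid transformation between the sensors. The second output
    % contains the calibration error metrics.
    [tform, errors] = estimateLidarCameraTransform(lidarCheckerboardPlanes, ...
        imageCorners3d, intrinsics);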
You can use the transformation matrix to:
Evaluate the accuracy of your calibration by calculating the calibration error. You can do so either programmatically, by using the error metrics that the estimateLidarCameraTransform function returns, or interactively, by using the Lidar Camera Calibrator app.

Project lidar points onto an image by using the projectLidarPointsOnImage function, as shown in this figure.

Fuse the lidar and camera outputs by using the fuseCameraToLidar function.

Estimate 3-D bounding boxes in a point cloud based on the 2-D bounding boxes in the corresponding image, by using the bboxCameraToLidar function. The sketch after this list illustrates these tasks.
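This is a minimal sketch of these tasks, assuming tform is the lidar-to-camera transform from the calibration step, and that im, ptCloud, and bboxesCamera are a camera image, a lidar point cloud, and 2-D bounding boxes detected in that image. The fusion and bounding box functions expect a camera-to-lidar transform, so this sketch inverts tform; check the reference page of each function for the direction it expects in your release.

    % Project the lidar points onto the image plane.
    imagePoints = projectLidarPointsOnImage(ptCloud, intrinsics, tform);

    % fuseCameraToLidar and bboxCameraToLidar take a camera-to-lidar transform.
    camToLidar = invert(tform);

    % Fuse the camera data with the point cloud, for example to colorize points.
    ptCloudFused = fuseCameraToLidar(im, ptCloud, intrinsics, camToLidar);

    % Estimate 3-D bounding boxes in the point cloud from the 2-D image boxes.
    bboxesLidar = bboxCameraToLidar(bboxesCamera, ptCloud, intrinsics, camToLidar);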