2.2.1.5. LiDAR Camera Calibration
The current sensor fusion relies on the extrinsic calibration between the camera and the LiDAR, because the LiDAR points have to be transformed into the camera frame and projected onto the image plane.
The intrinsic camera matrix, which is also needed, is computed by the ZED camera itself and published as a ROS message.
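As a minimal sketch (not the package's actual code), this projection could look like the following in Python; the function name and argument layout are illustrative, assuming the intrinsics K and extrinsics R, t are already known:

import numpy as np

def project_lidar_to_image(points_lidar, K, R, t):
    # points_lidar: Nx3 array in the LiDAR frame
    # K: 3x3 intrinsic matrix (from /zed2i/zed_node/left/camera_info)
    # R (3x3), t (3,): extrinsics mapping LiDAR coordinates to the camera frame
    points_cam = points_lidar @ R.T + t            # LiDAR frame -> camera frame
    points_cam = points_cam[points_cam[:, 2] > 0]  # keep points in front of the camera
    pixels_h = points_cam @ K.T                    # perspective projection
    return pixels_h[:, :2] / pixels_h[:, 2:3]      # homogeneous -> (u, v) pixels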
2.2.1.5.1. How to calibrate
Take the calibration board and the cones and place them so that they are visible to both the camera and the LiDAR (max. 5 m for the board).
Record a rosbag of the stationary vehicle and the calibration board. Change the board's position multiple times and record every position for at least 5 seconds. Make sure both the camera and the LiDAR are publishing (typically the case after 15 seconds).
- The following topics need to be recorded:
/ouster/points (sensor_msgs/PointCloud2)
/zed2i/zed_node/left/image_rect_color (sensor_msgs/Image)
/zed2i/zed_node/left/camera_info (sensor_msgs/CameraInfo)
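For example, these topics can be recorded with rosbag record (the output file name is arbitrary):
rosbag record -O calibration.bag /ouster/points /zed2i/zed_node/left/image_rect_color /zed2i/zed_node/left/camera_info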
Run the calibration script as described below.
2.2.1.5.2. Recommended markers for calibration
- Calibration board:
Outer corners of the board
Intersections of the black and white tiles
- Cones:
Lower left and right points
Tip of cone
2.2.1.5.3. Play ROS Bag File
This launch file only plays the recorded rosbag file.
roslaunch local_mapping play_rosbag_lidar_camera_calibration.launch rosbag:=/path/to/file.bag
2.2.1.5.4. Calibrate LiDAR-Camera Extrinsic
Note
Make sure to have the rosbag playing in another terminal before running the calibration script.
This script performs the calibration using a matplotlib GUI to pick point correspondences in the camera and the LiDAR frames.
Edit the script's global variables to your needs before calibrating.
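The exact variable names depend on the script version; as a hypothetical example, such globals typically cover the input topics recorded above:

# Hypothetical names -- check calibrate_camera_lidar.py for the actual globals
LIDAR_TOPIC = '/ouster/points'
IMAGE_TOPIC = '/zed2i/zed_node/left/image_rect_color'
CAMERA_INFO_TOPIC = '/zed2i/zed_node/left/camera_info'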
roslaunch local_mapping play_rosbag_lidar_camera_calibration.launch rosbag:=/path/to/file.bag
rosrun local_mapping calibrate_camera_lidar.py
Press [ENTER] to launch the GUIs and pick the corresponding points of the checkerboard in both the camera and the LiDAR frames.
OpenCV's PnP RANSAC solver, followed by Levenberg-Marquardt (LM) refinement, is used to find the rotation and translation transforms between the camera and the LiDAR.
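A minimal sketch of this estimation step (not the script's exact code), assuming the picked 2D/3D correspondences and the intrinsic matrix K are already available:

import cv2
import numpy as np

def estimate_extrinsics(points_3d, points_2d, K, dist=None):
    # points_3d: Nx3 picked LiDAR points, points_2d: Nx2 picked image points
    if dist is None:
        dist = np.zeros((5, 1))  # image_rect_color is rectified, so no distortion
    points_3d = np.ascontiguousarray(points_3d, dtype=np.float64)
    points_2d = np.ascontiguousarray(points_2d, dtype=np.float64)
    # Robust initial pose estimate with RANSAC
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(points_3d, points_2d, K, dist)
    if not ok:
        raise RuntimeError('PnP RANSAC failed')
    # Levenberg-Marquardt refinement on the inlier set
    rvec, tvec = cv2.solvePnPRefineLM(points_3d[inliers[:, 0]],
                                      points_2d[inliers[:, 0]],
                                      K, dist, rvec, tvec)
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec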
Note
The point files are appended to, and the extrinsic estimates are recalculated and refined continuously using the RANSAC approach.
The calibrated extrinsics are saved as follows:
- RT(4,4) matrix in the local mapping matrices
'R' : rotation matrix
'T' : translation offsets (x, y, z in m)
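Assembling the stored 'R' and 'T' into the homogeneous RT(4,4) matrix is straightforward; a small sketch:

import numpy as np

def compose_rt(R, T):
    # R: 3x3 rotation matrix, T: (3,) translation offsets in metres
    RT = np.eye(4)
    RT[:3, :3] = R
    RT[:3, 3] = T
    return RT  # 4x4 homogeneous camera-LiDAR transform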