2.1.1.2. Set Up Camera Perception

Warning

Dependency on local mapping

For performance optimisation, the camera perception and local mapping modules depend on each other's configuration during sensor fusion. As a result, even if sensor fusion is not active, the local mapping configuration must be loaded into the appropriate namespace for the camera perception to function.
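
For example, if the local mapping parameters are stored in a YAML file, they can be loaded manually with rosparam. The file path and target namespace below are illustrative placeholders; substitute the ones used by your local mapping setup:

# hypothetical config path and namespace; adjust to your local mapping setup
rosparam load /workspace/as_ros/src/local_mapping/config/local_mapping.yaml /local_mapping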

2.1.1.2.1. Prerequisites

Ensure the camera_perception package is built and the setup file is sourced.

catkin build camera_perception
source devel/setup.bash
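
As an optional check that the package is now on the ROS package path, rospack should resolve it:

rospack find camera_perception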

2.1.1.2.2. YOLOv8-seg Model with TensorRT

Exporting a trained model to a TensorRT engine takes two steps:

  1. Export the ONNX model from a .pt file using the script export-seg.py

  2. Convert the ONNX model to a TensorRT engine using trtexec

2.1.1.2.2.1. Export ONNX Model

All trained models can be found in this folder.

  1. Download the model into an accessible directory inside the Docker container, for example: /workspace/as_ros/src/camera_perception/engine

  2. Export the ONNX model from your .pt file:

cd /workspace/as_ros/src/camera_perception/scripts
python3 export-seg.py --weights ../engine/YOUR_MODEL.pt --opset 14 --input-shape BATCH CHANNELS HEIGHT WIDTH --device cuda:0

Example:

python3 export-seg.py --weights ../engine/uni_4_V100_seg_n_34.pt --opset 14 --input-shape 1 3 608 1280 --device cuda:0

This produces an ONNX model with the same base name as the input weights. The ONNX model does not contain postprocessing.

  • --weights : The PyTorch model you trained.

  • --opset : ONNX opset version, default is 11.

  • --sim : (Optional) Whether to simplify your ONNX model.

  • --input-shape : Input shape for your model, given as BATCH CHANNELS HEIGHT WIDTH.

  • --device : The CUDA device you use to export the engine.
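
As an optional sanity check before conversion, the exported file can be validated with the onnx Python package (assuming it is available in the container):

# verifies that the exported ONNX graph is structurally valid
python3 -c "import onnx; onnx.checker.check_model(onnx.load('../engine/YOUR_MODEL.onnx'))"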

2.1.1.2.2.2. Export Engine with trtexec

Use NVIDIA’s trtexec tool to convert the ONNX model to a TensorRT engine:

/usr/src/tensorrt/bin/trtexec \
--onnx=YOUR_ONNX_MODEL.onnx \
--saveEngine=YOUR_ENGINE.engine \
--fp16

  • --onnx : The ONNX model you exported.

  • --saveEngine : The name of the engine you want to save.

  • --fp16 : (Optional) Build the engine with FP16 precision instead of the default FP32.

Example:

cd /workspace/as_ros/src/camera_perception/engine
/usr/src/tensorrt/bin/trtexec \
--onnx=uni_4_V100_seg_n_34.onnx \
--saveEngine=uni_4_V100_seg_n_34.engine \
--fp16 \
--workspace=16384
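
To verify that the engine was built correctly and deserializes on the target GPU, trtexec can also load it back and run a short benchmark:

/usr/src/tensorrt/bin/trtexec --loadEngine=uni_4_V100_seg_n_34.engine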

2.1.1.2.3. Running the Node

roslaunch camera_perception camera_perception.launch

You can set the following argument:

  • send_debug_informations (yes/no): If “yes,” the node outputs debug information to /rosout. Default: “no.”
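
For example, to start the node with debug output enabled:

roslaunch camera_perception camera_perception.launch send_debug_informations:=yes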

Other parameters can be adjusted in perception.yaml in the config folder. When changing the camera resolution in perception.yaml, also set the matching path for the intrinsic matrix.
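
Once the node is running, the parameter values actually loaded onto the parameter server can be inspected with rosparam; the namespace below assumes the parameters live under /camera_perception:

# lists all parameters in the (assumed) camera perception namespace
rosparam list /camera_perception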

2.1.1.2.4. Setting Up the Camera

The current camera configuration uses a resolution of 720p at a framerate of 60 fps. A fixed offset of 35 ms is subtracted from each image timestamp to compensate for driver delay (based on the ZED SDK documentation).

2.1.1.2.5. Reference

The inference implementation is partly based on YOLOv8-TensorRT and YOLACT.