2.4.3.4. cli package

2.4.3.4.1. Submodules

2.4.3.4.2. cli.analyzer module

class cli.analyzer.RosbagAnalyzer(input_rosbag: str, time_delay: bool, actuator_delay: bool, imu_tilt: bool, parameter_dump: bool, calibration_matrices: bool, gggv_dump: bool, vehicle_name: str, start: float, duration: float | None, end: float | None)[source]

Bases: object

calculate_acc_rotation_object() None[source]

Calculate the rotation object for the accelerometer.

Uses the mean acceleration measured while the vehicle is standing still to estimate the gravitational acceleration vector.
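
A minimal sketch of the underlying idea, assuming a scipy-based implementation (the actual code may differ): the mean acceleration at standstill is aligned with the expected gravity vector to recover the sensor tilt.

   import numpy as np
   from scipy.spatial.transform import Rotation

   def rotation_from_standstill(acc_samples: np.ndarray) -> Rotation:
       # The mean acceleration over a standstill window approximates gravity
       # as seen by the (possibly tilted) accelerometer.
       measured_gravity = acc_samples.mean(axis=0)
       expected_gravity = np.array([0.0, 0.0, 9.81])
       # Rotation that best maps the measured gravity onto the expected one.
       # Note: rotation about the gravity axis itself is unconstrained.
       rotation, _ = Rotation.align_vectors(
           expected_gravity[np.newaxis, :], measured_gravity[np.newaxis, :]
       )
       return rotation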

finish()[source]
plot()[source]
plot_actuator_delay()[source]
plot_time_delay()[source]
process(topic: str, msg: Any, t: Time)[source]
run()[source]
cli.analyzer.rgetattr(obj, attr, *args)[source]
cli.analyzer.save_intrinsic_rotation(vehicle_name: str, intrinsic_rotation: ndarray) None[source]

Save the intrinsic rotation to the correct transforms YAML file.
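
A sketch of what such a save could look like, assuming a hypothetical per-vehicle transforms file layout (the real path and key names are not given in this documentation):

   import numpy as np
   import yaml

   def save_intrinsic_rotation_sketch(vehicle_name: str, intrinsic_rotation: np.ndarray) -> None:
       # Hypothetical file location; the real transforms YAML may live elsewhere.
       path = f"config/{vehicle_name}/transforms.yaml"
       with open(path) as f:
           transforms = yaml.safe_load(f) or {}
       # Store the rotation matrix as nested lists so it is YAML-serializable.
       transforms["intrinsic_rotation"] = intrinsic_rotation.tolist()
       with open(path, "w") as f:
           yaml.safe_dump(transforms, f)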

cli.analyzer.save_rotated_gggv_data(vehicle_name: str, ggg_data: list, intrinsic_rotation: ndarray, velocities: list) None[source]

2.4.3.4.3. cli.drone_visualization module

class cli.drone_visualization.DroneVisualization(input_rosbag: str, input_video: str, recalculate_transforms: bool, no_new_transforms: bool, track_height: float, track_width: float, recover_offset: bool, recover_initial_features: bool, killer: GracefulKiller, start: int, duration: int | None, end: int | None, output: str, save_recovery: bool, recover_initial_position: bool, recover_initial_vehicle_mask: bool, visualize_only_debug_information: bool)[source]

Bases: object

affine_transform_initial_features()[source]
calculate_map_to_world_transform()[source]
calculate_perspective_transform()[source]
calculate_transforms_from_video()[source]
static chain_affine_transforms(first_transform: ndarray[Any, dtype[ScalarType]], second_transform: ndarray[Any, dtype[ScalarType]])[source]
chain_incrementing_transforms()[source]
static chain_perspective_transforms(first_transform: ndarray[Any, dtype[ScalarType]], second_transform: ndarray[Any, dtype[ScalarType]])[source]
compute_eigenvalues_and_angle_of_covariance(covariance: ndarray[Any, dtype[ScalarType]])[source]
draw_rounded_rectangle(image: ndarray[Any, dtype[ScalarType]], origin: Tuple[int, int], size: Tuple[int, int], radius: int, color: Tuple[int, int, int], alpha: float)[source]
find_initial_features()[source]
find_initial_position()[source]
find_initial_vehicle_mask()[source]
find_input_video_start_index()[source]
find_offset()[source]
find_start_and_end()[source]
generate_output()[source]
get_value_from_rosbag_df(time: float, name: str)[source]
get_values_from_rosbag_df(name: str, start_time: float | None = None, end_time: float | None = None)[source]
import_rosbag()[source]
recover_information()[source]
recover_transforms_from_storage()[source]
static rotate_and_move(xy: ndarray[Any, dtype[ScalarType]], rotation: float = 0.0, offset_before_rotation: ndarray[Any, dtype[ScalarType]] = array([0, 0]), offset_after_rotation: ndarray[Any, dtype[ScalarType]] = array([0, 0])) ndarray[Any, dtype[ScalarType]][source]

Move and rotate the given coordinates.

First translates the coordinates by offset_before_rotation, then rotates them around the origin using a numpy-built rotation matrix, and finally translates them by offset_after_rotation (see the sketch after the parameter list).

Parameters:
  • xy (npt.NDArray) – Coordinates to move, rotate and move again.

  • rotation (float, optional) – Rotation angle to rotate the coordinates around the origin, by default 0.

  • offset_before_rotation (npt.NDArray, optional) – Offset to move the coordinates before rotating them, by default np.array([0, 0]).

  • offset_after_rotation (npt.NDArray, optional) – Offset to move the coordinates after rotating them, by default np.array([0, 0]).

Returns:

The translated, rotated, and re-translated coordinates.

Return type:

npt.NDArray
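
A minimal sketch of the documented transform order with a standard 2D rotation matrix (this mirrors the described behaviour, not necessarily the exact implementation):

   import numpy as np

   def rotate_and_move_sketch(xy: np.ndarray, rotation: float = 0.0,
                              offset_before_rotation: np.ndarray = np.array([0, 0]),
                              offset_after_rotation: np.ndarray = np.array([0, 0])) -> np.ndarray:
       # 2D rotation matrix for the given angle (radians).
       c, s = np.cos(rotation), np.sin(rotation)
       rotation_matrix = np.array([[c, -s], [s, c]])
       # Move, rotate around the origin, then move again.
       return (xy + offset_before_rotation) @ rotation_matrix.T + offset_after_rotation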

run()[source]
save_incrementing_transforms()[source]
save_recovery_information()[source]
transform_map_coordinated_to_world_coordinates(map_coordinates: ndarray[Any, dtype[ScalarType]], frame_i: int)[source]
visualize_acceleration(image: ndarray[Any, dtype[ScalarType]], frame_t: int, origin: Tuple[int, int], radius: int)[source]
visualize_brake(image: ndarray[Any, dtype[ScalarType]], frame_t: int, origin: Tuple[int, int], height: int)[source]
visualize_centerpoints(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int, add_to_legend: bool = False)[source]
visualize_driven_path(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int, add_to_legend: bool = False)[source]
visualize_feature(image: ndarray[Any, dtype[ScalarType]], frame_i: int)[source]
visualize_fov(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int, fov_angle: float, fov_distance: float, color: Tuple[int, int, int])[source]
visualize_fov_gate(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int, add_to_legend: bool = False)[source]
visualize_gps_measurement(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int, uncertainty: bool, add_to_legend: bool = False)[source]
visualize_heading_covariance(image: ndarray[Any, dtype[ScalarType]], frame_i: int, color: Tuple[int], pose: ndarray[Any, dtype[ScalarType]], uncertainty: float)[source]
visualize_image(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int)[source]
visualize_landmarks(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int, uncertainty: bool = False)[source]
visualize_legend(image: ndarray[Any, dtype[ScalarType]])[source]
visualize_overlay(image: ndarray[Any, dtype[ScalarType]], frame_t: int)[source]
visualize_planned_path(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int, add_to_legend: bool = False)[source]
visualize_position_covariance(image: ndarray[Any, dtype[ScalarType]], frame_i: int, color: Tuple[int], uncertainty: ndarray[Any, dtype[ScalarType]], center: ndarray[Any, dtype[ScalarType]])[source]
visualize_predicted_path(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int, add_to_legend: bool = False)[source]
visualize_speed(image: ndarray[Any, dtype[ScalarType]], frame_t: int, origin: Tuple[int, int])[source]
visualize_steering_wheel_angle(image: ndarray[Any, dtype[ScalarType]], frame_t: int, origin: Tuple[int, int], height: int)[source]
visualize_torque(image: ndarray[Any, dtype[ScalarType]], frame_t: int, origin: Tuple[int, int], height: int)[source]
visualize_transform_points(image: ndarray[Any, dtype[ScalarType]], frame_i: int)[source]
visualize_vehicle_position(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int, uncertainty: bool = False, add_to_legend: bool = False)[source]
visualize_will_have_driven_path(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int, add_to_legend: bool = False)[source]
class cli.drone_visualization.FeatureSelector(image: ndarray[Any, dtype[ScalarType]], killer: GracefulKiller, track_width: float, track_height: float, mode: FeatureSelectorMode)[source]

Bases: object

add_feature(event, x, y, flags, param)[source]
find_good_features_to_track_in_rectangle(point1, point2)[source]
plot_features()[source]
plot_order()[source]
show()[source]
transform_image_by_features()[source]
class cli.drone_visualization.FeatureSelectorMode(value)[source]

Bases: Enum

Enumeration of the FeatureSelector modes.

BOUNDARIES = 1
POSE_DEFINITION = 2
VEHICLE_MASK = 3
class cli.drone_visualization.GracefulKiller[source]

Bases: object

exit_gracefully(*args)[source]
kill_now = False
class cli.drone_visualization.OffsetFinder(video_path: str, input_rosbag_path: str, killer: GracefulKiller)[source]

Bases: object

calculate_offset()[source]
find_image_i(image_i: int, key: str)[source]
find_rosbag_start()[source]
find_video_start()[source]
select_rosbag_start_time(event: PickEvent)[source]
cli.drone_visualization.generate_progressbar(name: str, data_length: int, variable_widgets: List[Variable] = []) ProgressBar[source]

Generates a progressbar for the given number of data points, with more informative widgets than the standard one.

Parameters:
  • name (str) – Name shown alongside the progressbar.

  • data_length (int) – Number of data points to process.

  • variable_widgets (List[Variable], optional) – Additional Variable widgets to display, by default [].

Returns:

Progressbar configured for the given data length with the additional widgets.

Return type:

progressbar.ProgressBar
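
A hedged usage sketch with the progressbar2 library, consistent with the Variable widgets in the signature (the exact widget set assembled by generate_progressbar is not documented here):

   import progressbar

   # Extra widgets whose values are updated while iterating.
   widgets = [
       "import rosbag: ",  # role of the name parameter: a label prefix
       progressbar.Percentage(), " ",
       progressbar.Bar(), " ",
       progressbar.ETA(), " ",
       progressbar.Variable("topic"),
   ]
   with progressbar.ProgressBar(max_value=100, widgets=widgets) as pbar:
       for i in range(100):
           pbar.update(i, topic="/example/topic")  # process one data point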

2.4.3.4.4. cli.merge_bags module

class cli.merge_bags.RosbagMerger(main_input_rosbag: str, merge_input_rosbag: str, output_rosbag_suffix: str, topics: List[str])[source]

Bases: object

merge_filter(topic: str, datatype: str, md5sum: str, msg_def: str, header: dict)[source]
run()[source]

2.4.3.4.5. cli.migrator module

class cli.migrator.RosbagMigrator(input_rosbag: str, output_rosbag_suffix: str, uncompress_images: bool, compress_images: str | None, start: float, duration: float | None, end: float | None, delete_pipeline: Tuple[str] | None, delete_lidar_points: bool, delete_images: bool, delete_transforms: bool, delete_visualization: bool, migration_strategy: Tuple[str] | None, can_time_offset: float | None, gps: bool, fix_clock: bool, rename_vehicle: str | None, fixing_strategy: Tuple[str] | None, header_time_delta_topics: Tuple[str] | None, header_time_deltas: Tuple[int] | None)[source]

Bases: object

adjust_header_time(header: str, delta: int)[source]
compress_image(image: Image, topic: str) Tuple[CompressedImage, str][source]
compress_jpeg_image(image: Image, topic: str) Tuple[CompressedImage, str][source]
compress_png_image(image: Image, topic: str) Tuple[CompressedImage, str][source]
compress_qoi_image(image: Image, topic: str) Tuple[CompressedImage, str][source]
filter_ros_msg(topic: str, msg: Any, t: Time) bool[source]
filter_ros_msg_by_pipeline(topic: str, msg: Any, t: Time) bool[source]
fix_ros_msg(topic: str, msg: Any, t: Time)[source]
migrate_gps_message(old_topic: str, old_msg: Any, t: Time) Tuple[str, Any, Time][source]
migrate_ros_msg(old_topic: str, old_msg: Any, t: Time)[source]
migrate_v1_to_v2_ros_message(old_topic: str, old_msg: Any, t: Time) Tuple[bool, List[Tuple[str, Any, Time]]][source]
process_ros_msg(old_topic: str, old_msg: Any, t: Time) List[Tuple[str, Any, Time]][source]
run()[source]
uncompress_image(compressed_image: CompressedImage, topic: str) Tuple[Image, str][source]
uncompress_qoi_image(compressed_image: CompressedImage, topic: str) Tuple[Image, str][source]
cli.migrator.intersects(set1: set, set2: set) bool[source]

2.4.3.4.6. cli.ouster_telemetry module

class cli.ouster_telemetry.OusterTelemetry(input_current_ma: int, input_voltage_mv: int, internal_temperature_deg_c: int)[source]

Bases: object

Represents the telemetry data of the Ouster sensor.

input_current_ma: int
input_voltage_mv: int
internal_temperature_deg_c: int
class cli.ouster_telemetry.OusterTelemetryRecorder(host: str, port: int, interval: int, output: str)[source]

Bases: object

Records the telemetry data of the Ouster sensor. Use the method record_telemetry() to start recording.

host

IP address or hostname of the Ouster sensor.

port

Port of the Ouster sensor.

interval

Interval in seconds to record the telemetry data.

output

Path to the output file.

get_current_sensor_telemetry() OusterTelemetry[source]

Gets the telemetry data from the sensor.

prepare_telemetry_csv() None[source]

Prepares the CSV file for recording the telemetry data.

record_telemetry() None[source]

Records the telemetry data from the sensor continuously.
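
A usage sketch based on the documented constructor and methods (host, port, and output values are placeholders):

   from cli.ouster_telemetry import OusterTelemetryRecorder

   recorder = OusterTelemetryRecorder(
       host="192.0.2.42",              # placeholder sensor address
       port=7501,                      # placeholder command port
       interval=1,                     # one sample per second
       output="ouster_telemetry.csv",  # placeholder output path
   )
   recorder.record_telemetry()  # records continuously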

cli.ouster_telemetry.create_ouster_client(host: str, port: int) socket[source]

Creates a socket connection to the Ouster sensor.
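
What such a helper could look like with the standard library, as a sketch (the timeout value is an assumption):

   import socket

   def create_ouster_client_sketch(host: str, port: int) -> socket.socket:
       # Plain TCP connection to the sensor's command interface.
       return socket.create_connection((host, port), timeout=5.0)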

cli.ouster_telemetry.get_args() Namespace[source]

Gets the command line arguments.

2.4.3.4.7. cli.track_creator module

class cli.track_creator.FeatureSelector(image: ndarray[Any, dtype[ScalarType]], mode: FeatureSelectorMode, killer: GracefulKiller)[source]

Bases: object

add_feature(event, x, y, flags, param)[source]
delete_features_in_rectangle(position1: Tuple[int, int], position2: Tuple[int, int])[source]
plot_features()[source]
plot_order()[source]
recover_features(features: List[Tuple[int, int, int, int, int]])[source]
run()[source]
class cli.track_creator.FeatureSelectorMode(value)[source]

Bases: Enum

Enumeration of the FeatureSelector modes.

ACCELERATION_CENTERPOINTS = 5
BOUNDARIES = 1
CONES = 3
SKIDPAD_CENTERPOINTS = 4
VEHICLE_POSE = 2
class cli.track_creator.GracefulKiller[source]

Bases: object

exit_gracefully(*args)[source]
kill_now = False
class cli.track_creator.ImageSelector(input_video_path: str, killer: GracefulKiller)[source]

Bases: object

find_image_i(image_i: int, key: str)[source]
select_image() ndarray[Any, dtype[ScalarType]][source]
class cli.track_creator.TrackCreator(input_file_path: str, killer: GracefulKiller, plot: bool, test_day: str, track_layout: str, track_height: float, track_width: float, improve_world_cones: bool, centerpoints_width: float, centerpoints: bool, recover_centerpoints: bool, improve_centerpoints: bool, recover_map_origin: bool, recover_world_cones: bool, recover_track_boundaries: bool, manual_track: bool, mission: str)[source]

Bases: object

calculate_acceleration_world_centerpoints()[source]
calculate_centerpoints()[source]
calculate_map_coordinates_of_cones()[source]
calculate_perspective_transform()[source]
calculate_skidpad_world_centerpoints()[source]
calculate_world_to_map_transform()[source]
static chain_perspective_transforms(first_transform: ndarray[Any, dtype[ScalarType]], second_transform: ndarray[Any, dtype[ScalarType]])[source]
export_map()[source]
static generate_and_move_base_circle(radius: float, points_n: int, angle: float, inverse: bool, offset: ndarray[Any, dtype[ScalarType]]) ndarray[Any, dtype[ScalarType]][source]
static generate_circle_from_two_points(centerpoint: ndarray[Any, dtype[ScalarType]], circlepoint: ndarray[Any, dtype[ScalarType]], inverse: bool) ndarray[Any, dtype[ScalarType]][source]
plot_results()[source]
read_in_image()[source]
run()[source]
save_recovery()[source]
select_cones()[source]
select_map_origin()[source]
select_support_centerpoints()[source]
select_track_boundaries()[source]

2.4.3.4.8. cli.visualizer module

class cli.visualizer.Visualizer(input_rosbag: str, output_rosbag_suffix: str, use_header_time: bool, vehicle: str, test_day: str, track_layout: str, transforms: bool, n_stddev: float, recording: Tuple[str] | None, generate_detection_image: bool, gps: bool, start: float, duration: float | None, end: float | None, calibration_matrices_subdirectory: str, image_visualization: bool, local_motion_planning_color_scale: str)[source]

Bases: object

add_image_to_video_stream(image_msg: Image | CompressedImage, topic: str)[source]
add_transform_messages(transformation_handler: TransformationHandler, time: float)[source]
camera_offset = 0.59
static compute_eigenvalues_and_angle_of_covariance(cov: ndarray[Any, dtype[ScalarType]], n_std: float = 1.0) Tuple[ndarray[Any, dtype[ScalarType]], ndarray[Any, dtype[ScalarType]], float][source]

Compute the eigenvalues and the angle of the covariance matrix.

https://stackoverflow.com/questions/20126061/creating-a-confidence-ellipses-in-a-sccatterplot-using-matplotlib.

Parameters:
  • cov (npt.NDArray) – Covariance matrix.

  • n_std (float, optional) – Number of standard deviations to plot, by default 1.0.

Returns:

Width, height, and angle of the covariance ellipse.

Return type:

Tuple[npt.NDArray, npt.NDArray, float]
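
A minimal sketch of the covariance-ellipse computation referenced above, following the linked Stack Overflow approach (the method's exact return layout may differ):

   import numpy as np

   def ellipse_from_covariance(cov: np.ndarray, n_std: float = 1.0):
       eigenvalues, eigenvectors = np.linalg.eigh(cov)
       # Sort so the largest eigenvalue (major axis) comes first.
       order = eigenvalues.argsort()[::-1]
       eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]
       # Axis lengths scale with the standard deviations along the eigenvectors.
       width, height = 2.0 * n_std * np.sqrt(eigenvalues)
       # Orientation of the major axis, in radians.
       angle = np.arctan2(eigenvectors[1, 0], eigenvectors[0, 0])
       return width, height, angle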

static cone_list_to_array(cone_list: ConeList | List[ConePosition]) List[Tuple[float, float, int, float, bool]][source]
create_all_motion_planning_path_slices_markers(pbar: tqdm)[source]
create_bounding_boxes_entries(topic: str, bounding_boxes: BoundingBoxes, t: Time) List[Tuple[float, str, TrajectoryPathSlices]][source]
create_centerpoints_entries(topic: str, center_points: CenterPoints, t)[source]
create_centerpoints_strategy_markers(centerpoints: List[Tuple[float, float, float, float, int]], time) List[Marker][source]
create_centerpoints_width_marker(centerpoints: Tuple[float, float, float, float], time, namespace: str, color: ColorRGBA) List[Marker][source]
create_cone_annotation_marker(cone: Tuple[float, float, int, float], annotation: str, frame_id: str, namespace: str, cone_id: int, time) Marker[source]
create_cone_marker(cone: Tuple[float, float, int, float, bool], frame_id: str, namespace: str, cone_id: int, time, stretch: bool = False) Marker[source]
create_control_markers(pbar: tqdm)[source]
create_control_predicted_states_entries(topic: str, predicted_states: PredictedStates, t)[source]
create_detection_image(outbag: Bag, t: Time, topic: str, image_msg: Image | CompressedImage)[source]
create_gps_entries(topic: str, gps: GPSFix, t: Time) List[Tuple[float, str, GPSFix]][source]
create_gps_heading_entries(topic: str, gps: HEADING2, t: Time) List[Tuple[float, str, HEADING2]][source]
create_gps_markers(pbar: tqdm)[source]
create_ground_truth_marker(cone: Tuple[float, float, int]) Marker[source]
create_ground_truth_markers(pbar: tqdm)[source]
create_heading_uncertainity_marker(vehicle_pose: Tuple[float, float, float, float], time, heading_uncertainty: float, namespace: str, frame_id: str, marker_id: int, color: ColorRGBA, length: float) Marker[source]
create_image_visualization(outbag: Bag, t: Time, topic: str, image_msg: Image | CompressedImage)[source]
create_landmark_compatibility_markers(frame_id: str, namespace: str, time, color: ColorRGBA, individual_compatibility: ndarray[Any, dtype[ScalarType]], observed_landmarks: List[ConePosition], observable_landmarks: List[ConePosition]) List[Marker][source]
create_landmark_mapping_markers(frame_id: str, namespace: str, observed_landmarks: List[ConePosition], observable_landmarks: List[ConePosition], mapping, time) List[Marker][source]
create_landmark_position_and_uncertainty_markers(landmarks: List[ConePositionWithCovariance], base_namespace: str, frame_id: str, time) List[Marker][source]
create_local_mapping_entries(topic: str, cone_list: ConeList, t)[source]
create_local_mapping_markers(pbar: tqdm)[source]
create_local_motion_planning_path_slices_entries(topic: str, path_slices: TrajectoryPathSlices, t: Time) List[Tuple[float, str, TrajectoryPathSlices]][source]
create_map_alignment_arrow(frame_id: str, namespace: str, time: float, translation: Tuple[float, float], rotation_matrix: Tuple[float, float, float, float]) List[Marker][source]
create_map_alignment_pose(frame_id: str, namespace: str, time: Time, translation: Tuple[float, float], rotation_matrix: Tuple[float, float, float, float]) PoseStamped[source]
static create_mock_marker(frame_id: str, namespace: str, cone_id: int, time) Marker[source]
create_mock_markers(length: int, max_length: int, frame_id: str, namespace: str, time) List[Marker][source]
create_motion_planning_markers(pbar: tqdm)[source]
create_motion_planning_path_slices_markers(path_slices: TrajectoryPathSlices, time: float) List[Marker][source]
create_observable_landmarks_markers(frame_id: str, namespace: str, time, observable_landmarks: List[ConePosition]) List[Marker][source]
create_observed_landmarks_markers(frame_id: str, namespace: str, time, observed_landmarks: List[ConePosition], weights: List[float] | None = None, mapping: List[int] | None = None) List[Marker][source]
create_path_marker(path: List[Tuple[float, float]], frame_id: str, namespace: str, cone_id: int, time, color: ColorRGBA, previous_path: List[Point] | None = None) Marker[source]
create_path_planning_markers(pbar: tqdm)[source]
create_pose_arrow_marker(vehicle_pose: Tuple[float, float, float], time, color: ColorRGBA, namespace: str, marker_id: int, frame_id: str, length: float = 1.0) Marker[source]
create_position_uncertainty_marker(position: Tuple[float, float], covariance: ndarray[Any, dtype[ScalarType]], time, namespace: str, frame_id: str, marker_id: int, color: ColorRGBA, n_std: float = 1) Marker[source]
create_predicted_measurements_entries(topic: str, predicted_measurements: PredictedMeasurements, t)[source]
create_predicted_observed_landmarks_markers(frame_id: str, namespace: str, time, predicted_observed_landmarks: List[ConePosition]) List[Marker][source]
create_slam_landmark_entries(topic: str, landmarks: ConeListWithCovariance, t)[source]
create_slam_landmark_markers(pbar: tqdm)[source]
create_slam_map_alignment_entries(topic: str, map_alignment: MapAlignment, t: Time)[source]
create_slam_map_alignment_markers(pbar: tqdm)[source]
create_slam_pose_markers(filter: str, base_color: ColorRGBA, pbar: tqdm)[source]
create_sphere_markers(coordinates: List[Tuple[float, float]], color: ColorRGBA, id: str, radius: float, frame_id: str, time) List[Marker][source]
create_tracked_landmarks_rectangle_marker(frame_id: str, namespace: str, time: float, tracked_landmarks_rectangle: List[ConePosition]) List[Marker][source]
create_trajectory_entries(topic: str, trajectory_points: TrajectoryPoints, t)[source]
create_transforms(pbar: tqdm)[source]
create_vehicle_marker(pbar: tqdm)[source]
create_vehicle_pose_entries(topic: str, vehicle_pose: VehiclePose, t)[source]
finish_recordings()[source]
generate_output_bag(pbar: tqdm)[source]
get_values_from_rosbag_df(name: str, start_time: float | None = None, end_time: float | None = None, duration: float | None = None)[source]
import_camera_calibration_matrices()[source]
import_rosbag(pbar: tqdm)[source]
static interpolate_multidimensional(x: ndarray[Any, dtype[ScalarType]], xp: ndarray[Any, dtype[ScalarType]], fp: ndarray[Any, dtype[ScalarType]]) ndarray[Any, dtype[ScalarType]][source]

Interpolates a multidimensional function at the given points.

Parameters:
  • x (npt.NDArray) – Points at which the function should be interpolated, shape: (m,) with m equal to the number of points to interpolate.

  • xp (npt.NDArray) – X values of the function, shape: (n,) with n equal to the number of support points of the function.

  • fp (npt.NDArray) – Y values of the function, shape: (n, k) with n equal to the number of support points and k equal to the number of dimensions of the function.

Returns:

Interpolated values at the given points, shape: (m, k) with m equal to the number of interpolated points and k equal to the number of dimensions of the function.

Return type:

npt.NDArray
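
One common way to implement this is a per-dimension call to numpy's one-dimensional interp, sketched below (the actual method may differ):

   import numpy as np

   def interpolate_multidimensional_sketch(x: np.ndarray, xp: np.ndarray, fp: np.ndarray) -> np.ndarray:
       # Interpolate each of the k dimensions independently, then stack the
       # results column-wise into shape (m, k).
       return np.stack([np.interp(x, xp, fp[:, k]) for k in range(fp.shape[1])], axis=1)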

plot_centerpoints_on_visualization_image(t: Time, image: ndarray[Any, dtype[ScalarType]], vehicle_pose: ndarray[Any, dtype[ScalarType]], camera: str)[source]
plot_control_informations_on_visualization_image(t: Time, image: ndarray[Any, dtype[ScalarType]], vehicle_pose: ndarray[Any, dtype[ScalarType]], camera)[source]
plot_coordinates(image: ndarray[Any, dtype[ScalarType]], global_coordinates_list: ndarray[Any, dtype[ScalarType]], vehicle_pose: ndarray[Any, dtype[ScalarType]], camera: str, colors: Tuple[int], thickness: int = 3)[source]
plot_detection_image(bounding_boxes: BoundingBoxes, image: ndarray[Any, dtype[ScalarType]]) None[source]

Visualize detected bounding boxes and the highest point of each cone on an image, along with their associated information.

Parameters:
  • bounding_boxes (BoundingBoxes) – An object containing a list of BoundingBox objects, each of which has attributes defining the coordinates of the bounding box (xmin, ymin, xmax, ymax), probability of detection, and cone top coordinates (x_cone_top, y_cone_top).

  • image (npt.NDArray) – A NumPy array representing the image to be annotated, with shape (height, width, num_channels).
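
A sketch of this kind of annotation with OpenCV; the box attribute names follow the parameter description above, while the list attribute bounding_boxes, the colors, and the font settings are assumptions:

   import cv2
   import numpy as np

   def draw_detections_sketch(bounding_boxes, image: np.ndarray) -> None:
       for box in bounding_boxes.bounding_boxes:  # assumed list attribute
           # Rectangle around the detection.
           cv2.rectangle(image, (box.xmin, box.ymin), (box.xmax, box.ymax), (0, 255, 0), 2)
           # Highest point of the cone.
           cv2.circle(image, (box.x_cone_top, box.y_cone_top), 4, (0, 0, 255), -1)
           # Detection probability above the box.
           cv2.putText(image, f"{box.probability:.2f}", (box.xmin, box.ymin - 5),
                       cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)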

plot_future_driven_path_on_visualization_image(t: Time, image: ndarray[Any, dtype[ScalarType]], vehicle_pose: ndarray[Any, dtype[ScalarType]], camera)[source]
plot_landmark_informations_on_visualization_image(t: Time, image: ndarray[Any, dtype[ScalarType]], vehicle_pose: ndarray[Any, dtype[ScalarType]], camera)[source]
plot_motion_planning_informations_on_visualization_image(t: Time, image: ndarray[Any, dtype[ScalarType]], vehicle_pose: ndarray[Any, dtype[ScalarType]], camera)[source]
prepare_recordings()[source]
project_world_coordinates_to_image_pixels(coordinates: ndarray[Any, dtype[ScalarType]], camera: str)[source]
run()[source]
save_rosbag()[source]
transform_global_to_local_coordinates_by_vehicle_pose(global_coordinates: ndarray[Any, dtype[ScalarType]], vehicle_pose: ndarray[Any, dtype[ScalarType]]) ndarray[Any, dtype[ScalarType]][source]