2.4.3.4. cli package¶
2.4.3.4.1. Submodules¶
2.4.3.4.2. cli.analyzer module¶
- class cli.analyzer.RosbagAnalyzer(input_rosbag: str, time_delay: bool, actuator_delay: bool, imu_tilt: bool, parameter_dump: bool, calibration_matrices: bool, gggv_dump: bool, vehicle_name: str, start: float, duration: float | None, end: float | None)[source]¶
Bases: object
2.4.3.4.3. cli.drone_visualization module¶
- class cli.drone_visualization.DroneVisualization(input_rosbag: str, input_video: str, recalculate_transforms: bool, no_new_transforms: bool, track_height: float, track_width: float, recover_offset: bool, recover_initial_features: bool, killer: GracefulKiller, start: int, duration: int | None, end: int | None, output: str, save_recovery: bool, recover_initial_position: bool, recover_initial_vehicle_mask: bool, visualize_only_debug_information: bool)[source]¶
Bases: object
- static chain_affine_transforms(first_transform: ndarray[Any, dtype[ScalarType]], second_transform: ndarray[Any, dtype[ScalarType]])[source]¶
- static chain_perspective_transforms(first_transform: ndarray[Any, dtype[ScalarType]], second_transform: ndarray[Any, dtype[ScalarType]])[source]¶
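Both helpers presumably compose two transforms into one by matrix multiplication. A minimal sketch of that composition, assuming 3x3 perspective matrices and 2x3 affine matrices in the OpenCV convention; the actual implementation may differ:

   import numpy as np

   def chain_perspective_transforms_sketch(first_transform, second_transform):
       # Applying first_transform and then second_transform to homogeneous
       # points is equivalent to applying this single 3x3 matrix product.
       return second_transform @ first_transform

   def chain_affine_transforms_sketch(first_transform, second_transform):
       # 2x3 affine matrices are promoted to 3x3 before chaining,
       # then cropped back to 2x3.
       def to_3x3(matrix):
           return np.vstack([matrix, [0.0, 0.0, 1.0]])
       return (to_3x3(second_transform) @ to_3x3(first_transform))[:2, :]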
- draw_rounded_rectangle(image: ndarray[Any, dtype[ScalarType]], origin: Tuple[int, int], size: Tuple[int, int], radius: int, color: Tuple[int, int, int], alpha: float)[source]¶
- get_values_from_rosbag_df(name: str, start_time: float | None = None, end_time: float | None = None)[source]¶
- static rotate_and_move(xy: ndarray[Any, dtype[ScalarType]], rotation: float = 0.0, offset_before_rotation: ndarray[Any, dtype[ScalarType]] = array([0, 0]), offset_after_rotation: ndarray[Any, dtype[ScalarType]] = array([0, 0])) ndarray[Any, dtype[ScalarType]] [source]¶
Move and rotate the given coordinates (see the sketch after this entry).
First translates the coordinates, then rotates them around the origin using a NumPy-built rotation matrix, and then translates them again.
- Parameters:
xy (npt.NDArray) – Coordinates to move, rotate and move again.
rotation (float, optional) – Rotation angle to rotate the coordinates around the origin, by default 0.
offset_before_rotation (npt.NDArray, optional) – Offset to move the coordinates before rotating them, by default np.array([0, 0]).
offset_after_rotation (npt.NDArray, optional) – Offset to move the coordinates after rotating them, by default np.array([0, 0]).
- Returns:
Moved, rotated and moved coordinates.
- Return type:
npt.NDArray
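A minimal sketch of the move-rotate-move operation described above, using an explicitly built rotation matrix; the internals of rotate_and_move itself may differ:

   import numpy as np

   def rotate_and_move_sketch(xy, rotation=0.0,
                              offset_before_rotation=np.array([0.0, 0.0]),
                              offset_after_rotation=np.array([0.0, 0.0])):
       # 2x2 rotation matrix for the given angle in radians.
       rotation_matrix = np.array([[np.cos(rotation), -np.sin(rotation)],
                                   [np.sin(rotation),  np.cos(rotation)]])
       shifted = xy + offset_before_rotation      # first translation
       rotated = shifted @ rotation_matrix.T      # rotation around the origin
       return rotated + offset_after_rotation     # second translation

   # Rotating the point (1, 0) by 90 degrees yields approximately (0, 1).
   print(rotate_and_move_sketch(np.array([[1.0, 0.0]]), rotation=np.pi / 2))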
- transform_map_coordinated_to_world_coordinates(map_coordinates: ndarray[Any, dtype[ScalarType]], frame_i: int)[source]¶
- visualize_acceleration(image: ndarray[Any, dtype[ScalarType]], frame_t: int, origin: Tuple[int, int], radius: int)[source]¶
- visualize_brake(image: ndarray[Any, dtype[ScalarType]], frame_t: int, origin: Tuple[int, int], height: int)[source]¶
- visualize_centerpoints(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int, add_to_legend: bool = False)[source]¶
- visualize_driven_path(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int, add_to_legend: bool = False)[source]¶
- visualize_fov(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int, fov_angle: float, fov_distance: float, color: Tuple[int, int, int])[source]¶
- visualize_fov_gate(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int, add_to_legend: bool = False)[source]¶
- visualize_gps_measurement(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int, uncertainty: bool, add_to_legend: bool = False)[source]¶
- visualize_heading_covariance(image: ndarray[Any, dtype[ScalarType]], frame_i: int, color: Tuple[int], pose: ndarray[Any, dtype[ScalarType]], uncertainty: float)[source]¶
- visualize_landmarks(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int, uncertainty: bool = False)[source]¶
- visualize_planned_path(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int, add_to_legend: bool = False)[source]¶
- visualize_position_covariance(image: ndarray[Any, dtype[ScalarType]], frame_i: int, color: Tuple[int], uncertainty: ndarray[Any, dtype[ScalarType]], center: ndarray[Any, dtype[ScalarType]])[source]¶
- visualize_predicted_path(image: ndarray[Any, dtype[ScalarType]], frame_i: int, frame_t: int, add_to_legend: bool = False)[source]¶
- visualize_speed(image: ndarray[Any, dtype[ScalarType]], frame_t: int, origin: Tuple[int, int])[source]¶
- visualize_steering_wheel_angle(image: ndarray[Any, dtype[ScalarType]], frame_t: int, origin: Tuple[int, int], height: int)[source]¶
- visualize_torque(image: ndarray[Any, dtype[ScalarType]], frame_t: int, origin: Tuple[int, int], height: int)[source]¶
- class cli.drone_visualization.FeatureSelector(image: ndarray[Any, dtype[ScalarType]], killer: GracefulKiller, track_width: float, track_height: float, mode: FeatureSelectorMode)[source]¶
Bases: object
- class cli.drone_visualization.FeatureSelectorMode(value)[source]¶
Bases: Enum
An enumeration.
- BOUNDARIES = 1¶
- POSE_DEFINITION = 2¶
- VEHICLE_MASK = 3¶
- class cli.drone_visualization.OffsetFinder(video_path: str, input_rosbag_path: str, killer: GracefulKiller)[source]¶
Bases: object
- cli.drone_visualization.generate_progressbar(name: str, data_length: int, variable_widgets: List[Variable] = []) ProgressBar [source]¶
Generates a progressbar object for the given number of data points, showing more information than the standard one (see the sketch below).
- Parameters:
data_length (int) – Number of data points to process.
- Returns:
Progressbar object for the given number of data points, showing more information than the standard one.
- Return type:
progressbar.ProgressBar
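The return type suggests the progressbar2 package. A minimal sketch of building a comparable bar; the exact widget layout used by generate_progressbar is an assumption:

   import progressbar

   def make_progressbar(name, data_length, variable_widgets=()):
       # Assumed layout: name prefix, percentage, bar, ETA, plus any
       # caller-supplied dynamic Variable widgets.
       widgets = [f"{name}: ", progressbar.Percentage(), " ",
                  progressbar.Bar(), " ", progressbar.ETA(), " ",
                  *variable_widgets]
       return progressbar.ProgressBar(max_value=data_length, widgets=widgets)

   bar = make_progressbar("frames", 100, (progressbar.Variable("fps"),))
   bar.start()
   for i in range(100):
       bar.update(i + 1, fps=30.0)  # dynamic values go in as keyword arguments
   bar.finish()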
2.4.3.4.4. cli.merge_bags module¶
2.4.3.4.5. cli.migrator module¶
- class cli.migrator.RosbagMigrator(input_rosbag: str, output_rosbag_suffix: str, uncompress_images: bool, compress_images: str | None, start: float, duration: float | None, end: float | None, delete_pipeline: Tuple[str] | None, delete_lidar_points: bool, delete_images: bool, delete_transforms: bool, delete_visualization: bool, migration_strategy: Tuple[str] | None, can_time_offset: float | None, gps: bool, fix_clock: bool, rename_vehicle: str | None, fixing_strategy: Tuple[str] | None, header_time_delta_topics: Tuple[str] | None, header_time_deltas: Tuple[int] | None)[source]¶
Bases: object
2.4.3.4.6. cli.ouster_telemetry module¶
- class cli.ouster_telemetry.OusterTelemetry(input_current_ma: int, input_voltage_mv: int, internal_temperature_deg_c: int)[source]¶
Bases: object
Represents the telemetry data of the Ouster sensor.
- class cli.ouster_telemetry.OusterTelemetryRecorder(host: str, port: int, interval: int, output: str)[source]¶
Bases: object
Records the telemetry data of the Ouster sensor. Use the method record_telemetry() to start recording; a usage sketch follows at the end of this entry.
- host¶
IP address or hostname of the Ouster sensor
- port¶
Port of the Ouster sensor
- interval¶
Interval in seconds to record the telemetry data
- output¶
Path to the output file
- get_current_sensor_telemetry() OusterTelemetry [source]¶
Gets the telemetry data from the sensor.
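A usage sketch for the recorder, assuming the cli package is importable and a sensor is reachable; the hostname, port, and interval values are placeholders, not defaults:

   from cli.ouster_telemetry import OusterTelemetryRecorder

   recorder = OusterTelemetryRecorder(host="os-122334455667.local",  # placeholder hostname
                                      port=7501,                     # placeholder port
                                      interval=1,                    # poll once per second
                                      output="ouster_telemetry.csv")

   # One-off read of the current values instead of continuous recording.
   telemetry = recorder.get_current_sensor_telemetry()
   print(telemetry.input_voltage_mv, telemetry.input_current_ma,
         telemetry.internal_temperature_deg_c)

   # Continuous recording to the output file:
   # recorder.record_telemetry()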
2.4.3.4.7. cli.track_creator module¶
- class cli.track_creator.FeatureSelector(image: ndarray[Any, dtype[ScalarType]], mode: FeatureSelectorMode, killer: GracefulKiller)[source]¶
Bases: object
- class cli.track_creator.FeatureSelectorMode(value)[source]¶
Bases: Enum
An enumeration.
- ACCELERATION_CENTERPOINTS = 5¶
- BOUNDARIES = 1¶
- CONES = 3¶
- SKIDPAD_CENTERPOINTS = 4¶
- VEHICLE_POSE = 2¶
- class cli.track_creator.ImageSelector(input_video_path: str, killer: GracefulKiller)[source]¶
Bases: object
- class cli.track_creator.TrackCreator(input_file_path: str, killer: GracefulKiller, plot: bool, test_day: str, track_layout: str, track_height: float, track_width: float, improve_world_cones: bool, centerpoints_width: float, centerpoints: bool, recover_centerpoints: bool, improve_centerpoints: bool, recover_map_origin: bool, recover_world_cones: bool, recover_track_boundaries: bool, manual_track: bool, mission: str)[source]¶
Bases: object
- static chain_perspective_transforms(first_transform: ndarray[Any, dtype[ScalarType]], second_transform: ndarray[Any, dtype[ScalarType]])[source]¶
- static generate_and_move_base_circle(radius: float, points_n: int, angle: float, inverse: bool, offset: ndarray[Any, dtype[ScalarType]]) ndarray[Any, dtype[ScalarType]] [source]¶
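A speculative sketch of what the signature suggests: sample points_n points on a circle of the given radius, start at angle, optionally reverse the direction of travel when inverse is set, and translate everything by offset. The parameter semantics here are assumptions:

   import numpy as np

   def generate_and_move_base_circle_sketch(radius, points_n, angle,
                                            inverse, offset):
       # Evenly spaced angles around the circle, shifted by the start angle.
       angles = np.linspace(0.0, 2.0 * np.pi, points_n, endpoint=False) + angle
       if inverse:
           angles = angles[::-1]  # assumed meaning: reverse direction of travel
       circle = radius * np.column_stack([np.cos(angles), np.sin(angles)])
       return circle + offset  # translate the circle to its final position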
2.4.3.4.8. cli.visualizer module¶
- class cli.visualizer.Visualizer(input_rosbag: str, output_rosbag_suffix: str, use_header_time: bool, vehicle: str, test_day: str, track_layout: str, transforms: bool, n_stddev: float, recording: Tuple[str] | None, generate_detection_image: bool, gps: bool, start: float, duration: float | None, end: float | None, calibration_matrices_subdirectory: str, image_visualization: bool, local_motion_planning_color_scale: str)[source]¶
Bases: object
- add_transform_messages(transformation_handler: TransformationHandler, time: float)[source]¶
- camera_offset = 0.59¶
- static compute_eigenvalues_and_angle_of_covariance(cov: ndarray[Any, dtype[ScalarType]], n_std: float = 1.0) Tuple[ndarray[Any, dtype[ScalarType]], ndarray[Any, dtype[ScalarType]], float] [source]¶
Compute the eigenvalues and the angle of the covariance matrix.
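A minimal sketch of this computation for a 2x2 covariance matrix, assuming the angle refers to the major axis and the second array holds the n_std-scaled semi-axis lengths; the method's exact return convention is not documented here:

   import numpy as np

   def covariance_ellipse_sketch(cov, n_std=1.0):
       eigenvalues, eigenvectors = np.linalg.eigh(cov)  # ascending order
       major_axis = eigenvectors[:, np.argmax(eigenvalues)]
       angle = np.arctan2(major_axis[1], major_axis[0])  # orientation in radians
       axis_lengths = n_std * np.sqrt(eigenvalues)       # semi-axes at n_std
       return eigenvalues, axis_lengths, angle

   print(covariance_ellipse_sketch(np.array([[4.0, 1.0], [1.0, 2.0]]), n_std=2.0))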
- static cone_list_to_array(cone_list: ConeList | List[ConePosition]) List[Tuple[float, float, int, float, bool]] [source]¶
- create_bounding_boxes_entries(topic: str, bounding_boxes: BoundingBoxes, t: Time) List[Tuple[float, str, TrajectoryPathSlices]] [source]¶
- create_centerpoints_strategy_markers(centerpoints: List[Tuple[float, float, float, float, int]], time) List[Marker] [source]¶
- create_centerpoints_width_marker(centerpoints: Tuple[float, float, float, float], time, namespace: str, color: ColorRGBA) List[Marker] [source]¶
- create_cone_annotation_marker(cone: Tuple[float, float, int, float], annotation: str, frame_id: str, namespace: str, cone_id: int, time) Marker [source]¶
- create_cone_marker(cone: Tuple[float, float, int, float, bool], frame_id: str, namespace: str, cone_id: int, time, stretch: bool = False) Marker [source]¶
- create_detection_image(outbag: Bag, t: Time, topic: str, image_msg: Image | CompressedImage)[source]¶
- create_gps_heading_entries(topic: str, gps: HEADING2, t: Time) List[Tuple[float, str, HEADING2]] [source]¶
- create_heading_uncertainity_marker(vehicle_pose: Tuple[float, float, float, float], time, heading_uncertainty: float, namespace: str, frame_id: str, marker_id: int, color: ColorRGBA, length: float) Marker [source]¶
- create_image_visualization(outbag: Bag, t: Time, topic: str, image_msg: Image | CompressedImage)[source]¶
- create_landmark_compatibility_markers(frame_id: str, namespace: str, time, color: ColorRGBA, individual_compatibility: ndarray[Any, dtype[ScalarType]], observed_landmarks: List[ConePosition], observable_landmarks: List[ConePosition]) List[Marker] [source]¶
- create_landmark_mapping_markers(frame_id: str, namespace: str, observed_landmarks: List[ConePosition], observable_landmarks: List[ConePosition], mapping, time) List[Marker] [source]¶
- create_landmark_position_and_uncertainty_markers(landmarks: List[ConePositionWithCovariance], base_namespace: str, frame_id: str, time) List[Marker] [source]¶
- create_local_motion_planning_path_slices_entries(topic: str, path_slices: TrajectoryPathSlices, t: Time) List[Tuple[float, str, TrajectoryPathSlices]] [source]¶
- create_map_alignment_arrow(frame_id: str, namespace: str, time: float, translation: Tuple[float, float], rotation_matrix: Tuple[float, float, float, float]) List[Marker] [source]¶
- create_map_alignment_pose(frame_id: str, namespace: str, time: Time, translation: Tuple[float, float], rotation_matrix: Tuple[float, float, float, float]) PoseStamped [source]¶
- create_mock_markers(length: int, max_length: int, frame_id: str, namespace: str, time) List[Marker] [source]¶
- create_motion_planning_path_slices_markers(path_slices: TrajectoryPathSlices, time: float) List[Marker] [source]¶
- create_observable_landmarks_markers(frame_id: str, namespace: str, time, observable_landmarks: List[ConePosition]) List[Marker] [source]¶
- create_observed_landmarks_markers(frame_id: str, namespace: str, time, observed_landmarks: List[ConePosition], weights: List[float] | None = None, mapping: List[int] | None = None) List[Marker] [source]¶
- create_path_marker(path: List[Tuple[float, float]], frame_id: str, namespace: str, cone_id: int, time, color: ColorRGBA, previous_path: List[Point] | None = None) Marker [source]¶
- create_pose_arrow_marker(vehicle_pose: Tuple[float, float, float], time, color: ColorRGBA, namespace: str, marker_id: int, frame_id: str, length: float = 1.0) Marker [source]¶
- create_position_uncertainty_marker(position: Tuple[float, float], covariance: ndarray[Any, dtype[ScalarType]], time, namespace: str, frame_id: str, marker_id: int, color: ColorRGBA, n_std: float = 1) Marker [source]¶
- create_predicted_measurements_entries(topic: str, predicted_measurements: PredictedMeasurements, t)[source]¶
- create_predicted_observed_landmarks_markers(frame_id: str, namespace: str, time, predicted_observed_landmarks: List[ConePosition]) List[Marker] [source]¶
- create_sphere_markers(coordinates: List[Tuple[float, float]], color: ColorRGBA, id: str, radius: float, frame_id: str, time) List[Marker] [source]¶
- create_tracked_landmarks_rectangle_marker(frame_id: str, namespace: str, time: float, tracked_landmarks_rectangle: List[ConePosition]) List[Marker] [source]¶
- get_values_from_rosbag_df(name: str, start_time: float | None = None, end_time: float | None = None, duration: float | None = None)[source]¶
- static interpolate_multidimensional(x: ndarray[Any, dtype[ScalarType]], xp: ndarray[Any, dtype[ScalarType]], fp: ndarray[Any, dtype[ScalarType]]) ndarray[Any, dtype[ScalarType]] [source]¶
Interpolates a multidimensional function, given as a 2D array of values, at the given points (see the sketch below).
- Parameters:
x (npt.NDArray) – Points at which the function should be interpolated, shape (m,), where m is the number of points to interpolate.
xp (npt.NDArray) – X values of the function, shape (n,), where n is the number of support points of the function.
fp (npt.NDArray) – Y values of the function, shape (n, k), where n is the number of support points and k the number of dimensions of the function.
- Returns:
Interpolated values at the given points, shape (m, k), where m is the number of interpolated points and k the number of dimensions of the function.
- Return type:
npt.NDArray
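A minimal sketch of column-wise linear interpolation matching the shapes described above; the method itself may use a different interpolation backend:

   import numpy as np

   def interpolate_multidimensional_sketch(x, xp, fp):
       # Interpolate each of the k output dimensions independently.
       return np.stack([np.interp(x, xp, fp[:, k]) for k in range(fp.shape[1])],
                       axis=1)

   xp = np.array([0.0, 1.0, 2.0])    # support points, shape (n,)
   fp = np.array([[0.0, 0.0],        # function values, shape (n, k)
                  [1.0, 10.0],
                  [2.0, 20.0]])
   # Interpolating at 0.5 and 1.5 yields approximately [[0.5, 5.0], [1.5, 15.0]].
   print(interpolate_multidimensional_sketch(np.array([0.5, 1.5]), xp, fp))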
- plot_centerpoints_on_visualization_image(t: Time, image: ndarray[Any, dtype[ScalarType]], vehicle_pose: ndarray[Any, dtype[ScalarType]], camera: str)[source]¶
- plot_control_informations_on_visualization_image(t: Time, image: ndarray[Any, dtype[ScalarType]], vehicle_pose: ndarray[Any, dtype[ScalarType]], camera)[source]¶
- plot_coordinates(image: ndarray[Any, dtype[ScalarType]], global_coordinates_list: ndarray[Any, dtype[ScalarType]], vehicle_pose: ndarray[Any, dtype[ScalarType]], camera: str, colors: Tuple[int], thickness: int = 3)[source]¶
- plot_detection_image(bounding_boxes: BoundingBoxes, image: ndarray[Any, dtype[ScalarType]]) None [source]¶
Visualize the detected bounding boxes and the highest point of each cone on an image, along with their associated information (see the sketch below).
- Parameters:
bounding_boxes (BoundingBoxes) – An object containing a list of BoundingBox objects, each of which has attributes defining the coordinates of the bounding box (xmin, ymin, xmax, ymax), probability of detection, and cone top coordinates (x_cone_top, y_cone_top).
image (npt.NDArray) – A NumPy array representing the image to be annotated, expected to have shape (height, width, num_channels).
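A minimal sketch of the annotation step with OpenCV, using the attribute names from the parameter description above; the list attribute on BoundingBoxes and the colors are assumptions:

   import cv2

   def draw_detections_sketch(image, bounding_boxes):
       # bounding_boxes.bounding_boxes is an assumed name for the list of
       # BoundingBox objects described above.
       for box in bounding_boxes.bounding_boxes:
           top_left = (int(box.xmin), int(box.ymin))
           bottom_right = (int(box.xmax), int(box.ymax))
           cv2.rectangle(image, top_left, bottom_right, (0, 255, 0), 2)
           # Mark the detected cone top and print the detection probability.
           cv2.circle(image, (int(box.x_cone_top), int(box.y_cone_top)),
                      3, (0, 0, 255), -1)
           cv2.putText(image, f"{box.probability:.2f}",
                       (top_left[0], max(top_left[1] - 5, 0)),
                       cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)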
- plot_future_driven_path_on_visualization_image(t: Time, image: ndarray[Any, dtype[ScalarType]], vehicle_pose: ndarray[Any, dtype[ScalarType]], camera)[source]¶
- plot_landmark_informations_on_visualization_image(t: Time, image: ndarray[Any, dtype[ScalarType]], vehicle_pose: ndarray[Any, dtype[ScalarType]], camera)[source]¶
- plot_motion_planning_informations_on_visualization_image(t: Time, image: ndarray[Any, dtype[ScalarType]], vehicle_pose: ndarray[Any, dtype[ScalarType]], camera)[source]¶