
Object Detection Node

Build and run the object detection node, then verify that detections are being published.
1. Build the `obj_detect` Package

Build the obj_detect package in your ROS 2 workspace so you can run the object detection node.

Before running the object detection node, you need to compile its package. This step walks you through building the obj_detect package and sourcing the workspace so ROS 2 can find the node.

bash
cd ~/repos/common_platform/common_platform_ws
colcon build --packages-select obj_detect --symlink-install
source install/setup.bash
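
As a quick sanity check, you can confirm that the freshly built package is visible to ROS 2 and see which executables it provides. These are standard ROS 2 CLI commands and should work once the workspace has been sourced:

bash
# Confirm the package is visible after sourcing the workspace
ros2 pkg list | grep obj_detect

# List the executables the package provides
ros2 pkg executables obj_detect
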
2. Launch the Object Detection Node

Use the provided launch file to start the object detection node with default parameters. This node runs a YOLOv11n model on your Hailo hardware and publishes detections.

With the obj_detect package built and your workspace sourced, you can start the object detection node using its launch file. This loads the model located at /home/rcr/repos/common_platform/models/yolov11n_2cls.hef, runs inference on images from your camera topic, and publishes detection results on the /detect topic. The default parameters use a confidence threshold of 0.5 and an input size of 640×640.

bash
ros2 launch obj_detect object_detector.launch.py
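
Once the node is up, you can inspect the parameters it is actually running with. The commands below use the standard ROS 2 parameter CLI; the node name /rcr0nn/object_detector matches the one used in the verification step, and the parameter names (for example model_path) are assumed to match the launch arguments shown in the next step:

bash
# List all parameters exposed by the running node
ros2 param list /rcr0nn/object_detector

# Read back the model path (parameter name assumed to match the launch argument)
ros2 param get /rcr0nn/object_detector model_path
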
3. Launch the Object Detection Node with Custom Parameters

Override the default parameters when launching the object detection node to use a different model file or adjust inference settings.

The default launch file loads a pre-trained model and uses a confidence threshold of 0.5. If you have trained your own model or wish to change the confidence threshold, input dimensions, or log directory, you can override the parameters at launch time. The example below shows how to specify a custom HEF model, set a higher confidence threshold, and choose a custom log directory. Adjust these values to match your own files and preferences.

bash
ros2 launch obj_detect object_detector.launch.py \
  model_path:=/path/to/your/yolov11n_2cls_finetune.hef \
  confidence_threshold:=0.6 \
  input_width:=640 \
  input_height:=640 \
  hailort_log_path:=/path/to/logs
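
After launching with overrides, it is worth confirming that the new values were actually applied. One way to do this, assuming a recent ROS 2 distribution where ros2 param dump prints to standard output and the same /rcr0nn/object_detector node name used in the verification step, is to read individual values back or capture the full parameter set to a YAML file:

bash
# Confirm the override took effect (parameter name assumed to match the launch argument)
ros2 param get /rcr0nn/object_detector confidence_threshold

# Save the node's full parameter set for later reference
ros2 param dump /rcr0nn/object_detector > object_detector_params.yaml
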
4. Run the Object Detection Node Directly

Execute the object detection node directly using ros2 run when you want to start the node without a launch file. You will need to configure parameters separately.

You can run the object detection node without the launch file by using the ros2 run command. On its own, this starts the node with its declared default parameter values. If you need to override parameters (such as the model path or confidence threshold), it is usually simpler to use the launch file with custom arguments, although you can also pass parameters on the command line with --ros-args, as shown below. Running the node directly can be useful for debugging or when integrating the node into another launch system.

bash
ros2 run obj_detect object_detector
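
If you do want to override parameters without the launch file, the standard ROS 2 command line lets you pass them to ros2 run with --ros-args. The sketch below assumes the node declares parameters with the same names as the launch arguments in step 3, and that you want it in the same /rcr0nn namespace the launch file uses; adjust the values to match your setup:

bash
# Run the node directly, remapping its namespace and overriding parameters
ros2 run obj_detect object_detector --ros-args \
  -r __ns:=/rcr0nn \
  -p model_path:=/path/to/your/yolov11n_2cls_finetune.hef \
  -p confidence_threshold:=0.6
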
5. Verify Object Detection Results

Check that the object detection node is running and publishing detection messages, and ensure that the camera input is available.

After launching the object detection node, you should confirm that it is active and that detection messages are being published. In the examples below the node runs under the /rcr0nn namespace, so it appears as /rcr0nn/object_detector and publishes on /rcr0nn/detect. Use the following ROS 2 commands to inspect the node and topics:

bash
# List running nodes
ros2 node list

# Display information about the object detection node
ros2 node info /rcr0nn/object_detector

To monitor detection messages and topic statistics:

bash
# View detection messages
ros2 topic echo /rcr0nn/detect

# Check topic information
ros2 topic info /rcr0nn/detect
ros2 topic hz /rcr0nn/detect

# Verify the camera is publishing
ros2 topic list | grep camera
ros2 topic hz /rcr0nn/camera/image_raw
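
To see exactly what the detection messages contain, ask ROS 2 for the topic's message type and its field definition. These are standard introspection commands and make no assumptions about which message package the node uses:

bash
# Report the message type published on the detection topic
ros2 topic type /rcr0nn/detect

# Show the full field definition for that type
ros2 interface show "$(ros2 topic type /rcr0nn/detect)"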
