Sensing
Sensing is the first step in a robotic system’s pipeline. It involves acquiring raw data from various sensors, which the robot will later process to understand its environment.
Robots typically use multiple types of sensors to perceive the world:
1. LiDAR (Light Detection and Ranging)
- Purpose: Measures distances to surrounding objects using laser beams.
- Applications:
  - 2D LiDAR: indoor AMR navigation, obstacle avoidance
  - 3D LiDAR: high-precision mapping, autonomous-vehicle perception
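A raw scan is just a list of ranges taken at evenly spaced angles; turning it into 2D points is a few lines of trigonometry. Below is a minimal sketch whose field names (`ranges`, `angle_min`, `angle_increment`) mirror ROS 2's `sensor_msgs/LaserScan`, though the code itself is plain Python with no ROS dependency:

```python
import math

def scan_to_points(ranges, angle_min, angle_increment, range_max):
    """Convert a polar LiDAR scan into 2D Cartesian points,
    discarding returns beyond the sensor's maximum range."""
    points = []
    for i, r in enumerate(ranges):
        if r > range_max:  # invalid or out-of-range return
            continue
        theta = angle_min + i * angle_increment
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Four beams at 0, 90, 180, 270 degrees, each hitting a wall 2 m away
pts = scan_to_points([2.0, 2.0, 2.0, 2.0],
                     angle_min=0.0,
                     angle_increment=math.pi / 2,
                     range_max=10.0)
```

This polar-to-Cartesian step is typically the very first transformation applied before mapping or obstacle avoidance.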
2. Camera (RGB / RGB-D / Stereo)
- Purpose: Captures visual information for object recognition, SLAM, and environment understanding.
- Applications:
  - RGB camera: object detection, marker tracking (e.g., AprilTag)
  - RGB-D / Stereo: depth perception, 3D mapping
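The depth perception mentioned above comes from disparity: for a rectified stereo pair, depth is Z = f · B / d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity in pixels. A minimal sketch with illustrative (made-up) camera parameters:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# f = 700 px, baseline = 0.12 m, disparity = 21 px  ->  Z = 4.0 m
z = stereo_depth(700.0, 0.12, 21.0)
```

Note the inverse relationship: small disparities correspond to far-away points, which is why stereo depth accuracy degrades with distance.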
3. IMU (Inertial Measurement Unit)
- Purpose: Measures acceleration, angular velocity, and sometimes orientation (roll, pitch, yaw).
- Applications:
  - Dead reckoning for robot localization
  - Stabilization of drones and robotic arms
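Dead reckoning, as listed above, means integrating IMU readings over time to estimate pose. A simplified planar sketch using Euler integration (the `(forward_accel, yaw_rate)` sample format is an assumption for illustration, not a ROS message type):

```python
import math

def dead_reckon(imu_samples, dt):
    """Planar dead reckoning from IMU samples.

    Each sample is (forward_accel, yaw_rate). Integrate yaw rate into
    heading, acceleration into speed, and speed into position (x, y).
    Integration error grows without bound, which is why IMU-only
    localization drifts and is usually fused with other sensors.
    """
    x = y = speed = yaw = 0.0
    for accel, yaw_rate in imu_samples:
        yaw += yaw_rate * dt
        speed += accel * dt
        x += speed * math.cos(yaw) * dt
        y += speed * math.sin(yaw) * dt
    return x, y, yaw

# 1 s of constant 1 m/s^2 forward acceleration, no turning
x, y, yaw = dead_reckon([(1.0, 0.0)] * 10, dt=0.1)  # x = 0.55 m
```

Even in this noise-free example the discretization already introduces error (the exact answer is 0.5 m); with real sensor noise and bias, the drift is far worse.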
4. Sensor Fusion
- Combining multiple sensors improves perception accuracy and robustness.
- Example: LiDAR + IMU + Camera → High-precision SLAM
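A full LiDAR + IMU + camera SLAM stack is beyond a short example, but the core idea of fusion can be shown with a complementary filter, a common way to combine a gyroscope (accurate short-term, but drifts) with an accelerometer (noisy, but drift-free). The 0.98 blend factor below is illustrative:

```python
def complementary_filter(pitch_prev, gyro_rate, accel_pitch, dt, alpha=0.98):
    """One step of a complementary filter for a pitch estimate:
    propagate with the gyro, then pull gently toward the
    accelerometer's absolute (drift-free) reading."""
    gyro_pitch = pitch_prev + gyro_rate * dt
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch

pitch = 0.0
# Stationary sensor: gyro reads 0 rad/s, accelerometer says pitch = 0.1 rad.
for _ in range(200):
    pitch = complementary_filter(pitch, gyro_rate=0.0,
                                 accel_pitch=0.1, dt=0.01)
# pitch converges toward the accelerometer's 0.1 rad estimate
```

Production systems use the same trust-each-sensor-where-it-is-strong principle, typically via Kalman filters or factor-graph optimization rather than this fixed blend.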
5. Practical Implementation
- In ROS 2, sensor data is usually published on topics, e.g.:
  - /scan (LiDAR)
  - /camera/image_raw (Camera)
  - /imu/data (IMU)
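The topic mechanism itself can be sketched in a few lines of plain Python: publishers and subscribers never reference each other directly, only a named topic. This is a simplified stand-in for ROS 2's middleware, not rclpy's actual API:

```python
from collections import defaultdict

class TopicBus:
    """Toy publish/subscribe bus illustrating topic-based messaging."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, msg):
        # Deliver the message to every callback registered on this topic.
        for cb in self._subs[topic]:
            cb(msg)

bus = TopicBus()
received = []
bus.subscribe("/scan", received.append)        # LiDAR listener
bus.publish("/scan", {"ranges": [2.0, 2.1]})   # fake scan message
bus.publish("/imu/data", {"yaw_rate": 0.0})    # no subscriber: dropped
```

This decoupling is what lets a perception node consume /scan without knowing, or caring, which driver produced it.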
Next Step: The raw sensor data is passed to the Perception module, where it is transformed into meaningful information.