3D LiDAR Mobility

3D LiDAR Mobility enables robots to perform high-precision perception and navigation in complex environments.

By capturing full 3D spatial information, robots can better understand their surroundings, making this configuration suitable for outdoor scenarios, uneven terrain, and dynamic environments.

This section covers the system architecture and core capabilities, and links to the hands-on demos provided in the Robotic Suite.


1. System Overview

A typical 3D LiDAR-based robot includes:

  • Sensor: 3D LiDAR
  • Perception: 3D SLAM, ground segmentation, object detection
  • Planning: Advanced path planning with obstacle awareness
  • Action: Robot base control

In ROS terms, this corresponds to the topic pipeline:

  • /points_raw → /map → /plan → /cmd_vel
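The flow of data through these stages can be sketched in plain Python. This is an illustration only, with toy stand-ins for each stage; a real system would run ROS 2 nodes exchanging `sensor_msgs/PointCloud2`, map, path, and `geometry_msgs/Twist` messages on the topics above, and the function names here are hypothetical.

```python
# Toy stand-ins for the /points_raw -> /map -> /plan -> /cmd_vel pipeline.
# Not real ROS code: each stage is reduced to a plain function for illustration.

def build_map(points):
    """Perception: turn raw (x, y, z) LiDAR points into occupied grid cells."""
    # Ignore near-ground returns; keep cells containing above-ground points.
    return {(round(x), round(y)) for x, y, z in points if z > 0.1}

def plan_path(occupied, start, goal):
    """Planning: trivial straight-line plan that halts before an occupied cell."""
    path = [start]
    (x, y), (gx, gy) = start, goal
    while (x, y) != (gx, gy):
        x += (gx > x) - (gx < x)
        y += (gy > y) - (gy < y)
        if (x, y) in occupied:
            break  # obstacle ahead; a real planner would search around it
        path.append((x, y))
    return path

def path_to_cmd_vel(path):
    """Action: emit a (linear, angular) velocity command along the path."""
    if len(path) < 2:
        return (0.0, 0.0)  # nothing to follow: stop
    return (0.5, 0.0)  # constant forward speed for this sketch

points = [(1.0, 0.0, 0.0), (3.0, 3.0, 1.2)]  # one ground return, one obstacle
occupied = build_map(points)                  # /points_raw -> /map
path = plan_path(occupied, (0, 0), (5, 5))    # /map -> /plan
cmd = path_to_cmd_vel(path)                   # /plan -> /cmd_vel
```

The point of the sketch is the shape of the pipeline: each stage consumes the previous stage's output, which is exactly how the ROS topics decouple perception, planning, and control.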

2. Key Capabilities

  • 3D Mapping (3D SLAM)
    Build detailed point cloud maps of the environment

  • Accurate Localization
    Robust localization in large-scale or outdoor environments

  • Obstacle Detection (3D)
    Detect objects with height and volume information

  • Terrain Awareness
    Handle slopes, ramps, and uneven surfaces

  • Autonomous Navigation
    Navigate safely in complex and dynamic environments
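Two of the capabilities above, ground segmentation and 3D obstacle detection, can be illustrated with a minimal sketch. Real pipelines (e.g. PCL-based ones) fit a ground plane with RANSAC rather than assuming flat ground; the fixed height threshold and helper names below are simplifying assumptions for illustration.

```python
# Hedged sketch: split a point cloud into ground and obstacle points using a
# fixed height threshold (real systems fit a plane model instead), then report
# the height information that distinguishes 3D from 2D LiDAR perception.

def segment_ground(points, ground_z=0.05):
    """Split (x, y, z) points into (ground, obstacles) by height above ground_z."""
    ground = [p for p in points if p[2] <= ground_z]
    obstacles = [p for p in points if p[2] > ground_z]
    return ground, obstacles

def obstacle_summary(obstacles):
    """Summarize obstacle height and point count -- info a 2D scan cannot give."""
    if not obstacles:
        return None
    return {"height": max(p[2] for p in obstacles), "count": len(obstacles)}

cloud = [(0.0, 0.0, 0.01), (1.0, 0.0, 0.02),   # ground returns
         (1.0, 1.0, 0.9), (1.1, 1.0, 1.4)]     # returns from an obstacle
ground, obstacles = segment_ground(cloud)
info = obstacle_summary(obstacles)
```

With a full plane fit instead of a threshold, the same split also handles the slopes and ramps mentioned under Terrain Awareness.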


3. Demo & Sample Code

To help you get started, the Robotic Suite provides pre-built demo applications and sample code for 3D LiDAR mobility.

Available Samples

These samples are all packaged in containers to enable rapid evaluation and learning.

👉 Please refer to the corresponding sample code pages for:

  • Step-by-step setup instructions
  • Launch commands
  • Code structure and customization

4. Use Cases

  • Outdoor delivery robots
  • Autonomous vehicles (ADAS / self-driving)
  • Industrial inspection robots
  • Construction and mining robotics

5. Considerations

  • Higher computational cost compared to 2D LiDAR
  • Larger data size (point clouds)
  • May require GPU acceleration for optimal performance
  • Sensor cost is typically higher
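The data-size consideration is usually addressed by downsampling the cloud before heavier processing. A standard technique is the voxel-grid filter (PCL ships an optimized C++ implementation); the stdlib-only sketch below shows the idea: all points falling in the same voxel are replaced by their centroid.

```python
# Sketch of voxel-grid downsampling: bucket points into voxel-sized cubes and
# keep one centroid per occupied voxel, shrinking the cloud while preserving
# its overall shape. Illustration only; production code would use PCL or Open3D.

from collections import defaultdict

def voxel_downsample(points, voxel=0.5):
    """Replace all (x, y, z) points inside each voxel with their centroid."""
    cells = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        cells[key].append((x, y, z))
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for pts in cells.values()]

dense = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.1), (1.6, 0.1, 0.1)]
sparse = voxel_downsample(dense)
# first two points share a voxel, the third sits alone -> 2 output points
```

Larger voxel sizes trade spatial detail for throughput, which is often the practical knob when GPU acceleration is not available.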

3D LiDAR Mobility provides rich environmental understanding and robust perception, enabling robots to operate in more complex real-world scenarios.