Navigation and Mapping
To understand an unknown environment and navigate to a desired destination, a robot must have a clear picture of its surroundings. Especially in the absence of GPS data, a simultaneous localization and mapping (SLAM) algorithm can help a robot make effective decisions and plan a path through its environment.
SLAM consists of these two processes:
Localization — Estimating the pose of the robot in a known environment.
Mapping — Building a map of an unknown environment by using a known robot pose and sensor data.
Localization requires the robot to have a map of the environment, and mapping requires a good pose estimate. In the SLAM process, a robot creates a map of an environment while localizing itself within it.
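As a rough sketch of this loop, the lidarSLAM object (from Navigation Toolbox) localizes each incoming scan against the map built so far and then extends the map with it. In this sketch, scans is an assumed cell array of lidarScan objects recorded by the robot, and the range and resolution values are illustrative:

```matlab
% Sketch of a 2-D lidar SLAM loop; scans is assumed to hold lidarScan objects.
maxLidarRange = 8;     % meters; depends on the sensor (assumed value)
mapResolution = 20;    % grid cells per meter (assumed value)
slamAlg = lidarSLAM(mapResolution, maxLidarRange);
slamAlg.LoopClosureThreshold = 200;   % tune for your environment
slamAlg.LoopClosureSearchRadius = 8;

for i = 1:numel(scans)
    % Each call localizes the scan against the current map, then adds it.
    addScan(slamAlg, scans{i});
end

% Retrieve the optimized poses and build an occupancy map from them.
[scansAtPoses, optimizedPoses] = scansAndPoses(slamAlg);
map = buildMap(scansAtPoses, optimizedPoses, mapResolution, maxLidarRange);
show(map)
```

The loop-closure properties control when the algorithm recognizes a previously visited place and corrects accumulated drift in the pose graph.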
To perform SLAM, you must preprocess point clouds. Lidar Toolbox™ provides functions to extract features from point clouds and use them to register point clouds to one another. For example, you can use fast point feature histogram (FPFH) feature extraction in a 3-D SLAM workflow for aerial data.
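A minimal sketch of feature-based registration, assuming ptCloudFixed and ptCloudMoving are existing pointCloud objects and the downsampling grid step is an illustrative value:

```matlab
% Downsample before feature extraction to reduce cost (assumed grid step).
fixedDown  = pcdownsample(ptCloudFixed, 'gridAverage', 0.2);
movingDown = pcdownsample(ptCloudMoving, 'gridAverage', 0.2);

% Extract FPFH descriptors and match them between the two clouds.
fixedFeatures  = extractFPFHFeatures(fixedDown);
movingFeatures = extractFPFHFeatures(movingDown);
indexPairs = pcmatchfeatures(movingFeatures, fixedFeatures, movingDown, fixedDown);

% Estimate a rigid transformation from the matched points and apply it.
matchedMoving = select(movingDown, indexPairs(:,1));
matchedFixed  = select(fixedDown, indexPairs(:,2));
tform = estgeotform3d(matchedMoving.Location, matchedFixed.Location, 'rigid');
ptCloudAligned = pctransform(ptCloudMoving, tform);
```

In a full SLAM workflow, the estimated transformations between successive clouds become edges in a pose graph that is later optimized.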
You can also perform SLAM by using 2-D lidar scans. By storing the data for a 2-D lidar scan in a lidarScan object, you can perform scan matching to estimate the pose of the robot. For more information, see Build Map from 2-D Lidar Scans Using SLAM.
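Scan matching between two 2-D scans can be sketched as follows, assuming refRanges, refAngles, currRanges, and currAngles are range and angle vectors from a 2-D lidar sensor:

```matlab
% Build lidarScan objects from assumed range/angle data.
refScan  = lidarScan(refRanges, refAngles);
currScan = lidarScan(currRanges, currAngles);

% Estimate the relative pose [x y theta] of the current scan
% with respect to the reference scan.
pose = matchScans(currScan, refScan);

% Transform the current scan into the reference frame to check alignment.
currScanAligned = transformScan(currScan, pose);
```

Chaining these relative pose estimates over a sequence of scans yields the odometry that a 2-D SLAM algorithm refines with loop closures.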
Lidar Toolbox supports various graph-based SLAM workflows, including 2-D SLAM, 3-D SLAM, online SLAM, and offline SLAM.
Functions
Topics
Understand the point cloud registration and mapping workflow.
This example shows how to estimate a rigid transformation between two point clouds.
- Match and Visualize Corresponding Features in Point Clouds
This example shows how to match corresponding features between point clouds using the pcmatchfeatures function and visualize them using the pcshowMatchedFeatures function.
This example shows you how to generate lidar point cloud data for a driving scene with roads, pedestrians, and vehicles.
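The match-and-visualize workflow above can be sketched as follows, assuming ptCloud1 and ptCloud2 are existing pointCloud objects:

```matlab
% Extract FPFH descriptors for both clouds and match them.
features1 = extractFPFHFeatures(ptCloud1);
features2 = extractFPFHFeatures(ptCloud2);
indexPairs = pcmatchfeatures(features1, features2, ptCloud1, ptCloud2);

% Select the matched points and display the correspondences.
matched1 = select(ptCloud1, indexPairs(:,1));
matched2 = select(ptCloud2, indexPairs(:,2));
pcshowMatchedFeatures(ptCloud1, ptCloud2, matched1, matched2)
```

The visualization draws lines between matched points, which makes outlier correspondences easy to spot before registration.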