Localize TurtleBot Using Monte Carlo Localization Algorithm
Apply the Monte Carlo Localization algorithm on a TurtleBot® robot in a simulated Gazebo® environment.
Monte Carlo Localization (MCL) is an algorithm to localize a robot using a particle filter. The algorithm requires a known map, and the task is to estimate the pose (position and orientation) of the robot within the map based on the motion and sensing of the robot. The algorithm starts with an initial belief of the probability distribution of the robot's pose, represented by particles distributed according to that belief. These particles are propagated following the robot's motion model each time the robot's pose changes. Upon receiving new sensor readings, each particle evaluates its accuracy by checking how likely it would be to receive those sensor readings at its current pose. Next, the algorithm redistributes (resamples) the particles to bias toward the more accurate particles. Iterating these move, sense, and resample steps, all particles should converge to a single cluster near the true pose of the robot if localization is successful.
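To make the move-sense-resample cycle concrete, here is a minimal, self-contained sketch of a particle filter localizing a robot along a 1-D corridor of known length. It is illustrative only and is not the AMCL implementation used below; the corridor length, step size, and noise levels are arbitrary assumptions.

% Minimal 1-D MCL sketch (illustrative only): the robot moves along a
% corridor of known length and measures the distance to the far wall.
rng(0)
corridorLength = 10;                                   % known "map"
truePose       = 2;                                    % true position, unknown to the filter
numParticles   = 200;
particles      = corridorLength*rand(numParticles,1);  % initial belief: uniform over the corridor

for step = 1:20
    % 1. Move: the robot advances 0.3 m; propagate particles with motion noise.
    truePose  = truePose + 0.3;
    particles = particles + 0.3 + 0.05*randn(numParticles,1);

    % 2. Sense: noisy range to the far wall; weight particles by measurement likelihood.
    z         = (corridorLength - truePose) + 0.2*randn;
    predicted = corridorLength - particles;
    weights   = exp(-0.5*((z - predicted)/0.2).^2);
    weights   = weights/sum(weights);

    % 3. Resample: draw particles in proportion to their weights.
    edges      = [0; cumsum(weights)];
    edges(end) = 1;                     % guard against floating-point round-off
    idx        = discretize(rand(numParticles,1), edges);
    particles  = particles(idx);
end
fprintf('True pose: %.2f m, MCL estimate: %.2f m\n', truePose, mean(particles))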
Adaptive Monte Carlo Localization (AMCL) is the variant of MCL implemented in the monteCarloLocalization object. AMCL dynamically adjusts the number of particles based on KL-distance [1] to ensure that the particle distribution converges, with high probability, to the true distribution of the robot state based on all past sensor and motion measurements.
The current MATLAB® AMCL implementation can be applied to any differential drive robot equipped with a range finder.
The Gazebo TurtleBot simulation must be running for this example to work.
Prerequisites: related examples in ROS Toolbox.
Connect to the TurtleBot in Gazebo
First, spawn a simulated TurtleBot inside an office environment in a virtual machine by following the steps in the related ROS Toolbox example to launch the Gazebo Office world from the desktop.
In your MATLAB instance on the host computer, run the following commands to initialize the ROS global node in MATLAB and connect to the ROS master in the virtual machine through its IP address, ipAddress. Replace ipAddress with the IP address of your TurtleBot in the virtual machine.
ipAddress = '192.168.2.150';
rosinit(ipAddress,11311);
Initializing global node /matlab_global_node_73841 with NodeURI http://192.168.2.1:53461/
The layout of the simulated office environment:
Load the Map of the Simulation World
Load a binary occupancy grid of the office environment in Gazebo. The map is generated by driving the TurtleBot inside the office environment. The map is constructed using range-bearing readings from the Kinect® and ground truth poses from the gazebo/model_states topic.
load officemap.mat
show(map)
Set Up the Laser Sensor Model and TurtleBot Motion Model
The TurtleBot can be modeled as a differential drive robot, and its motion can be estimated using odometry data. The Noise property defines the uncertainty in the robot's rotational and linear motion. Increasing the odometryModel.Noise property allows more spread when propagating particles using odometry measurements. Refer to odometryMotionModel for property details.
odometryModel = odometryMotionModel;
odometryModel.Noise = [0.2 0.2 0.2 0.2];
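As a quick illustration of the effect of the Noise property, the sketch below propagates a few hypothetical particles through the motion model, assuming the object applies the relative motion between consecutive odometry inputs; the particle and odometry values are made up.

% Illustrative only: propagate hypothetical particles with the motion model.
prevParticles = repmat([0 0 0], 5, 1);                      % five identical [x y yaw] hypotheses
predParticles = odometryModel(prevParticles, [0 0 0]);      % first odometry reading
predParticles = odometryModel(predParticles, [0.5 0 0.1]);  % robot moved; particles spread
disp(predParticles)                                         % spread reflects odometryModel.Noise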
The sensor on the TurtleBot is a simulated range finder converted from Kinect readings. The likelihood field method is used to compute the probability of perceiving a set of measurements by comparing the end points of the range finder measurements to the occupancy map. If the end points match the occupied points in the occupancy map, the probability of perceiving such measurements is high. The sensor model should be tuned to match the actual sensor properties to achieve better test results. The SensorLimits property defines the minimum and maximum range of sensor readings. The Map property defines the occupancy map used for computing the likelihood field. Refer to likelihoodFieldSensorModel for property details.
rangeFinderModel = likelihoodFieldSensorModel;
rangeFinderModel.SensorLimits = [0.45 8];
rangeFinderModel.Map = map;
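To illustrate the end-point matching idea behind the likelihood field method (this is not how likelihoodFieldSensorModel is implemented internally), the sketch below scores one hypothetical particle pose by checking how many scan end points land on occupied map cells. It assumes a lidarScan object named scan, such as the one created in the localization loop below, and ignores the sensor-to-base offset for simplicity.

% Illustrative only: crude end-point match score for one hypothetical particle.
particlePose = [2.0 1.5 pi/4];                % hypothetical [x y yaw] in the map frame
pts = scan.Cartesian;                         % scan end points in the sensor frame
R = [cos(particlePose(3)) -sin(particlePose(3)); ...
     sin(particlePose(3))  cos(particlePose(3))];
ptsMap = (R*pts')' + particlePose(1:2);       % end points expressed in the map frame

% Keep only points inside the map bounds before querying occupancy.
inMap = ptsMap(:,1) > map.XWorldLimits(1) & ptsMap(:,1) < map.XWorldLimits(2) & ...
        ptsMap(:,2) > map.YWorldLimits(1) & ptsMap(:,2) < map.YWorldLimits(2);
occ = getOccupancy(map, ptsMap(inMap,:));     % true where a point lies on an occupied cell
score = mean(occ);                            % fraction of end points that hit obstacles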
Set rangeFinderModel.SensorPose to the coordinate transform of the fixed camera with respect to the robot base. This is used to transform the laser readings from the camera frame to the base frame of the TurtleBot. Refer to the ROS Toolbox documentation on coordinate transformations for details.
Note that currently SensorModel is only compatible with sensors that are fixed on the robot's frame, which means the sensor transform is constant.
% Query the transformation tree (tf tree) in ROS.
tftree = rostf;
waitForTransform(tftree,'/base_link','/base_scan');
sensorTransform = getTransform(tftree,'/base_link', '/base_scan');

% Get the Euler rotation angles.
laserQuat = [sensorTransform.Transform.Rotation.W sensorTransform.Transform.Rotation.X ...
    sensorTransform.Transform.Rotation.Y sensorTransform.Transform.Rotation.Z];
laserRotation = quat2eul(laserQuat, 'ZYX');

% Set up the SensorPose, which includes the translation along base_link's
% X, Y direction in meters and rotation angle about base_link's Z axis
% in radians.
rangeFinderModel.SensorPose = ...
    [sensorTransform.Transform.Translation.X sensorTransform.Transform.Translation.Y laserRotation(1)];
Receiving Sensor Measurements and Sending Velocity Commands
Create ROS subscribers for retrieving sensor and odometry measurements from the TurtleBot.
laserSub = rossubscriber('/scan');
odomSub = rossubscriber('/odom');
Create a ROS publisher for sending velocity commands to the TurtleBot. The TurtleBot subscribes to the '/cmd_vel' topic for velocity commands.
[velPub,velMsg] = ...
    rospublisher('/cmd_vel','geometry_msgs/Twist');
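For reference, a single velocity command could be sent as shown below; the values are arbitrary, and in this example the wanderer helper issues the velocity commands instead.

% Illustrative only: command a slow forward motion with one message.
velMsg.Linear.X  = 0.1;    % forward speed in m/s
velMsg.Angular.Z = 0.0;    % yaw rate in rad/s
send(velPub,velMsg);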
Initialize AMCL Object
Instantiate an AMCL object, amcl. See monteCarloLocalization for more information on the class.
amcl = monteCarloLocalization;
amcl.UseLidarScan = true;
Assign the MotionModel and SensorModel properties in the amcl object.
amcl.MotionModel = odometryModel;
amcl.SensorModel = rangeFinderModel;
The particle filter only updates the particles when the robot's movement exceeds the UpdateThresholds, which define the minimum displacement in [x, y, yaw] required to trigger a filter update. This prevents overly frequent updates due to sensor noise. Particle resampling happens after every amcl.ResamplingInterval filter updates. Using a larger value leads to slower particle depletion, at the price of slower particle convergence as well.
amcl.UpdateThresholds = [0.2,0.2,0.2];
amcl.ResamplingInterval = 1;
Configure AMCL Object for Localization with Initial Pose Estimate
amcl.ParticleLimits defines the lower and upper bounds on the number of particles generated during the resampling process. Allowing more particles to be generated may improve the chance of converging to the true robot pose, but it slows computation, and the particles may take longer to converge or even fail to converge. Refer to the 'KL-D Sampling' section in [1] for computing a reasonable bound on the number of particles. Note that global localization may need significantly more particles compared to localization with an initial pose estimate. If the robot knows its initial pose with some uncertainty, that additional information can help AMCL localize the robot faster with fewer particles, that is, you can use a smaller upper bound in amcl.ParticleLimits.
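As a rough guide, the KL-D sampling bound from [1] can be evaluated directly. The numbers below (number of occupied histogram bins k, error bound epsilon, and normal quantile z) are assumptions you would replace with values appropriate to your map and accuracy requirements.

% Rough KL-D sampling bound on the number of particles (see [1]).
k       = 50;        % assumed number of histogram bins with support
epsilon = 0.05;      % bound on the KL divergence between sampled and true distributions
z       = 2.33;      % standard normal quantile for (1 - delta), here delta = 0.01
n = (k-1)/(2*epsilon) * (1 - 2/(9*(k-1)) + sqrt(2/(9*(k-1)))*z)^3;
fprintf('Suggested upper bound on particles: %d\n', ceil(n))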
Now set amcl.GlobalLocalization to false and provide an estimated initial pose to AMCL. By doing so, AMCL holds the initial belief that the robot's true pose follows a Gaussian distribution with a mean equal to amcl.InitialPose and a covariance matrix equal to amcl.InitialCovariance. The initial pose estimate should be obtained according to your setup. This example helper retrieves the robot's current true pose from Gazebo.
Refer to the section Configure AMCL Object for Global Localization for an example of using global localization.
amcl.ParticleLimits = [500 5000];
amcl.GlobalLocalization = false;
amcl.InitialPose = ExampleHelperAMCLGazeboTruePose;
amcl.InitialCovariance = eye(3)*0.5;
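If the Gazebo ground-truth helper is not available in your setup, you can replace the two corresponding assignments above with a manually estimated pose. The values below are placeholders, not the actual starting pose in the office world.

% Alternative (placeholder values): supply a manually estimated initial pose.
amcl.InitialPose = [1.0 1.5 0.2];               % rough [x y yaw] guess in the map frame
amcl.InitialCovariance = diag([0.5 0.5 0.3]);   % looser estimate, so larger covariance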
Set Up Helper for Visualization and Driving TurtleBot
Set up ExampleHelperAMCLVisualization to plot the map and update the robot's estimated pose, particles, and laser scan readings on the map.
visualizationHelper = ExampleHelperAMCLVisualization(map);
Robot motion is essential for the AMCL algorithm. In this example, we drive the TurtleBot randomly using the ExampleHelperAMCLWanderer class, which drives the robot inside the environment while avoiding obstacles.
wanderHelper = ...
    ExampleHelperAMCLWanderer(laserSub, sensorTransform, velPub, velMsg);
Localization Procedure
The AMCL algorithm is updated with odometry and sensor readings at each time step while the robot is moving around. Allow a few seconds before the particles are initialized and plotted in the figure. In this example, we run numUpdates AMCL updates. If the robot doesn't converge to the correct pose, consider using a larger numUpdates.
numUpdates = 60;
i = 0;
while i < numUpdates
    % Receive laser scan and odometry message.
    scanMsg = receive(laserSub);
    odompose = odomSub.LatestMessage;

    % Create lidarScan object to pass to the AMCL object.
    scan = lidarScan(scanMsg);

    % For sensors that are mounted upside down, you need to reverse the
    % order of scan angle readings using the 'flip' function.

    % Compute the robot's pose [x,y,yaw] from the odometry message.
    odomQuat = [odompose.Pose.Pose.Orientation.W, odompose.Pose.Pose.Orientation.X, ...
        odompose.Pose.Pose.Orientation.Y, odompose.Pose.Pose.Orientation.Z];
    odomRotation = quat2eul(odomQuat);
    pose = [odompose.Pose.Pose.Position.X, odompose.Pose.Pose.Position.Y odomRotation(1)];

    % Update the estimated robot pose and covariance using new odometry and
    % sensor readings.
    [isUpdated,estimatedPose,estimatedCovariance] = amcl(pose, scan);

    % Drive the robot to the next pose.
    wander(wanderHelper);

    % Plot the robot's estimated pose, particles, and laser scans on the map.
    if isUpdated
        i = i + 1;
        plotStep(visualizationHelper, amcl, estimatedPose, scan, i)
    end
end
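If you want a programmatic indication of convergence in addition to the plots, one option is to watch the translational covariance shrink by adding a check like the following inside the loop, after the call to amcl. This is a sketch only: it assumes the returned covariance is a 3-by-3 matrix over [x y yaw], and the 0.05 threshold is an arbitrary choice.

% Optional, illustrative convergence check (place inside the update branch):
% a small translational covariance suggests a tight particle cluster.
if isUpdated && trace(estimatedCovariance(1:2,1:2)) < 0.05
    disp('Pose estimate appears to have converged.')
end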
Stop the TurtleBot and Shut Down ROS in MATLAB
stop(wanderHelper);
rosshutdown
Shutting down global node /matlab_global_node_73841 with NodeURI http://192.168.2.1:53461/
Sample Results for AMCL Localization with Initial Pose Estimate
AMCL is a probabilistic algorithm, so the simulation result on your computer may differ slightly from the sample run shown here.
After the first AMCL update, particles are generated by sampling a Gaussian distribution with mean equal to amcl.InitialPose and covariance equal to amcl.InitialCovariance.
After 8 updates, the particles start converging to areas with higher likelihood:
After 60 updates, all particles should converge to the correct robot pose, and the laser scans should closely align with the map outlines.
Configure AMCL Object for Global Localization
If no initial robot pose estimate is available, AMCL tries to localize the robot without knowing its initial position. The algorithm initially assumes that the robot is equally likely to be anywhere in the office's free space and generates uniformly distributed particles inside that space. Thus, global localization requires significantly more particles than localization with an initial pose estimate.
To enable the AMCL global localization feature, replace the code in the section Configure AMCL Object for Localization with Initial Pose Estimate with the code in this section.
amcl.GlobalLocalization = true;
amcl.ParticleLimits = [500 50000];
Sample Results for AMCL Global Localization
AMCL is a probabilistic algorithm, so the simulation result on your computer may differ slightly from the sample run shown here.
After the first AMCL update, particles are uniformly distributed inside the free office space:
After 8 updates, the particles start converging to areas with higher likelihood:
After 60 updates, all particles should converge to the correct robot pose, and the laser scans should closely align with the map outlines.
References
[1] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. Cambridge, MA: MIT Press, 2005.