
Visual-Inertial Odometry Using Synthetic Data

This example shows how to estimate the pose (position and orientation) of a ground vehicle using an inertial measurement unit (IMU) and a monocular camera. In this example, you:

  1. Create a driving scenario containing the ground truth trajectory of the vehicle.

  2. Use an IMU and visual odometry model to generate measurements.

  3. Fuse these measurements to estimate the pose of the vehicle and then display the results.

Visual-inertial odometry estimates pose by fusing the visual odometry pose estimate from the monocular camera with the pose estimate from the IMU. The IMU returns an accurate pose estimate for small time intervals, but suffers from large drift due to integrating the inertial sensor measurements. The monocular camera returns an accurate pose estimate over a larger time interval, but suffers from a scale ambiguity. Given these complementary strengths and weaknesses, the fusion of these sensors using visual-inertial odometry is a suitable choice. This method can be used in scenarios where GPS readings are unavailable, such as in an urban canyon.

Create a Driving Scenario with Trajectory

Create a drivingScenario (Automated Driving Toolbox) object that contains:

  • The road the vehicle travels on

  • The buildings surrounding either side of the road

  • The ground truth pose of the vehicle

  • The estimated pose of the vehicle

The ground truth pose of the vehicle is shown as a solid blue cuboid. The estimated pose is shown as a transparent blue cuboid. Note that the estimated pose does not appear in the initial visualization because the ground truth and estimated poses overlap.

Generate the baseline trajectory for the ground vehicle using the waypointTrajectory System object™. Note that waypointTrajectory is used in place of drivingScenario/trajectory because the acceleration of the vehicle is needed. The trajectory is generated at a specified sampling rate using a set of waypoints, times of arrival, and velocities.

% Create the driving scenario with both the ground truth and estimated
% vehicle poses.
scene = drivingScenario;
groundTruthVehicle = vehicle(scene, 'PlotColor', [0 0.4470 0.7410]);
estVehicle = vehicle(scene, 'PlotColor', [0 0.4470 0.7410]);

% Generate the baseline trajectory.
sampleRate = 100;
waypoints = [  0   0 0;
             200   0 0;
             200  50 0;
             200 230 0;
             215 245 0;
             260 245 0;
             290 240 0;
             310 258 0;
             290 275 0;
             260 260 0;
             -20 260 0];
t = [0 20 25 44 46 50 54 56 59 63 90].';
speed = 10;
velocities = [ speed     0 0;
               speed     0 0;
                   0 speed 0;
                   0 speed 0;
               speed     0 0;
               speed     0 0;
               speed     0 0;
                   0 speed 0;
              -speed     0 0;
              -speed     0 0;
              -speed     0 0];
traj = waypointTrajectory(waypoints, 'TimeOfArrival', t, ...
    'Velocities', velocities, 'SampleRate', sampleRate);

% Add a road and buildings to scene and visualize.
helperPopulateScene(scene, groundTruthVehicle);

Create a Fusion Filter

Create the filter to fuse IMU and visual odometry measurements. This example uses a loosely coupled method to fuse the measurements. While the results are not as accurate as those of a tightly coupled method, the amount of processing required is significantly less and the results are adequate. The fusion filter uses an error-state Kalman filter to track orientation (as a quaternion), position, velocity, and sensor biases.

The insfilterErrorState object has the following functions to process sensor data: predict and fusemvo.

The predict function takes the accelerometer and gyroscope measurements from the IMU as inputs. Call the predict function each time the accelerometer and gyroscope are sampled. This function predicts the state forward by one time step based on the accelerometer and gyroscope measurements, and updates the error-state covariance of the filter.

The fusemvo function takes the visual odometry pose estimates as input. This function updates the error states based on the visual odometry pose estimates by computing a Kalman gain that weighs the various inputs according to their uncertainty. As with the predict function, this function also updates the error-state covariance, this time taking the Kalman gain into account. The state is then updated using the new error state, and the error state is reset.
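The following is a minimal, self-contained sketch of this predict/correct cycle using assumed stationary measurements (filtDemo, accelMeas, gyroMeas, posMeas, and orientMeas are illustrative names, not part of this example); the complete loop appears in the Run the Simulation Loop section.

filtDemo = insfilterErrorState('IMUSampleRate', 100, 'ReferenceFrame', 'ENU');
% One stationary IMU sample: the accelerometer measures the reaction to
% gravity (+Z up in the ENU frame) and the gyroscope measures no rotation.
accelMeas = [0 0 9.81]; % m/s^2
gyroMeas = [0 0 0];     % rad/s
predict(filtDemo, accelMeas, gyroMeas); % IMU-rate prediction step
% One visual odometry sample with scalar measurement covariances.
posMeas = [0 0 0]; % m
orientMeas = quaternion(1, 0, 0, 0);
fusemvo(filtDemo, posMeas, 0.1, orientMeas, 0.1); % camera-rate correction
[posDemo, orientDemo, velDemo] = pose(filtDemo);  % read the corrected pose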

filt = insfilterErrorState('IMUSampleRate', sampleRate, ...
    'ReferenceFrame', 'ENU')
% Set the initial state and error state covariance.
helperInitialize(filt, traj);
filt = 
  insfilterErrorState with properties:

        IMUSampleRate: 100               Hz         
    ReferenceLocation: [0 0 0]           [deg deg m]
                State: [17x1 double]                
      StateCovariance: [16x16 double]               

   Process Noise Variances
            GyroscopeNoise: [1e-06 1e-06 1e-06]       (rad/s)²
        AccelerometerNoise: [0.0001 0.0001 0.0001]    (m/s²)²
        GyroscopeBiasNoise: [1e-09 1e-09 1e-09]       (rad/s)²
    AccelerometerBiasNoise: [0.0001 0.0001 0.0001]    (m/s²)²

Specify the Visual Odometry Model

Define the visual odometry model parameters. These parameters model a feature-matching and tracking-based visual odometry system using a monocular camera. The scale parameter accounts for the unknown scale of subsequent vision frames of the monocular camera. The other parameters model the drift in the visual odometry reading as a combination of white noise and a first-order Gauss-Markov process.
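As a compact restatement of what the helperVisualOdometryModel function (see Supporting Functions) computes at each camera sample k, with scale s, time constant τ (tau), and noise levels σ_N (sigmaN) and σ_B (sigmaB):

$$\sigma_A = \sqrt{\frac{2}{\tau} + \frac{1}{\tau^2}}\,\sigma_B, \qquad b_k = \Bigl(1 - \frac{1}{\tau}\Bigr) b_{k-1} + w_k, \quad w_k \sim \mathcal{N}(0, \sigma_A^2 I)$$

$$d_k = n_k + b_k, \quad n_k \sim \mathcal{N}(0, \sigma_N^2 I), \qquad p_{\mathrm{VO},k} = s\,p_k + d_k$$

where p_k is the ground truth position, b_k is the Gauss-Markov drift bias, d_k is the total drift, and p_VO,k is the visual odometry position reading.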

% The flag useVO determines if visual odometry is used:
% useVO = false; % Only IMU is used.
useVO = true; % Both IMU and visual odometry are used.

paramsVO.scale = 2;
paramsVO.sigmaN = 0.139;
paramsVO.tau = 232;
paramsVO.sigmaB = sqrt(1.34);
paramsVO.driftBias = [0 0 0];

Specify the IMU Sensor

Define an IMU sensor model containing an accelerometer and gyroscope using the imuSensor System object. The sensor model contains properties to model both deterministic and stochastic noise sources. The property values set here are typical for low-cost MEMS sensors.

% Set the RNG seed to default to obtain the same results for subsequent
% runs.
rng('default')

imu = imuSensor('SampleRate', sampleRate, 'ReferenceFrame', 'ENU');

% Accelerometer
imu.Accelerometer.MeasurementRange = 19.6; % m/s^2
imu.Accelerometer.Resolution = 0.0024; % m/s^2/LSB
imu.Accelerometer.NoiseDensity = 0.01; % (m/s^2)/sqrt(Hz)

% Gyroscope
imu.Gyroscope.MeasurementRange = deg2rad(250); % rad/s
imu.Gyroscope.Resolution = deg2rad(0.0625); % rad/s/LSB
imu.Gyroscope.NoiseDensity = deg2rad(0.0573); % (rad/s)/sqrt(Hz)
imu.Gyroscope.ConstantBias = deg2rad(2); % rad/s
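
As an optional sanity check (not part of this example; imuCheck is an illustrative name), you can clone the sensor, switch the clone to a private random stream so the global RNG used by the rest of the example is left untouched, and confirm that the 2 deg/s constant gyroscope bias is visible in stationary measurements:

% Clone the unlocked sensor and give it its own random stream (an
% imuSensor property), so the global stream set by rng('default') above
% is not advanced by this check.
imuCheck = clone(imu);
imuCheck.RandomStream = 'mt19937ar with seed';
[~, gyroCheck] = imuCheck(zeros(1000,3), zeros(1000,3)); % stationary input
mean(gyroCheck) % each axis should be near deg2rad(2), about 0.035 rad/s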

Set Up the Simulation

Specify the amount of time to run the simulation and initialize variables that are logged during the simulation loop.

% Run the simulation for 60 seconds.
numSecondsToSimulate = 60;
numIMUSamples = numSecondsToSimulate * sampleRate;

% Define the visual odometry sampling rate.
imuSamplesPerCamera = 4;
numCameraSamples = ceil(numIMUSamples / imuSamplesPerCamera);

% Preallocate data arrays for plotting results.
[pos, orient, vel, acc, angVel, ...
    posVO, orientVO, ...
    posEst, orientEst, velEst] ...
    = helperPreallocateData(numIMUSamples, numCameraSamples);

% Set measurement noise parameters for the visual odometry fusion.
RposVO = 0.1;
RorientVO = 0.1;

Run the Simulation Loop

Run the simulation at the IMU sampling rate. Each IMU sample is used to predict the filter state forward by one time step. Once a new visual odometry reading is available, it is used to correct the current filter state.

There is some drift in the filter estimates that can be further corrected with an additional sensor such as a GPS or an additional constraint such as a road boundary map.

cameraIdx = 1;
for i = 1:numIMUSamples
    % Generate ground truth trajectory values.
    [pos(i,:), orient(i,:), vel(i,:), acc(i,:), angVel(i,:)] = traj();

    % Generate accelerometer and gyroscope measurements from the ground truth
    % trajectory values.
    [accelMeas, gyroMeas] = imu(acc(i,:), angVel(i,:), orient(i));

    % Predict the filter state forward one time step based on the
    % accelerometer and gyroscope measurements.
    predict(filt, accelMeas, gyroMeas);

    if (1 == mod(i, imuSamplesPerCamera)) && useVO
        % Generate a visual odometry pose estimate from the ground truth
        % values and the visual odometry model.
        [posVO(cameraIdx,:), orientVO(cameraIdx,:), paramsVO] = ...
            helperVisualOdometryModel(pos(i,:), orient(i,:), paramsVO);

        % Correct filter state based on visual odometry data.
        fusemvo(filt, posVO(cameraIdx,:), RposVO, ...
            orientVO(cameraIdx), RorientVO);

        cameraIdx = cameraIdx + 1;
    end

    [posEst(i,:), orientEst(i,:), velEst(i,:)] = pose(filt);

    % Update estimated vehicle pose.
    helperUpdatePose(estVehicle, posEst(i,:), velEst(i,:), orientEst(i));

    % Update ground truth vehicle pose.
    helperUpdatePose(groundTruthVehicle, pos(i,:), vel(i,:), orient(i));

    % Update driving scenario visualization.
    updatePlots(scene);
    drawnow limitrate;
end

Plot the Results

Plot the ground truth vehicle trajectory, the visual odometry estimate, and the fusion filter estimate.

figure
if useVO
    plot3(pos(:,1), pos(:,2), pos(:,3), '-.', ...
        posVO(:,1), posVO(:,2), posVO(:,3), ...
        posEst(:,1), posEst(:,2), posEst(:,3), ...
        'LineWidth', 3)
    legend('Ground Truth', 'Visual Odometry (VO)', ...
        'Visual-Inertial Odometry (VIO)', 'Location', 'northeast')
else
    plot3(pos(:,1), pos(:,2), pos(:,3), '-.', ...
        posEst(:,1), posEst(:,2), posEst(:,3), ...
        'LineWidth', 3)
    legend('Ground Truth', 'IMU Pose Estimate')
end
view(-90, 90)
title('Vehicle Position')
xlabel('X (m)')
ylabel('Y (m)')
grid on

The plot shows that the visual odometry estimate is relatively accurate in estimating the shape of the trajectory. The fusion of the IMU and visual odometry measurements removes the scale factor uncertainty from the visual odometry measurements and the drift from the IMU measurements.
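
As an optional, quantitative complement to the plot (not part of this example; vioRMSE and voRMSE are illustrative names), you can compare the position RMSE of the fused estimate and of the raw visual odometry estimate against ground truth. The visual odometry samples correspond to every imuSamplesPerCamera-th IMU sample, starting at the first:

vioRMSE = sqrt(mean(sum((posEst - pos).^2, 2)));
voRMSE = sqrt(mean(sum((posVO - pos(1:imuSamplesPerCamera:end,:)).^2, 2)));
fprintf('VIO position RMSE: %.2f m, VO position RMSE: %.2f m\n', ...
    vioRMSE, voRMSE);

Because of the scale factor of 2 in the visual odometry model, the raw VO error is expected to be much larger than the fused VIO error.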

Supporting Functions

helperVisualOdometryModel

Compute the visual odometry measurement from the ground truth input and the parameters struct. To model the uncertainty in the scaling between subsequent frames of the monocular camera, a constant scaling factor combined with a random drift is applied to the ground truth position.

function [posVO, orientVO, paramsVO] ...
    = helperVisualOdometryModel(pos, orient, paramsVO)

% Extract model parameters.
scaleVO = paramsVO.scale;
sigmaN = paramsVO.sigmaN;
tau = paramsVO.tau;
sigmaB = paramsVO.sigmaB;
sigmaA = sqrt((2/tau) + 1/(tau*tau))*sigmaB;
b = paramsVO.driftBias;

% Calculate drift.
b = (1 - 1/tau).*b + randn(1,3)*sigmaA;
drift = randn(1,3)*sigmaN + b;
paramsVO.driftBias = b;

% Calculate visual odometry measurements.
posVO = scaleVO*pos + drift;
orientVO = orient;
end

helperInitialize

Set the initial state and covariance values for the fusion filter.

function helperInitialize(filt, traj)

% Retrieve the initial position, orientation, and velocity from the
% trajectory object and reset the internal states.
[pos, orient, vel] = traj();
reset(traj);

% Set the initial state values.
filt.State(1:4) = compact(orient(1)).';
filt.State(5:7) = pos(1,:).';
filt.State(8:10) = vel(1,:).';

% Set the gyroscope bias and visual odometry scale factor covariance to
% large values corresponding to low confidence.
filt.StateCovariance(10:12,10:12) = 1e6;
filt.StateCovariance(end) = 2e2;
end

helperPreallocateData

Preallocate data to log the simulation results.

function [pos, orient, vel, acc, angVel, ...
    posVO, orientVO, ...
    posEst, orientEst, velEst] ...
    = helperPreallocateData(numIMUSamples, numCameraSamples)

% Specify ground truth.
pos = zeros(numIMUSamples, 3);
orient = quaternion.zeros(numIMUSamples, 1);
vel = zeros(numIMUSamples, 3);
acc = zeros(numIMUSamples, 3);
angVel = zeros(numIMUSamples, 3);

% Visual odometry output.
posVO = zeros(numCameraSamples, 3);
orientVO = quaternion.zeros(numCameraSamples, 1);

% Filter output.
posEst = zeros(numIMUSamples, 3);
orientEst = quaternion.zeros(numIMUSamples, 1);
velEst = zeros(numIMUSamples, 3);
end

helperUpdatePose

Update the pose of the vehicle.

function helperUpdatePose(veh, pos, vel, orient)
veh.Position = pos;
veh.Velocity = vel;
rpy = eulerd(orient, 'ZYX', 'frame');
veh.Yaw = rpy(1);
veh.Pitch = rpy(2);
veh.Roll = rpy(3);
end

References

  • Sola, J. "Quaternion Kinematics for the Error-State Kalman Filter." arXiv e-prints, arXiv:1711.02508v1 [cs.RO], 3 Nov 2017.

  • Jiang, R., R. Klette, and S. Wang. "Modeling of Unbounded Long-Range Drift in Visual Odometry." 2010 Fourth Pacific-Rim Symposium on Image and Video Technology, Nov. 2010, pp. 121-126.
