
Detect, Classify, and Track Vehicles Using Lidar

This example shows how to detect, classify, and track vehicles by using lidar point cloud data captured by a lidar sensor mounted on an ego vehicle. The lidar data used in this example is recorded from a highway-driving scenario. In this example, the point cloud data is segmented to determine the class of objects using the PointSeg network. A joint probabilistic data association (JPDA) tracker with an interacting multiple model (IMM) filter is used to track the detected vehicles.

Overview

The perception module plays an important role in achieving full autonomy for vehicles with an ADAS system. Lidar and camera are essential sensors in the perception workflow. Lidar is good at extracting accurate depth information of objects, while a camera produces rich and detailed information of the environment, which is useful for object classification.

This example mainly includes these parts:

  • Ground plane segmentation

  • Semantic segmentation

  • Oriented bounding box fitting

  • Tracking oriented bounding boxes

The flowchart gives an overview of the whole system.

Load Data

The lidar sensor generates point cloud data in either an organized or an unorganized format. The data used in this example was collected using an Ouster OS1 lidar sensor. This lidar produces an organized point cloud with 64 horizontal scan lines. The point cloud data comprises three channels, representing the x-, y-, and z-coordinates of the points. Each channel is of size 64-by-1024. Use the helper function helperDownloadData to download the data and load it into the MATLAB® workspace.

Note: This download can take a few minutes.

[ptClouds,pretrainedModel] = helperDownloadData;
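
As a quick sanity check, this short sketch (not part of the original example) confirms the organized layout of a frame; Location and Intensity are standard pointCloud properties.

% Sketch: inspect the first frame to confirm the organized layout.
% An organized point cloud stores Location as a 64-by-1024-by-3 array.
ptCloudSample = ptClouds{1};
disp(size(ptCloudSample.Location))   % expected: 64 1024 3
disp(size(ptCloudSample.Intensity))  % expected: 64 1024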

Ground Plane Segmentation

This example employs a hybrid approach that uses the segmentGroundFromLidarData and pcfitplane functions. First, estimate the ground plane parameters using the segmentGroundFromLidarData function. The estimated ground plane is divided into strips along the direction of the vehicle, and the pcfitplane function is used on each strip to fit a plane. This hybrid approach robustly fits the ground plane in a piecewise manner and handles variations in the point cloud.

% Load point cloud
ptCloud = ptClouds{1};
% Define ROI for cropping point cloud
xLimit = [-30,30];
yLimit = [-12,12];
zLimit = [-3,15];
roi = [xLimit,yLimit,zLimit];
% Extract ground plane
[nonGround,ground] = helperExtractGround(ptCloud,roi);
figure;
pcshowpair(nonGround,ground);
axis on;
legend({'\color{white} Nonground','\color{white} Ground'},'Location','northeastoutside');

Semantic Segmentation

This example uses a pretrained PointSeg network model. PointSeg is an end-to-end real-time semantic segmentation network trained for object classes like cars, trucks, and background. The output from the network is a masked image with each pixel labeled per its class. This mask is used to filter different types of objects in the point cloud. The input to the network is a five-channel image: x, y, z, intensity, and range. For more information on the network or how to train it, refer to the Semantic Segmentation Using PointSeg Deep Learning Network example.

Prepare Input Data

The helperPrepareData function generates five-channel data from the loaded point cloud data.

% Load and visualize a sample frame
frame = helperPrepareData(ptCloud);
figure;
subplot(5,1,1);
imagesc(frame(:,:,1));
title('X channel');
subplot(5,1,2);
imagesc(frame(:,:,2));
title('Y channel');
subplot(5,1,3);
imagesc(frame(:,:,3));
title('Z channel');
subplot(5,1,4);
imagesc(frame(:,:,4));
title('Intensity channel');
subplot(5,1,5);
imagesc(frame(:,:,5));
title('Range channel');

Run forward inference on one frame using the loaded pretrained network.

if ~exist('net','var')
    net = pretrainedModel.net;
end
% Define classes
classes = ["background","car","truck"];
% Define color map
lidarColorMap = [
            0.98  0.98   0.00  % yellow color for background
            0.01  0.98   0.01  % green color for car
            0.01  0.01   0.98  % blue color for truck
            ];
% Run forward pass
pxdsResults = semanticseg(frame,net);
% Overlay intensity image with segmented output
segmentedImage = labeloverlay(uint8(frame(:,:,4)),pxdsResults,'Colormap',lidarColorMap,'Transparency',0.5);
% Display results
figure;
imshow(segmentedImage);
helperPixelLabelColorbar(lidarColorMap,classes);

Use the generated semantic mask to filter point clouds containing trucks. Similarly, you can filter point clouds for other classes; a sketch for the car class follows the display code below.

truckIndices = pxdsResults == 'truck';
truckPointCloud = select(nonGround,truckIndices,'OutputSize','full');
% Crop point cloud for better display
croppedPtCloud = select(ptCloud,findPointsInROI(ptCloud,roi));
croppedTruckPtCloud = select(truckPointCloud,findPointsInROI(truckPointCloud,roi));
% Display vehicle and nonvehicle points
figure;
pcshowpair(croppedPtCloud,croppedTruckPtCloud);
axis on;
legend({'\color{white} Nonvehicle','\color{white} Vehicle'},'Location','northeastoutside');
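
As noted above, the same mask-based selection applies to any other class. Here is a minimal sketch for the car class (the variable names are illustrative):

% Sketch: apply the same mask-based selection to the car class.
carIndices = pxdsResults == 'car';
carPointCloud = select(nonGround,carIndices,'OutputSize','full');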

Clustering and Bounding Box Fitting

After extracting point clouds of different object classes, the objects are clustered by applying Euclidean clustering using the pcsegdist function. To group all the points belonging to one single cluster, the point cloud obtained as a cluster is used as seed points for a growing region in the nonground points. Use the findNearestNeighbors function to loop over all the points to grow the region. The extracted cluster is fitted in an L-shape bounding box using the pcfitcuboid function. These clusters of vehicles resemble the shape of the letter L when seen from a top-down view. This feature helps in estimating the orientation of the vehicle. The oriented bounding box fitting helps in estimating the heading angle of the objects, which is useful in applications such as path planning and traffic maneuvering.

The cuboid boundaries of the clusters can also be calculated by finding the minimum and maximum spatial extents in each direction. However, this method fails to estimate the orientation of the detected vehicles. A sketch comparing the two methods follows the display code below.

[labels,numClusters] = pcsegdist(croppedTruckPtCloud,1);
% Define cuboid parameters
params = zeros(0,9);
for clusterIndex = 1:numClusters
    ptsInCluster = labels == clusterIndex;

    pc = select(croppedTruckPtCloud,ptsInCluster);
    location = pc.Location;

    xl = (max(location(:,1)) - min(location(:,1)));
    yl = (max(location(:,2)) - min(location(:,2)));
    zl = (max(location(:,3)) - min(location(:,3)));

    % Filter small bounding boxes
    if size(location,1)*size(location,2) > 20 && any(any(pc.Location)) && xl > 1 && yl > 1
        indices = zeros(0,1);
        objectPtCloud = pointCloud(location);
        for i = 1:size(location,1)
            seedPoint = location(i,:);
            indices(end+1) = findNearestNeighbors(nonGround,seedPoint,1);
        end

        % Remove overlapping indices
        indices = unique(indices);

        % Fit oriented bounding box
        model = pcfitcuboid(select(nonGround,indices));
        params(end+1,:) = model.Parameters;
    end
end
% Display point cloud and detected bounding box
figure;
pcshow(croppedPtCloud.Location,croppedPtCloud.Location(:,3));
showShape('cuboid',params,'Color','red','Label','Truck');
axis on;
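
To see why min/max extents lose orientation, here is a minimal sketch (not part of the original example) that builds an axis-aligned cuboid over the cropped truck points. Unlike the pcfitcuboid result, its yaw angle is always zero:

% Sketch: axis-aligned cuboid from min/max extents, for comparison.
% showShape cuboid format: [xctr yctr zctr xlen ylen zlen xrot yrot zrot]
loc = croppedTruckPtCloud.Location;
loc = loc(all(isfinite(loc),2),:);          % drop invalid points, if any
aaCenter = (min(loc,[],1) + max(loc,[],1))/2;
aaDims   = max(loc,[],1) - min(loc,[],1);
aaParams = [aaCenter, aaDims, 0, 0, 0];     % rotation fixed at zero
figure;
pcshow(loc);
showShape('cuboid',aaParams,'Color','yellow','Label','Axis-aligned');
axis on;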

Visualization Setup

Use the helperLidarObjectDetectionDisplay class to visualize the complete workflow in one window. The layout of the visualization window is divided into the following sections:

  1. Lidar Range Image: point cloud image in 2-D as a range image

  2. Segmented Image: detected labels generated from the semantic segmentation network overlaid with the intensity image, or the fourth channel of the data

  3. Oriented Bounding Box Detection: 3-D point cloud with oriented bounding boxes

  4. Top View: top view of the point cloud with oriented bounding boxes

display = helperLidarObjectDetectionDisplay;

Loop Through Data

The helperLidarObjectDetector class is a wrapper encapsulating all the segmentation, clustering, and bounding box fitting steps mentioned in the above sections. Use its detectBbox function to extract the detected objects.

% Initialize lidar object detector
lidarDetector = helperLidarObjectDetector('Model',net,'XLimits',xLimit,...
    'YLimit',yLimit,'ZLimit',zLimit);
% Prepare 5-D lidar data
inputData = helperPrepareData(ptClouds);
% Set random number generator for reproducible results
s = rng(2018);
% Initialize the display
initializeDisplay(display);
numFrames = numel(inputData);
for count = 1:numFrames

    % Get current data
    input = inputData{count};

    rangeImage = input(:,:,5);

    % Extract bounding boxes from lidar data
    [boundingBox,coloredPtCloud,pointLabels] = detectBbox(lidarDetector,input);

    % Update display with colored point cloud
    updatePointCloud(display,coloredPtCloud);

    % Update bounding boxes
    updateBoundingBox(display,boundingBox);

    % Update segmented image
    updateSegmentedImage(display,pointLabels,rangeImage);

    drawnow('limitrate');
end

Tracking Oriented Bounding Boxes

In this example, you use a joint probabilistic data association (JPDA) tracker. The time step dt is set to 0.1 seconds because the dataset is captured at 10 Hz. The state-space model used in the tracker is based on a cuboid model with parameters [x, y, z, ϕ, l, w, h]. For more details on how to track bounding boxes in lidar data, see the Track Vehicles Using Lidar: From Point Cloud to Track List (Sensor Fusion and Tracking Toolbox) example. In this example, the class information is provided using the ObjectAttributes property of the objectDetection object. When creating new tracks, the filter initialization function, defined using the helper function helperMultiClassInitIMMFilter, uses the class of the detection to set up the initial dimensions of the object. This helps the tracker adjust the bounding box measurement model with the appropriate dimensions of the track.
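
The shipped helper is not listed in this example. As a rough illustration of the idea, here is a hypothetical, single-model sketch (a trackingEKF stand-in rather than the actual IMM helper; the function name, the dimension priors, and the class-ID mapping are all assumptions) of how the ClassID stored in ObjectAttributes can seed the track dimensions:

function filter = exampleInitCuboidFilter(detection)
% Hypothetical sketch, not the shipped helperMultiClassInitIMMFilter.
% State: [x; vx; y; vy; z; vz; phi; L; W; H]
% Measurement: [x; y; z; phi; L; W; H]
meas = detection.Measurement;
classID = detection.ObjectAttributes.ClassID;
% Assumed per-class dimension priors [L W H] in meters (1 = car, 2 = truck)
priorDims = [4.7 1.8 1.4; 8.5 2.5 3.5];
if classID >= 1 && classID <= size(priorDims,1)
    meas(5:7) = priorDims(classID,:)';
end
state = zeros(10,1);
state([1 3 5]) = meas(1:3);   % position
state(7:10)    = meas(4:7);   % heading and dimensions
filter = trackingEKF(@cuboidStateTransition,@cuboidMeasurement,state, ...
    'StateCovariance',eye(10), ...
    'MeasurementNoise',detection.MeasurementNoise);
end

function state = cuboidStateTransition(state,dt)
% Constant velocity in x, y, and z; heading and dimensions held constant
F = eye(10);
F(1,2) = dt; F(3,4) = dt; F(5,6) = dt;
state = F*state;
end

function meas = cuboidMeasurement(state)
% Project the state onto the [x y z phi L W H] measurement space
meas = state([1 3 5 7 8 9 10]);
end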

Set up a JPDA tracker object with these parameters.

assignmentGate = [10 100]; % Assignment threshold
confThreshold = [7 10];    % Confirmation threshold for history logic
delThreshold = [2 3];      % Deletion threshold for history logic
Kc = 1e-5;                 % False-alarm rate per unit volume
% IMM filter initialization function
filterInitFcn = @helperMultiClassInitIMMFilter;
% A joint probabilistic data association tracker with IMM filter
tracker = trackerJPDA('FilterInitializationFcn',filterInitFcn,...
    'TrackLogic','History',...
    'AssignmentThreshold',assignmentGate,...
    'ClutterDensity',Kc,...
    'ConfirmationThreshold',confThreshold,...
    'DeletionThreshold',delThreshold,'InitializationThreshold',0);
allTracks = struct([]);
time = 0;
dt = 0.1;
% Define measurement noise for [x y z phi l w h]
measNoise = blkdiag(0.25*eye(3),25,eye(3));
numTracks = zeros(numFrames,2);

The detected objects are assembled as a cell array of objectDetection (Automated Driving Toolbox) objects using the helperAssembleDetections function.

display = helperLidarObjectDetectionDisplay;
initializeDisplay(display);
for count = 1:numFrames
    time = time + dt;
    % Get current data
    input = inputData{count};

    rangeImage = input(:,:,5);

    % Extract bounding boxes from lidar data
    [boundingBox,coloredPtCloud,pointLabels] = detectBbox(lidarDetector,input);

    % Assemble bounding boxes into objectDetections
    detections = helperAssembleDetections(boundingBox,measNoise,time);

    % Pass detections to tracker
    if ~isempty(detections)
        % Update the tracker
        [confirmedTracks,tentativeTracks,allTracks,info] = tracker(detections,time);
        numTracks(count,1) = numel(confirmedTracks);
    end

    % Update display with colored point cloud
    updatePointCloud(display,coloredPtCloud);

    % Update segmented image
    updateSegmentedImage(display,pointLabels,rangeImage);

    % Update the display if the tracks are not empty
    if ~isempty(confirmedTracks)
        updateTracks(display,confirmedTracks);
    end

    drawnow('limitrate');
end
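
numTracks stores the number of confirmed tracks per frame; here is a short sketch (not in the original example) to plot it:

% Sketch: plot the number of confirmed tracks per frame.
figure;
plot(1:numFrames,numTracks(:,1),'LineWidth',1);
xlabel('Frame');
ylabel('Confirmed tracks');
title('Confirmed Tracks per Frame');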

Summary

This example showed how to detect and classify vehicles fitted with oriented bounding boxes on lidar data. You also learned how to use an IMM filter to track objects with multiple class information. The semantic segmentation results can be improved further by adding more training data.

Supporting Functions

helperPrepareData

function multiChannelData = helperPrepareData(input)
% Create 5-channel data as x, y, z, intensity, and range
% of size 64-by-1024-by-5 from the point cloud.
if isa(input, 'cell')
    numFrames = numel(input);
    multiChannelData = cell(1, numFrames);
    for i = 1:numFrames
        inputData = input{i};

        x = inputData.Location(:,:,1);
        y = inputData.Location(:,:,2);
        z = inputData.Location(:,:,3);

        intensity = inputData.Intensity;
        range = sqrt(x.^2 + y.^2 + z.^2);

        multiChannelData{i} = cat(3, x, y, z, intensity, range);
    end
else
    x = input.Location(:,:,1);
    y = input.Location(:,:,2);
    z = input.Location(:,:,3);

    intensity = input.Intensity;
    range = sqrt(x.^2 + y.^2 + z.^2);

    multiChannelData = cat(3, x, y, z, intensity, range);
end
end

helperPixelLabelColorbar

function helperPixelLabelColorbar(cmap, classNames)
% Add a colorbar to the current axis. The colorbar is formatted
% to display the class names with the color.
colormap(gca,cmap)
% Add colorbar to current figure.
c = colorbar('peer', gca);
% Use class names for tick marks.
c.TickLabels = classNames;
numClasses = size(cmap,1);
% Center tick labels.
c.Ticks = 1/(numClasses*2):1/numClasses:1;
% Remove tick mark.
c.TickLength = 0;
end

helperExtractGround

function [ptCloudNonGround,ptCloudGround] = helperExtractGround(ptCloudIn,roi)
% Crop the point cloud
idx = findPointsInROI(ptCloudIn,roi);
pc = select(ptCloudIn,idx,'OutputSize','full');
% Get the ground plane and its indices using piecewise plane fitting
[ptCloudGround,idx] = piecewisePlaneFitting(pc,roi);
nonGroundIdx = true(size(pc.Location,[1,2]));
nonGroundIdx(idx) = false;
ptCloudNonGround = select(pc,nonGroundIdx,'OutputSize','full');
end

function [groundPlane,idx] = piecewisePlaneFitting(ptCloudIn,roi)
groundPtsIdx = ...
    segmentGroundFromLidarData(ptCloudIn, ...
    'ElevationAngleDelta',5,'InitialElevationAngle',15);
groundPC = select(ptCloudIn,groundPtsIdx,'OutputSize','full');
% Divide the x-axis into 3 regions
segmentLength = (roi(2) - roi(1))/3;
x1 = [roi(1),roi(1) + segmentLength];
x2 = [x1(2),x1(2) + segmentLength];
x3 = [x2(2),x2(2) + segmentLength];
roi1 = [x1,roi(3:end)];
roi2 = [x2,roi(3:end)];
roi3 = [x3,roi(3:end)];
idxBack = findPointsInROI(groundPC,roi1);
idxCenter = findPointsInROI(groundPC,roi2);
idxForward = findPointsInROI(groundPC,roi3);
% Break the point clouds in front and back
ptBack = select(groundPC,idxBack,'OutputSize','full');
ptForward = select(groundPC,idxForward,'OutputSize','full');
[~,inliersForward] = planeFit(ptForward);
[~,inliersBack] = planeFit(ptBack);
idx = [inliersForward; idxCenter; inliersBack];
groundPlane = select(ptCloudIn, idx,'OutputSize','full');
end

function [plane,inliersIdx] = planeFit(ptCloudIn)
[~,inliersIdx,~] = pcfitplane(ptCloudIn,1,[0, 0, 1]);
plane = select(ptCloudIn,inliersIdx,'OutputSize','full');
end

helperAssembleDetections

function myDetections = helperAssembleDetections(bboxes,measNoise,timestamp)
% Assemble bounding boxes as a cell array of objectDetection
myDetections = cell(size(bboxes,1),1);
for i = 1:size(bboxes,1)
    classId = bboxes(i,end);
    % Assemble the measurement as [x y z phi l w h]
    lidarModel = [bboxes(i,1:3), bboxes(i,end-1), bboxes(i,4:6)];
    % To avoid direct confirmation by the tracker, the ClassID is passed as
    % ObjectAttributes.
    myDetections{i} = objectDetection(timestamp, ...
        lidarModel','MeasurementNoise',...
        measNoise,'ObjectAttributes',struct('ClassID',classId));
end
end

helperDownloadData

function [lidarData, pretrainedModel] = helperDownloadData
outputFolder = fullfile(tempdir,'WPI');
url = 'https://ssd.mathworks.com/supportfiles/lidar/data/lidarSegmentationAndTrackingData.tar.gz';
lidarDataTarFile = fullfile(outputFolder,'lidarSegmentationAndTrackingData.tar.gz');
if ~exist(lidarDataTarFile,'file')
    mkdir(outputFolder);
    websave(lidarDataTarFile,url);
    untar(lidarDataTarFile,outputFolder);
end
% Check if tar.gz file is downloaded, but not uncompressed
if ~exist(fullfile(outputFolder,'highwayData.mat'),'file')
    untar(lidarDataTarFile,outputFolder);
end
% Load lidar data
data = load(fullfile(outputFolder,'highwayData.mat'));
lidarData = data.ptCloudData;
% Download pretrained model
url = 'https://ssd.mathworks.com/supportfiles/lidar/data/pretrainedPointSegModel.mat';
modelFile = fullfile(outputFolder,'pretrainedPointSegModel.mat');
if ~exist(modelFile,'file')
    websave(modelFile,url);
end
pretrainedModel = load(fullfile(outputFolder,'pretrainedPointSegModel.mat'));
end

References

[1] Zhang, Xiao, Wenda Xu, Chiyu Dong, and John M. Dolan. "Efficient L-Shape Fitting for Vehicle Detection Using Laser Scanners." IEEE Intelligent Vehicles Symposium, June 2017.

[2] Wang, Y., T. Shi, P. Yun, L. Tai, and M. Liu. "PointSeg: Real-Time Semantic Segmentation Based on 3D LiDAR Point Cloud." arXiv preprint arXiv:1807.06288, 2018.
