Track-Level Fusion of Radar and Lidar Data
This example shows you how to generate an object-level track list from measurements of a radar and a lidar sensor and further fuse them using a track-level fusion scheme. You process the radar measurements using an extended object tracker and the lidar measurements using a joint probabilistic data association (JPDA) tracker. You further fuse these tracks using a track-level fusion scheme. The schematic of the workflow is shown below.
For a version of this algorithm that uses recorded data from a rosbag, see the corresponding example in ROS Toolbox.
Setup Scenario for Synthetic Data Generation
The scenario used in this example is created using drivingScenario (Automated Driving Toolbox). The data from the radar and lidar sensors is simulated using drivingRadarDataGenerator (Automated Driving Toolbox) and lidarPointCloudGenerator (Automated Driving Toolbox), respectively. The creation of the scenario and the sensor models is wrapped in the helper function helperCreateRadarLidarScenario. For more information on scenario and synthetic data generation, refer to the drivingScenario documentation (Automated Driving Toolbox).
% For reproducible results
rng(2021);

% Create scenario, ego vehicle and get radars and lidar sensor
[scenario, egoVehicle, radars, lidar] = helperCreateRadarLidarScenario;
The ego vehicle is mounted with four 2-D radar sensors. The front and rear radar sensors have a field of view of 45 degrees. The left and right radar sensors have a field of view of 150 degrees. Each radar has a resolution of 6 degrees in azimuth and 2.5 meters in range. The ego vehicle is also mounted with one 3-D lidar sensor with a field of view of 360 degrees in azimuth and 40 degrees in elevation. The lidar has a resolution of 0.2 degrees in azimuth and 1.25 degrees in elevation (32 elevation channels). Visualize the configuration of the sensors and the simulated sensor data in the animation below. Notice that the radars have higher resolution than objects and therefore return multiple measurements per object. Also notice that the lidar interacts with the low-poly mesh of the actor as well as the road surface to return multiple points from these objects.
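The sensor construction is wrapped inside helperCreateRadarLidarScenario. As a rough sketch only, a single front radar with the resolution and field of view described above could be configured as follows; the mounting location and any values not stated above are illustrative assumptions, not the helper's exact settings.

% Illustrative sketch of one forward-looking radar; the helper function
% creates four such sensors (front, rear, left, right).
frontRadar = drivingRadarDataGenerator('SensorIndex',1, ...
    'MountingLocation',[3.7 0 0.2], ...   % assumed mounting position (m)
    'FieldOfView',[45 5], ...             % 45 deg azimuth field of view
    'AzimuthResolution',6, ...            % 6 deg azimuth resolution
    'RangeResolution',2.5, ...            % 2.5 m range resolution
    'TargetReportFormat','Detections');   % report individual detections, not clusters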
Radar Tracking Algorithm
As mentioned, the radars have higher resolution than the objects and return multiple detections per object. Conventional trackers such as global nearest neighbor (GNN) and joint probabilistic data association (JPDA) assume that the sensors return at most one detection per object per scan. Therefore, the detections from high-resolution sensors must either be clustered before processing them with conventional trackers or be processed using extended object trackers. Extended object trackers do not require pre-clustering of detections and usually estimate both kinematic states (for example, position and velocity) and the extent of the objects. For a more detailed comparison between conventional trackers and extended object trackers, refer to the Extended Object Tracking of Highway Vehicles with Radar and Camera example.
In general, extended object trackers offer better estimation of objects as they handle clustering and data association simultaneously using the temporal history of tracks. In this example, the radar detections are processed using a Gaussian mixture probability hypothesis density (GM-PHD) tracker (trackerPHD and gmphd) with a rectangular target model. For more details on configuring the tracker, refer to the "GM-PHD Rectangular Object Tracker" section of the Extended Object Tracking of Highway Vehicles with Radar and Camera example.
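The tracker configuration itself lives inside the helper class described next, but conceptually it resembles the following sketch. The property values and partitioning thresholds here are illustrative assumptions, not the helper's exact settings.

% Sketch of a GM-PHD tracker with a rectangular target model.
% One trackingSensorConfiguration is needed per radar sensor.
sensorConfigs = cell(numel(radars),1);
for i = 1:numel(radars)
    sensorConfigs{i} = trackingSensorConfiguration(i, ...
        'FilterInitializationFcn',@initctrectgmphd, ... % rectangular GM-PHD filter
        'SensorTransformFcn',@ctrectcorners);           % expected corner measurements
end
tracker = trackerPHD('SensorConfigurations',sensorConfigs, ...
    'PartitioningFcn',@(dets)partitionDetections(dets,1.5,6)); % clustering hypotheses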
The algorithm for tracking objects using radar measurements is wrapped inside the helper class, helperRadarTrackingAlgorithm, implemented as a System object™. This class outputs an array of objectTrack objects and defines their state according to the following convention:

[x  y  s  θ  ω  L  W]

where x and y are the position coordinates of the object's center (m), s is its speed (m/s), θ is its yaw angle (deg), ω is its yaw rate (deg/s), and L and W are its length and width (m). This is the state ordering used by the helperRadarDistance function at the end of this example.
radarTrackingAlgorithm = helperRadarTrackingAlgorithm(radars);
Lidar Tracking Algorithm
Similar to radars, the lidar sensor also returns multiple measurements per object. Further, the sensor returns a large number of points from the road, which must be removed before the data is used as input to an object-tracking algorithm. While lidar data from obstacles can be directly processed via an extended object tracking algorithm, conventional tracking algorithms are still more prevalent for tracking using lidar data. The first reason for this trend is the higher computational complexity of extended object trackers for large data sets. The second reason is the investment into advanced deep learning-based detectors such as PointPillars [1], VoxelNet [2], and PIXOR [3], which can segment a point cloud and return bounding box detections for the vehicles. These detectors can help in overcoming the performance degradation of conventional trackers due to improper clustering.
In this example, the lidar data is processed using a conventional joint probabilistic data association (JPDA) tracker, configured with an interacting multiple model (IMM) filter. The pre-processing of lidar data to remove points from the ground is performed using a RANSAC-based plane-fitting algorithm, and bounding boxes are formed by performing a Euclidean distance-based clustering algorithm. For more information about the algorithm, refer to the Track Vehicles Using Lidar: From Point Cloud to Track List example. Compared to the linked example, the tracking is performed in the scenario frame and the tracker is tuned differently to track objects of different sizes. Further, the states of the variables are defined differently to constrain the motion of the tracks in the direction of their estimated heading angle.
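The exact pre-processing is part of the helper class, but the two steps described above can be sketched as follows. This is only an outline using common point-cloud functions; the threshold values are illustrative assumptions and not the helper's exact implementation.

% Sketch of the lidar pre-processing: RANSAC ground-plane removal followed
% by Euclidean distance-based clustering of the remaining points.
[~,~,obstacleIdx] = pcfitplane(ptCloud,0.2,[0 0 1]); % ~0.2 m plane tolerance
obstaclePoints = select(ptCloud,obstacleIdx);
labels = pcsegdist(obstaclePoints,1.5);              % 1.5 m cluster distance
% Each labeled cluster is then fit with an oriented bounding box to form a
% detection for the JPDA tracker.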
The algorithm for tracking objects using lidar data is wrapped inside the helper class, helperLidarTrackingAlgorithm, implemented as a System object. This class outputs an array of objectTrack objects and defines their state according to the following convention:

[x  y  s  θ  ω  z  ż  L  W  H]
The states common to the radar algorithm are defined similarly. Also, as a 3-D sensor, the lidar tracker outputs three additional states, z, ż, and H, which refer to the z-coordinate (m), z-velocity (m/s), and height (m) of the tracked object, respectively.
lidarTrackingAlgorithm = helperLidarTrackingAlgorithm(lidar);
Set Up Fuser, Metrics, and Visualization
Fuser
Next, you set up a fusion algorithm for fusing the lists of tracks from the radar and lidar trackers. Similar to other tracking algorithms, the first step towards setting up a track-level fusion algorithm is defining the choice of state vector (or state-space) for the fused or central tracks. In this case, the state-space for fused tracks is chosen to be the same as that of the lidar tracks. After choosing a central track state-space, you define the transformation of the central track state to the local track state. In this case, the local track state-space refers to the states of radar and lidar tracks. To do this, you use a fuserSourceConfiguration object.
Define the configuration of the radar source. The helperRadarTrackingAlgorithm outputs tracks with SourceIndex set to 1. The SourceIndex is provided as a property on each tracker to uniquely identify it and allows a fusion algorithm to distinguish tracks from different sources. Therefore, you set the SourceIndex property of the radar configuration the same as that of the radar tracks. You set IsInitializingCentralTracks to true to let unassigned radar tracks initiate new central tracks. Next, you define the transformation of a track in the central state-space to the radar state-space and vice-versa. The helper functions central2radar and radar2central perform the two transformations and are included at the end of this example.
radarConfig = fuserSourceConfiguration('SourceIndex',1,...
    'IsInitializingCentralTracks',true,...
    'CentralToLocalTransformFcn',@central2radar,...
    'LocalToCentralTransformFcn',@radar2central);
Define the configuration of the lidar source. Since the state-space of a lidar track is the same as that of a central track, you do not define any transformations.
lidarConfig = fuserSourceConfiguration('SourceIndex',2,...
    'IsInitializingCentralTracks',true);
The next step is to define the state-fusion algorithm. The state-fusion algorithm takes multiple states and state covariances in the central state-space as input and returns a fused estimate of the state and the covariance. In this example, you use a covariance intersection algorithm provided by the helper function helperRadarLidarFusionFcn. A generic covariance intersection algorithm for two Gaussian estimates with means $x_1$, $x_2$ and covariances $P_1$, $P_2$ can be defined according to the following equations:

$$P_F^{-1} = w_1 P_1^{-1} + w_2 P_2^{-1}$$

$$x_F = P_F \left( w_1 P_1^{-1} x_1 + w_2 P_2^{-1} x_2 \right)$$

where $x_F$ and $P_F$ are the fused state and covariance, and $w_1$ and $w_2$ are mixing coefficients from each estimate. Typically, these mixing coefficients are estimated by minimizing the determinant or the trace of the fused covariance. In this example, the mixing weights are estimated by minimizing the determinant of the positional covariance of each estimate. Furthermore, as the radar does not estimate 3-D states, 3-D states are only fused with lidar estimates. For more details, refer to the helperRadarLidarFusionFcn function shown at the end of this script.
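For reference, the way the helper chooses the weights from the positional covariances (see the fusecovintusingpos function at the end of this example) can be written as

$$w_1 = \frac{\det(P_{2,\mathrm{pos}})}{\det(P_{1,\mathrm{pos}}) + \det(P_{2,\mathrm{pos}})}, \qquad w_2 = 1 - w_1$$

so the estimate with the smaller positional covariance determinant receives the larger weight.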
Next, you assemble all the information using a trackFuser object.
% The state-space of central tracks is same as the tracks from the lidar,
% therefore you use the same state transition function. The function is
% defined inside the helperLidarTrackingAlgorithm class.
f = lidarTrackingAlgorithm.StateTransitionFcn;

% Create a trackFuser object
fuser = trackFuser('SourceConfigurations',{radarConfig;lidarConfig},...
    'StateTransitionFcn',f,...
    'ProcessNoise',diag([1 3 1]),...
    'HasAdditiveProcessNoise',false,...
    'AssignmentThreshold',[250 inf],...
    'ConfirmationThreshold',[3 5],...
    'DeletionThreshold',[5 5],...
    'StateFusion','Custom',...
    'CustomStateFusionFcn',@helperRadarLidarFusionFcn);
Metrics
In this example, you assess the performance of each algorithm using the Generalized Optimal SubPattern Assignment (GOSPA) metric. You set up three separate metrics using trackGOSPAMetric, one for each of the trackers. The GOSPA metric aims to evaluate the performance of a tracking system by providing a scalar cost. A lower value of the metric indicates better performance of the tracking algorithm.
To use the GOSPA metric with custom motion models like the one used in this example, you set the Distance property to 'custom' and define a distance function between a track and its associated ground truth. These distance functions, shown at the end of this example, are helperRadarDistance and helperLidarDistance.
% Radar GOSPA
gospaRadar = trackGOSPAMetric('Distance','custom',...
    'DistanceFcn',@helperRadarDistance,...
    'CutoffDistance',25);

% Lidar GOSPA
gospaLidar = trackGOSPAMetric('Distance','custom',...
    'DistanceFcn',@helperLidarDistance,...
    'CutoffDistance',25);

% Central/fused GOSPA
gospaCentral = trackGOSPAMetric('Distance','custom',...
    'DistanceFcn',@helperLidarDistance,... % State-space is same as lidar
    'CutoffDistance',25);
Visualization
The visualization for this example is implemented using a helper class, helperLidarRadarTrackFusionDisplay. The display is divided into four panels. The display plots the measurements and tracks from each sensor as well as the fused track estimates. The legend for the display is shown below. Furthermore, the tracks are annotated by their unique identity (TrackID) as well as a prefix. The prefixes "R", "L", and "F" stand for radar, lidar, and fused estimate, respectively.
% Create a display.
% FollowActorID controls the actor shown in the close-up display
display = helperLidarRadarTrackFusionDisplay('FollowActorID',3);

% Show persistent legend
showLegend(display,scenario);
Run Scenario and Trackers
Next, you advance the scenario, generate synthetic data from all sensors, and process it to generate tracks from each of the systems. You also compute the metric for each tracker using the ground truth available from the scenario.
% Initialize GOSPA metric and its components for all tracking algorithms.
gospa = zeros(3,0);
missTarget = zeros(3,0);
falseTracks = zeros(3,0);

% Initialize fusedTracks
fusedTracks = objectTrack.empty(0,1);

% A counter for time steps elapsed for storing GOSPA metrics.
idx = 1;

% Ground truth for metrics. This variable updates every time-step
% automatically, being a handle to the actors.
groundTruth = scenario.Actors(2:end);

while advance(scenario)
    % Current time
    time = scenario.SimulationTime;

    % Collect radar and lidar measurements and ego pose to track in
    % scenario frame. See helperCollectSensorData below.
    [radarDetections, ptCloud, egoPose] = helperCollectSensorData(egoVehicle, radars, lidar, time);

    % Generate radar tracks
    radarTracks = radarTrackingAlgorithm(egoPose, radarDetections, time);

    % Generate lidar tracks and analysis information like bounding box
    % detections and point cloud segmentation information
    [lidarTracks, lidarDetections, segmentationInfo] = ...
        lidarTrackingAlgorithm(egoPose, ptCloud, time);

    % Concatenate radar and lidar tracks
    localTracks = [radarTracks;lidarTracks];

    % Update the fuser. First call must contain one local track
    if ~(isempty(localTracks) && ~isLocked(fuser))
        fusedTracks = fuser(localTracks,time);
    end

    % Capture GOSPA and its components for all trackers
    [gospa(1,idx),~,~,~,missTarget(1,idx),falseTracks(1,idx)] = gospaRadar(radarTracks, groundTruth);
    [gospa(2,idx),~,~,~,missTarget(2,idx),falseTracks(2,idx)] = gospaLidar(lidarTracks, groundTruth);
    [gospa(3,idx),~,~,~,missTarget(3,idx),falseTracks(3,idx)] = gospaCentral(fusedTracks, groundTruth);

    % Update the display
    display(scenario, radars, radarDetections, radarTracks, ...
        lidar, ptCloud, lidarDetections, segmentationInfo, lidarTracks,...
        fusedTracks);

    % Update the index for storing GOSPA metrics
    idx = idx + 1;
end

% Update example animations
updateExampleAnimations(display);
Evaluate Performance
Evaluate the performance of each tracker using visualization as well as quantitative metrics. Analyze different events in the scenario and understand how the track-level fusion scheme helps achieve a better estimation of the vehicle state.
Track Maintenance
The animation below shows the entire run every three time-steps. Note that each of the three tracking systems - radar, lidar, and the track-level fusion - was able to track all four vehicles in the scenario, and no false tracks were confirmed.
You can also quantitatively measure this aspect of the performance using the "missed target" and "false track" components of the GOSPA metric. Notice in the figures below that the missed target component starts from a higher value due to establishment delay and goes down to zero in about 5-10 steps for each tracking system. Also, notice that the false track component is zero for all systems, which indicates that no false tracks were confirmed.
% Plot missed target component
figure;
plot(missTarget','LineWidth',2);
legend('Radar','Lidar','Fused');
title("Missed Target Metric");
xlabel('Time step');
ylabel('Metric');
grid on;

% Plot false track component
figure;
plot(falseTracks','LineWidth',2);
legend('Radar','Lidar','Fused');
title("False Track Metric");
xlabel('Time step');
ylabel('Metric');
grid on;
Track-Level Accuracy
The track-level or localization accuracy of each tracker can also be quantitatively assessed by the GOSPA metric at each time step. A lower value indicates better tracking accuracy. As there were no missed targets or false tracks, the metric captures the localization errors resulting from the state estimation of each vehicle.

Note that the GOSPA metric for the fused estimates is lower than the metric for each individual sensor, which indicates that track accuracy increased after fusion of the track estimates from each sensor.
% Plot GOSPA
figure;
plot(gospa','LineWidth',2);
legend('Radar','Lidar','Fused');
title("GOSPA Metric");
xlabel('Time step');
ylabel('Metric');
grid on;
Closely-Spaced Targets
As mentioned earlier, this example uses Euclidean distance-based clustering and bounding box fitting to feed the lidar data to a conventional tracking algorithm. Clustering algorithms typically suffer when objects are closely spaced. With the detector configuration used in this example, when the passing vehicle approaches the vehicle in front of the ego vehicle, the detector clusters the point cloud from each vehicle into a bigger bounding box. You can notice in the animation below that the track drifted away from the vehicle center. Because the track was reported with higher certainty in its estimate for a few steps, the fused estimate was also affected initially. However, as its uncertainty increases, its association with the fused estimate becomes weaker. This is because the covariance intersection algorithm chooses a mixing weight for each assigned track based on the certainty of each estimate.
This effect is also captured in the GOSPA metric. You can notice in the GOSPA metric plot above that the lidar metric shows a peak around the 65th time step.
The radar tracks are not affected during this event because of two main reasons. First, the radar sensor outputs range-rate information in each detection, which differs beyond noise levels between the passing car and the slower moving car. This results in an increased statistical distance between detections from the individual cars. Second, extended object trackers evaluate multiple possible clustering hypotheses against predicted tracks, which results in rejection of improper clusters and acceptance of proper clusters. Note that for extended object trackers to properly choose the best clusters, the filter for the track must be robust to a degree that can capture the difference between two clusters. For example, a track with high process noise and highly uncertain dimensions may not be able to properly claim a cluster because of its premature age and higher flexibility to account for uncertain events.
Targets at Long Range
As targets recede from the radar sensors, the accuracy of the measurements degrades because of the reduced signal-to-noise ratio at the detector and the limited resolution of the sensor. This results in high uncertainty in the measurements, which in turn reduces the track accuracy. Notice in the close-up display below that the track estimate from the radar is farther away from the ground truth and is reported with a higher uncertainty. However, the lidar sensor reports enough measurements in the point cloud to generate a "shrunk" bounding box. The shrinkage effect modeled in the measurement model of the lidar tracking algorithm allows the tracker to maintain a track with correct dimensions. In such situations, the lidar mixing weight is higher than that of the radar, which allows the fused estimate to be more accurate than the radar estimate.
Summary
In this example, you learned how to set up a track-level fusion algorithm for fusing tracks from radar and lidar sensors. You also learned how to evaluate a tracking algorithm using the Generalized Optimal SubPattern Assignment (GOSPA) metric and its associated components.
Utility Functions
helperCollectSensorData
A function to generate radar and lidar measurements at the current time-step.
function [radarDetections, ptCloud, egoPose] = helperCollectSensorData(egoVehicle, radars, lidar, time)

% Current poses of targets with respect to ego vehicle
tgtPoses = targetPoses(egoVehicle);

radarDetections = cell(0,1);
for i = 1:numel(radars)
    thisRadarDetections = step(radars{i},tgtPoses,time);
    radarDetections = [radarDetections;thisRadarDetections]; %#ok<AGROW>
end

% Generate point cloud from lidar
rdMesh = roadMesh(egoVehicle);
ptCloud = step(lidar, tgtPoses, rdMesh, time);

% Compute pose of ego vehicle to track in scenario frame. Typically
% obtained using an INS system. If unavailable, this can be set to
% "origin" to track in ego vehicle's frame.
egoPose = pose(egoVehicle);
end
radar2central

A function to transform a track in the radar state-space to a track in the central state-space.
function centralTrack = radar2central(radarTrack)

% Initialize a track of the correct state size
centralTrack = objectTrack('State',zeros(10,1),...
    'StateCovariance',eye(10));

% Sync properties of centralTrack except State and StateCovariance with
% radarTrack. See syncTrack defined below.
centralTrack = syncTrack(centralTrack,radarTrack);

xRadar = radarTrack.State;
PRadar = radarTrack.StateCovariance;

H = zeros(10,7); % Radar to central linear transformation matrix
H(1,1) = 1;
H(2,2) = 1;
H(3,3) = 1;
H(4,4) = 1;
H(5,5) = 1;
H(8,6) = 1;
H(9,7) = 1;

xCentral = H*xRadar;    % Linear state transformation
PCentral = H*PRadar*H'; % Linear covariance transformation
PCentral([6 7 10],[6 7 10]) = eye(3); % Unobserved states

% Set state and covariance of central track
centralTrack.State = xCentral;
centralTrack.StateCovariance = PCentral;
end
central2radar

A function to transform a track in the central state-space to a track in the radar state-space.
function radarTrack = central2radar(centralTrack)

% Initialize a track of the correct state size
radarTrack = objectTrack('State',zeros(7,1),...
    'StateCovariance',eye(7));

% Sync properties of radarTrack except State and StateCovariance with
% centralTrack. See syncTrack defined below.
radarTrack = syncTrack(radarTrack,centralTrack);

xCentral = centralTrack.State;
PCentral = centralTrack.StateCovariance;

H = zeros(7,10); % Central to radar linear transformation matrix
H(1,1) = 1;
H(2,2) = 1;
H(3,3) = 1;
H(4,4) = 1;
H(5,5) = 1;
H(6,8) = 1;
H(7,9) = 1;

xRadar = H*xCentral;    % Linear state transformation
PRadar = H*PCentral*H'; % Linear covariance transformation

% Set state and covariance of radar track
radarTrack.State = xRadar;
radarTrack.StateCovariance = PRadar;
end
syncTrack

A function to sync the properties of one track with those of another track, except the State and StateCovariance properties.
function tr1 = syncTrack(tr1,tr2)
props = properties(tr1);
notState = ~strcmpi(props,'State');
notCov = ~strcmpi(props,'StateCovariance');
props = props(notState & notCov);
for i = 1:numel(props)
    tr1.(props{i}) = tr2.(props{i});
end
end
pose

A function to return the pose of the ego vehicle as a structure.
function egoPose = pose(egoVehicle)
egoPose.Position = egoVehicle.Position;
egoPose.Velocity = egoVehicle.Velocity;
egoPose.Yaw = egoVehicle.Yaw;
egoPose.Pitch = egoVehicle.Pitch;
egoPose.Roll = egoVehicle.Roll;
end
helperLidarDistance

Function to calculate a normalized distance between the estimate of a track in the lidar state-space and the assigned ground truth.
function dist = helperLidarDistance(track, truth)

% Calculate the actual values of the states estimated by the tracker

% Center is different than origin and the trackers estimate the center
rOriginToCenter = -truth.OriginOffset(:) + [0;0;truth.Height/2];
rot = quaternion([truth.Yaw truth.Pitch truth.Roll],'eulerd','ZYX','frame');
actPos = truth.Position(:) + rotatepoint(rot,rOriginToCenter')';

% Actual speed and z-rate
actVel = [norm(truth.Velocity(1:2));truth.Velocity(3)];

% Actual yaw
actYaw = truth.Yaw;

% Actual dimensions.
actDim = [truth.Length;truth.Width;truth.Height];

% Actual yaw rate
actYawRate = truth.AngularVelocity(3);

% Calculate error in each estimate weighted by the "requirements" of the
% system. The distance is specified using Mahalanobis distance in each aspect
% of the estimate, where the covariance is defined by the "requirements". This
% helps to avoid skewed distances when tracks under/over report their
% uncertainty because of inaccuracies in state/measurement models.

% Positional error.
estPos = track.State([1 2 6]);
reqPosCov = 0.1*eye(3);
e = estPos - actPos;
d1 = sqrt(e'/reqPosCov*e);

% Velocity error
estVel = track.State([3 7]);
reqVelCov = 5*eye(2);
e = estVel - actVel;
d2 = sqrt(e'/reqVelCov*e);

% Yaw error
estYaw = track.State(4);
reqYawCov = 5;
e = estYaw - actYaw;
d3 = sqrt(e'/reqYawCov*e);

% Yaw-rate error
estYawRate = track.State(5);
reqYawRateCov = 1;
e = estYawRate - actYawRate;
d4 = sqrt(e'/reqYawRateCov*e);

% Dimension error
estDim = track.State([8 9 10]);
reqDimCov = eye(3);
e = estDim - actDim;
d5 = sqrt(e'/reqDimCov*e);

% Total distance
dist = d1 + d2 + d3 + d4 + d5;
end
helperRadarDistance

Function to calculate a normalized distance between the estimate of a track in the radar state-space and the assigned ground truth.
function dist = helperRadarDistance(track, truth)

% Calculate the actual values of the states estimated by the tracker

% Center is different than origin and the trackers estimate the center
rOriginToCenter = -truth.OriginOffset(:) + [0;0;truth.Height/2];
rot = quaternion([truth.Yaw truth.Pitch truth.Roll],'eulerd','ZYX','frame');
actPos = truth.Position(:) + rotatepoint(rot,rOriginToCenter')';
actPos = actPos(1:2); % Only 2-D

% Actual speed
actVel = norm(truth.Velocity(1:2));

% Actual yaw
actYaw = truth.Yaw;

% Actual dimensions. Only 2-D for radar
actDim = [truth.Length;truth.Width];

% Actual yaw rate
actYawRate = truth.AngularVelocity(3);

% Calculate error in each estimate weighted by the "requirements" of the
% system. The distance is specified using Mahalanobis distance in each aspect
% of the estimate, where the covariance is defined by the "requirements". This
% helps to avoid skewed distances when tracks under/over report their
% uncertainty because of inaccuracies in state/measurement models.

% Positional error
estPos = track.State([1 2]);
reqPosCov = 0.1*eye(2);
e = estPos - actPos;
d1 = sqrt(e'/reqPosCov*e);

% Speed error
estVel = track.State(3);
reqVelCov = 5;
e = estVel - actVel;
d2 = sqrt(e'/reqVelCov*e);

% Yaw error
estYaw = track.State(4);
reqYawCov = 5;
e = estYaw - actYaw;
d3 = sqrt(e'/reqYawCov*e);

% Yaw-rate error
estYawRate = track.State(5);
reqYawRateCov = 1;
e = estYawRate - actYawRate;
d4 = sqrt(e'/reqYawRateCov*e);

% Dimension error
estDim = track.State([6 7]);
reqDimCov = eye(2);
e = estDim - actDim;
d5 = sqrt(e'/reqDimCov*e);

% Total distance
dist = d1 + d2 + d3 + d4 + d5;

% A constant penalty for not measuring 3-D state
dist = dist + 3;
end
helperRadarLidarFusionFcn

Function to fuse states and state covariances in the central track state-space.
function [x,P] = helperRadarLidarFusionFcn(xAll,PAll)
n = size(xAll,2);
dets = zeros(n,1);

% Initialize x and P
x = xAll(:,1);
P = PAll(:,:,1);

onlyLidarStates = false(10,1);
onlyLidarStates([6 7 10]) = true;

% Only fuse this information with lidar
xOnlyLidar = xAll(onlyLidarStates,:);
POnlyLidar = PAll(onlyLidarStates,onlyLidarStates,:);

% States and covariances for intersection with radar and lidar both
xToFuse = xAll(~onlyLidarStates,:);
PToFuse = PAll(~onlyLidarStates,~onlyLidarStates,:);

% Sorted order of determinants. This helps to sequentially build the
% covariance with comparable determinants. For example, two large
% covariances may intersect to a smaller covariance, which is comparable to
% the third smallest covariance.
for i = 1:n
    dets(i) = det(PToFuse(1:2,1:2,i));
end
[~,idx] = sort(dets,'descend');
xToFuse = xToFuse(:,idx);
PToFuse = PToFuse(:,:,idx);

% Initialize fused estimate
thisX = xToFuse(:,1);
thisP = PToFuse(:,:,1);

% Sequential fusion
for i = 2:n
    [thisX,thisP] = fusecovintusingpos(thisX, thisP, xToFuse(:,i), PToFuse(:,:,i));
end

% Assign fused states from all sources
x(~onlyLidarStates) = thisX;
P(~onlyLidarStates,~onlyLidarStates,:) = thisP;

% Fuse some states only with lidar source
valid = any(abs(xOnlyLidar) > 1e-6,1);
xMerge = xOnlyLidar(:,valid);
PMerge = POnlyLidar(:,:,valid);

if sum(valid) > 1
    [xL,PL] = fusecovint(xMerge,PMerge);
elseif sum(valid) == 1
    xL = xMerge;
    PL = PMerge;
else
    xL = zeros(3,1);
    PL = eye(3);
end

x(onlyLidarStates) = xL;
P(onlyLidarStates,onlyLidarStates) = PL;
end

function [x,P] = fusecovintusingpos(x1,P1,x2,P2)
% Covariance intersection in general is employed by the following
% equations:
% P^-1 = w1*P1^-1 + w2*P2^-1
% x = P*(w1*P1^-1*x1 + w2*P2^-1*x2);
% where w1 + w2 = 1
% Usually a scalar representative of the covariance matrix like "det" or
% "trace" of P is minimized to compute w. This is offered by the function
% "fusecovint". However, in this case, the w are chosen by minimizing the
% determinants of "positional" covariances only.
n = size(x1,1);
idx = [1 2];
detP1pos = det(P1(idx,idx));
detP2pos = det(P2(idx,idx));
w1 = detP2pos/(detP1pos + detP2pos);
w2 = detP1pos/(detP1pos + detP2pos);
I = eye(n);

P1inv = I/P1;
P2inv = I/P2;

Pinv = w1*P1inv + w2*P2inv;
P = I/Pinv;

x = P*(w1*P1inv*x1 + w2*P2inv*x2);
end
References
[1] Lang, Alex H., et al. "PointPillars: Fast Encoders for Object Detection from Point Clouds." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.

[2] Zhou, Yin, and Oncel Tuzel. "VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.

[3] Yang, Bin, Wenjie Luo, and Raquel Urtasun. "PIXOR: Real-Time 3D Object Detection from Point Clouds." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.