
lane detection

this example shows how to implement a lane-marking detection algorithm for fpgas.

lane detection is a critical processing stage in advanced driver assistance systems (adas). automatically detecting lane boundaries from a video stream is computationally challenging, and therefore hardware accelerators such as fpgas and gpus are often required to achieve real-time performance.

in this example model, an fpga-based lane candidate generator is coupled with a software-based polynomial fitting engine to determine lane boundaries.

download input file

this example uses the visionhdl_caltech.avi file as an input. the file is approximately 19 mb in size. download the file from the mathworks website and unzip the downloaded file.

lanezipfile = matlab.internal.examples.downloadSupportFile('visionhdl_hdlcoder','caltech_dataset.zip');
[outputfolder,~,~] = fileparts(lanezipfile);
unzip(lanezipfile,outputfolder);
caltechvideofile = fullfile(outputfolder,'caltech_dataset');
addpath(caltechvideofile);

system overview

the system is shown below. the hdllanedetector subsystem represents the hardware-accelerated part of the design, while the swlanefitandoverlay subsystem represents the software-based polynomial fitting engine. prior to the frame to pixels block, the rgb input is converted to intensity color space.

modelname = 'lanedetectionhdl';
open_system(modelname);
set_param(modelname,'SampleTimeColors','on');
set_param(modelname,'SimulationCommand','update');
set_param(modelname,'Open','on');
set(allchild(0),'Visible','off');

hdl lane detector

the hdl lane detector represents the hardware-accelerated part of the design. this subsystem receives the input pixel stream from the front-facing camera source, transforms the view to obtain the birds-eye view, locates lane marking candidates from the transformed view and then buffers them up into a vector to send to the software side for curve fitting and overlay.

set_param(modelname, 'SampleTimeColors', 'off');
open_system([modelname '/hdllanedetector'],'force');

birds-eye view

the birds-eye view block transforms the front-facing camera view to a birds-eye perspective. working with the images in this view simplifies the processing requirements of the downstream lane detection algorithms. the front-facing view suffers from perspective distortion, causing the lanes to converge at the vanishing point. the perspective distortion is corrected by applying an inverse perspective transform.

the inverse perspective mapping (ipm) is given by the following expression:

$$(\hat{x},\hat{y}) = \mathrm{round}\left(\frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + h_{33}}, \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + h_{33}}\right)$$

the homography matrix, h, is derived from four intrinsic parameters of the physical camera setup, namely the focal length, pitch, height, and principal point (from a pinhole camera model). for more details, refer to the computer vision toolbox™ documentation.
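
as a concrete illustration, the ipm expression can be evaluated directly in matlab. the sketch below applies it to a single pixel; the homography values are placeholders chosen for illustration only, not parameters of the camera used in this example.

% minimal sketch: evaluate the ipm expression for one pixel using an
% illustrative (not calibrated) homography matrix h
h = [0.9 -1.4 300; 0 -2.6 510; 0 -0.0045 1];     % placeholder values
x = 320; y = 240;                                % source-frame pixel
w    = h(3,1)*x + h(3,2)*y + h(3,3);             % common denominator
xhat = round((h(1,1)*x + h(1,2)*y + h(1,3))/w);  % mapped column
yhat = round((h(2,1)*x + h(2,2)*y + h(2,3))/w);  % mapped row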

you can estimate the homography matrix by using the computer vision toolbox™ estgeotform2d function or the image processing toolbox™ fitgeotform2d function to create a projtform2d object. these functions require a set of matched points between the source frame and birds-eye view frame. the source frame points are taken as the vertices of a trapezoidal region of interest, and can extend past the source frame limits to capture a larger region. for the trapezoid shown the point mapping is:

$$sourcepoints = [c_{x},c_{y};\ d_{x},d_{y};\ a_{x},a_{y};\ b_{x},b_{y}]$$

$$birdseyepoints = [1,1;\ bappl,1;\ 1,bavl;\ bappl,bavl]$$

where bappl and bavl are the birds-eye view active pixels per line and active video lines respectively.

direct evaluation of the source (front-facing) to destination (birds-eye) mapping in real time on fpga/asic hardware is challenging. the requirement for division, along with the potential for non-sequential memory access from a frame buffer, means that the computational requirements of this part of the design are substantial. therefore, instead of directly evaluating the ipm calculation in real time, an offline analysis of the input-to-output mapping has been performed and used to pre-compute a mapping scheme. this is possible because the homography matrix is fixed after factory calibration/installation of the camera, since the camera position, height, and pitch are fixed.
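
one possible offline scheme is to precompute, for every birds-eye output pixel, the source pixel address that it maps to, and store the result as a lookup table. the sketch below shows this idea in matlab. it reuses the point mapping from the script in the next section; the nearest-neighbour rounding and the fixed frame dimensions are simplifications rather than the scheme used in the model.

% minimal sketch: precompute a source-address lookup table offline so that
% no division or per-pixel ipm evaluation is needed at run time
sourcepoints   = [218,196; 421,196; -629,405; 1276,405];
birdseyepoints = [1,1; 640,1; 1,900; 640,900];
tf = estgeotform2d(sourcepoints,birdseyepoints,'projective');
bavl = 700; bappl = 640;                          % birds-eye frame size
[xd,yd] = meshgrid(1:bappl,1:bavl);               % destination (birds-eye) grid
[xs,ys] = transformPointsInverse(tf,xd,yd);       % corresponding source coordinates
srcrow = round(ys); srccol = round(xs);           % nearest-neighbour source addresses
valid  = srcrow >= 1 & srcrow <= 480 & srccol >= 1 & srccol <= 640;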

in this particular example, the birds-eye output image is a frame of [700x640] dimensions, whereas the front-facing input image is of [480x640] dimensions. there is not sufficient blanking available to output the full birds-eye frame before the next front-facing camera input is streamed in. the birds-eye view block therefore does not accept any new frame data until it has finished processing the current birds-eye frame.


line buffering and address computation

a full-sized projective transformation from input to output would result in a [900x640] output image. this requires that the full [480x640] input image is stored in memory, while the source pixel location is calculated from the output pixel location and the homography matrix. ideally, on-chip memory should be used for this purpose, removing the requirement for an off-chip frame buffer.

you can determine the number of lines to buffer on-chip by performing inverse row mapping using the homography matrix. the following script calculates the homography matrix from the point mapping and then uses an inverse transform to map the birds-eye view rows back to source frame rows.

% source & birds-eye frame parameters
%   avl:  active video lines, appl: active pixels per line
savl  = 480;
sappl = 640;
% birds-eye frame
bavl  = 700;
bappl = 640;
% determine homography matrix
%   point mapping [nw; ne; sw; se]
sourcepoints   = [218,196; 421,196; -629,405; 1276,405];
birdseyepoints = [001,001; 640,001;  001,900;  640,900];
%   estimate transform
tf = estgeotform2d(sourcepoints,birdseyepoints,'projective');
%   homography matrix
h = tf.A;
% visualize birds-eye roi on source frame
vidobj   = VideoReader('visionhdl_caltech.avi');
vidframe = readFrame(vidobj);
vidframeannotated = insertShape(vidframe,'Polygon',[sourcepoints(1,:) ...
    sourcepoints(2,:) sourcepoints(4,:) sourcepoints(3,:)],           ...
    'LineWidth',5,'Color','red');
vidframeannotated = insertShape(vidframeannotated,'FilledPolygon',    ...
    [sourcepoints(1,:) sourcepoints(2,:) sourcepoints(4,:)            ...
    sourcepoints(3,:)],'LineWidth',5,'Color','red','Opacity',0.2);
figure(1);
subplot(2,1,1);
imshow(vidframeannotated)
title('source video frame');
% determine required birds-eye line buffer depth
%   inverse row mapping at frame centre
x = round(sourcepoints(2,1)-((sourcepoints(2,1)-sourcepoints(1,1))/2));
y = zeros(1,bavl);
for ii = 1:1:bavl
    [~,y(ii)] = transformPointsInverse(tf,x,ii);
end
numrequiredrows = ceil(y(0.98*bavl) - y(1));
% visualize inverse row mapping
subplot(2,1,2);
plot(y,'HandleVisibility','off');   % inverse row mapping
xline(0.98*bavl,'r','98%','LabelHorizontalAlignment','left',          ...
    'HandleVisibility','off');      % line buffer depth
yline(y(1),'r--','HandleVisibility','off')
yline(y(0.98*bavl),'r')
title('birds-eye view inverse row mapping');
xlabel('output row');
ylabel('input row');
legend(['line buffer depth: ',num2str(numrequiredrows),' lines'],     ...
    'Location','northwest');
axis equal;
grid on;

the plot shows the mapping of input lines to output lines, revealing that in order to generate the first 700 lines of the top-down birds-eye output image, around 50 lines of the input image are required. this is an acceptable number of lines to store using on-chip memory.

lane detection

with the birds-eye view image obtained, the actual lane detection can be performed. there are many techniques which can be considered for this purpose. to achieve an implementation which is robust, works well on streaming image data and which can be implemented in fpga/asic hardware at reasonable resource cost, this example uses the approach described in [1]. this algorithm performs a full image convolution with a vertically oriented first order gaussian derivative filter kernel, followed by sub-region processing.

open_system([modelname '/hdllanedetector/lanedetection'],'force');

vertically oriented filter convolution

immediately following the birds-eye mapping of the input image, the output is convolved with a filter designed to locate strips of high intensity pixels on a dark background. the width of the kernel is 8 pixels, which relates to the width of the lines that appear in the birds-eye image. the height is set to 16, which relates to the size of the dashed lane markings that appear in the image. because the birds-eye image is physically related to the height, pitch, and other parameters of the camera, the width at which lanes appear in this image is intrinsically related to the physical lane width on the road. the width and height of the kernel may need to be updated when operating the lane detection system in different countries.

the output of the filter kernel is shown below, using jet colormap to highlight differences in intensity. because the filter kernel is a general, vertically oriented gaussian derivative, there is some response from many different regions. however, for the locations where a lane marking is present, there is a strong positive response located next to a strong negative response, which is consistent across columns. this characteristic of the filter output is used in the next stage of the detection algorithm to locate valid lane candidates.
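
the sketch below constructs one plausible 16x8 kernel of this type (a gaussian derivative across the width and gaussian smoothing down the height) and visualizes its response with the jet colormap. the sigma values are assumptions, and the birds-eye image is approximated here with imwarp using the transform tf from the line-buffering script above; this is not necessarily the kernel used inside the model.

% minimal sketch: build a vertically oriented first-order gaussian derivative
% kernel (16x8) and view its response on an approximate birds-eye image
kwidth = 8; kheight = 16;
gx = -linspace(-2,2,kwidth) .* exp(-linspace(-2,2,kwidth).^2/2);   % derivative across width
gy = exp(-linspace(-2,2,kheight).^2/2);                            % smoothing down height
kernel = gy.' * gx;                                                % 16x8 separable kernel
vidobj   = VideoReader('visionhdl_caltech.avi');
frame    = im2single(im2gray(readFrame(vidobj)));
birdseye = imwarp(frame,tf,'OutputView',imref2d([700 640]));       % approximate birds-eye view
response = imfilter(birdseye,kernel,'replicate');
figure; imagesc(response); axis image; colormap(jet); colorbar;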

lane candidate generation

after convolution with the gaussian derivative kernel, sub-region processing of the output is performed in order to find the coordinates where a lane marking is present. each region consists of 18 lines, with a ping-pong memory scheme in place to ensure that data can be continuously streamed through the subsystem.

open_system([modelname '/hdllanedetector/lanedetection/lanecandidategeneration'],'force');

histogram column count

firstly, histogramcolumncount counts the number of thresholded pixels in each column over the 18 line region. a high column count indicates that a lane is likely present in the region. this count is performed for both the positive and the negative thresholded images. the positive histogram counts are offset to account for the kernel width. lane candidates occur where the positive count and negative counts are both high. this exploits the previously noted property of the convolution output where positive tracks appear next to negative tracks.

internally, the column counting histogram generates the control signalling that selects an 18-line region, computes the column histogram, and outputs the result when ready. a ping-pong buffering scheme is in place, which allows one histogram to be read while the next is being written.
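
a behavioural matlab sketch of the column counting is shown below, assuming response is the filter output from the previous sketch; the threshold value is an assumption.

% minimal sketch: column counts of thresholded pixels over one 18-line region
regionlines = 18;
thresh      = 0.1;                                 % assumed threshold
region      = response(1:regionlines,:);           % first sub-region
poscount    = sum(region >  thresh, 1);            % positive counts per column
negcount    = sum(region < -thresh, 1);            % negative counts per column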

overlap and multiply

as noted, when a lane is present in the birds-eye image, the convolution result will produce strips of high-intensity positive output located next to strips of high-intensity negative output. the positive and negative column count histograms locate such regions. in order to amplify these locations, the positive count output is delayed by 8 clock cycles (an intrinsic parameter related to the kernel width), and the positive and negative counts are multiplied together. this amplifies columns where the positive and negative counts are in agreement, and minimizes regions where there is disagreement between the positive and negative counts. the design is pipelined in order to ensure high throughput operation.
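
continuing the behavioural sketch, the positive counts can be shifted by the kernel width and multiplied with the negative counts; poscount and negcount are assumed from the previous sketch.

% minimal sketch: align the positive counts with the negative counts and multiply
kwidth       = 8;
posshifted   = [zeros(1,kwidth) poscount(1:end-kwidth)];    % delay by the kernel width
lanestrength = posshifted .* negcount;                      % large only where both agree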

zero crossing filter

at the output of the overlap and multiply subsystem, peaks appear where there are lane markings present. a peak detection algorithm determines the columns where lane markings are present. because the snr is relatively high in the data, this example uses a simple fir filtering operation followed by zero crossing detection. the zero crossing filter is implemented using the discrete fir filter block from dsp system toolbox™. it is pipelined for high-throughput operation.
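
the sketch below illustrates the idea with a generic odd-symmetric fir kernel followed by a search for positive-to-negative transitions. the coefficients are an assumption, not those used in the model, and lanestrength is assumed from the previous sketch.

% minimal sketch: fir filtering followed by zero-crossing detection
fircoeffs = [1 1 1 0 -1 -1 -1];                    % assumed odd-symmetric kernel
zcsignal  = filter(fircoeffs,1,lanestrength);      % filtered lane strength
zerocross = find(zcsignal(1:end-1) > 0 & zcsignal(2:end) <= 0);   % peak columns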

store dominant lanes

the zero crossing filter output is then passed into the store dominant lanes subsystem. this subsystem has a maximum memory of 7 entries, and is reset every time a new batch of 18 lines is reached. therefore, for each sub-region 7 potential lane candidates are generated. in this subsystem, the zero crossing filter output is streamed through, and examined for potential zero crossings. if a zero crossing does occur, then the difference between the address immediately prior to zero crossing and the address after zero crossing is taken in order to get a measurement of the size of the peak. the subsystem stores the zero crossing locations with the highest magnitude.

open_system([modelname '/hdllanedetector/lanedetection/lanecandidategeneration/storedominantlanes'],'force');
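
a simplified matlab view of this selection is sketched below, using zcsignal and zerocross from the previous sketch; ranking by the drop in the filtered signal across each crossing is a simplification of the measure used in the model.

% minimal sketch: keep the 7 strongest zero-crossing locations per sub-region
maxlanes  = 7;
peakmag   = zcsignal(zerocross) - zcsignal(zerocross+1);   % drop across each crossing
[~,order] = sort(peakmag,'descend');
nkeep     = min(maxlanes,numel(order));
lanecols  = sort(zerocross(order(1:nkeep)));               % dominant lane columns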

compute ego lanes

the lane detection subsystem outputs the 7 most viable lane markings. in many applications, we are most interested in the lane markings that contain the lane in which the vehicle is driving. by computing these so-called "ego-lanes" on the hardware side of the design, we can reduce the memory bandwidth between hardware and software by sending 2 lanes rather than 7 to the processor. the ego-lane computation is split into two subsystems. the firstpassegolane subsystem assumes that the center column of the image corresponds to the middle of the lane when the vehicle is operating correctly within the lane boundaries. the lane candidates closest to the center are therefore taken as the ego lanes. the outlier removal subsystem maintains an average width of the distance from the lane markings to the center coordinate, and lane markers that are not within tolerance of the current width are rejected. performing early rejection of lane markers gives better results when performing curve fitting later in the design.

open_system([modelname '/hdllanedetector/computeegolanes'],'force');
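
the sketch below shows the first-pass selection only, assuming lanecols from the previous sketch and an assumed center column of 320; the running-average outlier rejection is not shown.

% minimal sketch: pick the candidates nearest the image center as the ego lanes
centercol = 320;                                   % assumed center column (bappl/2)
offsets   = lanecols - centercol;
leftego   = centercol + max(offsets(offsets < 0)); % nearest marking left of center
rightego  = centercol + min(offsets(offsets > 0)); % nearest marking right of center
egowidth  = rightego - leftego;                    % compare against a running average width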

control interface

finally, the computed ego lanes are sent to the ctrlinterface matlab function subsystem. this state machine uses the four control signal inputs (enable, hwstart, hwdone, and swstart) to determine when to start buffering, when to accept new lane coordinates into the [40x1] buffer, and finally when to indicate to the software that all 40 lane coordinates have been buffered so that lane fitting and overlay can be performed. the dataready signal ensures that software does not attempt lane fitting until all 40 coordinates have been buffered, while the swstart signal ensures that the current set of 40 coordinates is held until lane fitting is completed.
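
a behavioural matlab function sketch of this handshake is shown below; the signal names follow the text, but the state logic is simplified and is not the state machine implemented in the model.

function [dataready,lanebuffer] = ctrlinterfacesketch(enable,hwstart,hwdone,swstart,lanecoord)
% minimal behavioural sketch of the hardware/software buffering handshake
persistent buf count full swstartprev
if isempty(buf), buf = zeros(40,1); count = 0; full = false; swstartprev = false; end
if full && swstartprev && ~swstart
    full = false; count = 0;              % software finished: release the buffer
end
if enable && hwstart && ~full
    count = 0;                            % start of a new coordinate batch
end
if enable && ~full && count < 40
    count = count + 1;
    buf(count) = lanecoord;               % accept a new lane coordinate
end
if count == 40 && hwdone
    full = true;                          % all 40 coordinates buffered
end
dataready   = full;
lanebuffer  = buf;
swstartprev = swstart;
end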

software lane fit and overlay

the detected ego-lanes are then passed to the sw lane fit and overlay subsystem, where robust curve fitting and overlay is performed. recall that the birds-eye output is produced once every two frames or so rather than on every consecutive frame. the curve fitting and overlay is therefore placed in an enabled subsystem, which is only enabled when new ego lanes are produced.

open_system([modelname '/swlanefitandoverlay'],'force');

driver

the driver matlab function subsystem controls the synchronization between hardware and software. initially it is in a polling state, where it samples the dataready input at regular intervals per frame to determine when hardware has buffered a full [40x1] vector of lane coordinates. once this occurs, it transitions into a software processing state in which the swstart and process outputs are held high. the driver remains in the software processing state until the swdone input is high. because the process output loops back to the swdone input through a rate transition block, there is effectively a constant time budget for the fitlanesandoverlay subsystem to perform the fitting and overlay. when swdone is high, the driver transitions into a synchronization state, where swstart is held low to indicate to hardware that processing is complete. the synchronization between software and hardware is such that hardware holds the [40x1] vector of lane coordinates until the swstart signal transitions back to low. when this occurs, the dataready output of hardware also transitions back to low.

fit lanes and overlay

the fit lanes and overlay subsystem is enabled by the driver. it performs the necessary arithmetic required in order to fit a polynomial onto the lane coordinate data received at input, and then draws the fitted lane and lane coordinates onto the birds-eye image.

fit lanes

the fit lanes subsystem runs a ransac-based line-fitting routine on the generated lane candidates. ransac is an iterative algorithm that builds up a table of inliers based on a distance measure between the proposed curve and the input data. at the output of this subsystem, a [3x1] vector specifies the polynomial coefficients found by the ransac routine.
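
a minimal ransac-style fit is sketched below on synthetic coordinates; the iteration count, tolerance, and the quadratic model x = f(y) are assumptions, and the routine in the model may differ.

% minimal sketch: ransac-style second-order polynomial fit to lane coordinates
ycoord = (1:40).';                                 % synthetic coordinates for illustration
xcoord = 300 + 0.1*ycoord + randn(40,1);
numiter = 50; tol = 2; bestin = 0; bestp = zeros(1,3);
for k = 1:numiter
    idx = randperm(numel(ycoord),3);               % minimal sample for a quadratic
    p   = polyfit(ycoord(idx),xcoord(idx),2);      % candidate curve x = f(y)
    err = abs(polyval(p,ycoord) - xcoord);         % distance of all points to the curve
    nin = nnz(err < tol);
    if nin > bestin                                % keep the candidate with most inliers
        bestin = nin;
        bestp  = polyfit(ycoord(err < tol),xcoord(err < tol),2);
    end
end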

overlay lane markings

the overlay lane markings subsystem performs image visualization operations. it overlays the ego lanes and curves found by the lane-fitting routine.

open_system([modelname '/swlanefitandoverlay/fitlanesandoverlay'],'force');
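
the sketch below shows one way to draw such an overlay with insertMarker and insertShape, assuming birdseye from the filter sketch and bestp, xcoord, ycoord from the fitting sketch; it is an illustration rather than the subsystem's implementation.

% minimal sketch: overlay lane candidates and the fitted curve on the birds-eye image
yfit = (1:40).';                                   % rows spanned by the (synthetic) candidates
xfit = polyval(bestp,yfit);
overlaid = insertMarker(birdseye,[xcoord ycoord],'o','Color','yellow');        % candidates
overlaid = insertShape(overlaid,'Line',[xfit(1:end-1) yfit(1:end-1) ...
    xfit(2:end) yfit(2:end)],'Color','green','LineWidth',3);                   % fitted curve
figure; imshow(overlaid);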

results of the simulation

the model includes two video displays that show the simulation results. the birdseye display shows the output in the warped perspective after lane candidates have been overlaid, polynomial fitting has been performed, and the resulting polynomial has been overlaid onto the image. the originaloverlay display shows the birdseye output warped back into the original perspective.

due to the large frame sizes used in this model, simulation can take a relatively long time to complete. if you have an hdl verifier™ license, you can accelerate simulation speed by directly running the hdl lane detector subsystem in hardware using fpga in the loop.

hdl code generation

to check and generate the hdl code referenced in this example, you must have an hdl coder™ license.

to generate the hdl code, use the following command.

makehdl('lanedetectionhdl/hdllanedetector')

to generate the test bench, use the following command. note that test bench generation takes a long time due to the large data size. you may want to reduce the simulation time before generating the test bench.

makehdltb('lanedetectionhdl/hdllanedetector')

for faster test bench simulation, you can generate a systemverilog dpi-c test bench using the following command.

makehdltb('lanedetectionhdl/hdllanedetector','GenerateSVDPITestBench','ModelSim')

conclusion

this example has provided insight into the challenges of designing adas systems in general, with particular emphasis on the acceleration of critical parts of the design in hardware.

references

[1] r. k. satzoda and mohan m. trivedi, "vision based lane analysis: exploration of issues and approaches for embedded realization", 2013 ieee conference on computer vision and pattern recognition.

[2] video from caltech lanes dataset - mohamed aly, "real time detection of lane markers in urban streets", 2008 ieee intelligent vehicles symposium - used with permission.
