
FAST Corner Detection

This example shows how to perform corner detection using the features-from-accelerated-segment test (FAST) algorithm. The algorithm is suitable for FPGAs.

Corner detection is used in computer vision systems to find features in an image. It is often one of the first steps in applications like motion detection, tracking, image registration, and object recognition.

The FAST algorithm determines whether a corner is present by testing a circular area around the potential center of the corner. The test detects a corner if a contiguous segment of pixels is either brighter than the center plus a threshold or darker than the center minus a threshold. For another corner detection algorithm suited to FPGAs, see the Harris Corner Detection example.

In a software implementation, the FAST algorithm allows for a quick test to rule out potential corners by testing only the four pixels along the axes. Software implementations perform the full test only if the quick test passes. A hardware implementation can easily perform all the tests in parallel, so the quick test offers no particular advantage and is not included in this example.

The FAST algorithm can be used at many sizes or scales. This example detects corners using a sixteen-pixel circle. A corner is detected if any nine contiguous pixels in this circle meet the brighter or darker limit.
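For illustration only, the segment test can be sketched in MATLAB as follows. The function below is a hypothetical helper, not part of any toolbox: ring is assumed to be a 1-by-16 vector of ring pixel values in clockwise order, c is the center pixel value, and t is the minimum contrast threshold.

function isCorner = segmentTest(ring,c,t)
% Sketch of the 9-of-16 segment test (assumed helper, not toolbox code)
brighter = ring > c + t;
darker   = ring < c - t;
isCorner = false;
for start = 1:16
    idx = mod(start-1:start+7,16) + 1;   % 9 contiguous ring positions, with wraparound
    if all(brighter(idx)) || all(darker(idx))
        isCorner = true;
        return
    end
end
end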

MATLAB FAST Corner Detection

The Computer Vision System Toolbox™ includes a software FAST corner detection algorithm in the detectFASTFeatures function. This example uses this function as the behavioral model to compare against the FAST algorithm design for hardware in Simulink®. The function has parameters for setting the minimum contrast and the minimum quality.

The minimum contrast parameter is the threshold value that is added to or subtracted from the center pixel value before comparing it to the ring of pixels.

The minimum quality parameter controls which detected corners are "strong" enough to be marked as actual corners. The strength metric in the original FAST paper is based on summing the differences between the pixels in the circular area and the center pixel [2]. Later versions of this algorithm use a different strength metric based on the smallest change in pixel value that would make the detection no longer a corner. detectFASTFeatures uses the smallest-change metric.
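As a rough sketch of the smallest-change idea (reusing the hypothetical segmentTest helper above, not the toolbox implementation), the strength of a corner can be viewed as the largest contrast threshold at which the segment test still passes; one more count of contrast would remove the corner.

function strength = cornerStrength(ring,c)
% Sketch only: largest threshold t at which the candidate still passes the test
strength = 0;
for t = 1:255
    if segmentTest(ring,c,t)
        strength = t;   % still a corner at this contrast
    else
        break           % a larger change would no longer detect a corner
    end
end
end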

This code reads the first frame of the video, converts it to grayscale, and calls detectFASTFeatures. The result is a vector of corner locations. To display the corner locations, use the vector to draw bright green dots over the corner pixels in the output frame.

v = VideoReader('rhinos.avi');
I = rgb2gray(readFrame(v));
% Create the output RGB frame
Y = repmat(I,[1 1 3]);
corners = detectFASTFeatures(I,'MinContrast',15/255,'MinQuality',1/255);
locs = corners.Location;
for ii = 1:size(locs,1)
    Y(floor(locs(ii,2)),floor(locs(ii,1)),2) = 255; % green dot
end
imshow(Y)

Limitations of the FAST Algorithm

Other corner detection methods work very differently from the FAST method, and a surprising result is that FAST does not detect corners on computer-generated images that are perfectly aligned to the x- and y-axes. Since the detected corner must have a ring of darker or lighter pixel values around the center that includes both edges of the corner, crisp images do not work well. For example, try the FAST algorithm on the input image used in the Harris Corner Detection example.

I = imread('cornerboxes.png');
Ig = rgb2gray(I);
corners = detectFASTFeatures(Ig,'MinContrast',15/255,'MinQuality',1/255)
corners = 
  0x1 cornerPoints array with properties:
    Location: [0x2 single]
      Metric: [0x1 single]
       Count: 0

You can see that the function detected zero corners. This is because the FAST algorithm requires a ring of contrasting pixels more than halfway around the center of the corner. In the computer-generated image, both edges of a box at a corner fall inside the ring of pixels used, so the test for a corner fails. A workaround for this problem is to blur the image (by applying a Gaussian filter) so that the corners are less precise but can be detected. After blurring, the FAST algorithm detects over 100 corners.

h = fspecial('gaussian',5);
Ig = imfilter(Ig,h);
corners = detectFASTFeatures(Ig,'MinContrast',15/255,'MinQuality',1/255)
locs = corners.Location;
for ii = 1:size(locs,1)
    I(floor(locs(ii,2)),floor(locs(ii,1)),2) = 255; % green dot
end
imshow(I)
corners = 
  136x1 cornerPoints array with properties:
    Location: [136x2 single]
      Metric: [136x1 single]
       Count: 136

Behavioral Model for Verification

The Simulink model uses the detectFASTFeatures function as a behavioral model to verify the results of the hardware algorithm. You can use a MATLAB Function block to run MATLAB code in Simulink.

modelname = 'FASTCornerHDL';
open_system(modelname);
set_param(modelname,'SampleTimeColors','on');
set_param(modelname,'SimulationCommand','update');
set_param(modelname,'Open','on');
set(allchild(0),'Visible','off');

The code in a MATLAB Function block must either support C code generation or be declared extrinsic. An extrinsic declaration allows the specified function to run in MATLAB while the rest of the MATLAB Function block code runs in Simulink. The detectFASTFeatures function does not support code generation, so the MATLAB Function block must call it through an extrinsic helper function.

To allow frame-by-frame visual comparison, and the ability to vary the contrast parameter, the helper function takes an input image and the minimum contrast value as inputs. It returns an output image with green dots marking the detected corners.

function Y = fastHelper(I,minContrast)
Y = I;
corners = detectFASTFeatures(I(:,:,1),'MinContrast',double(minContrast)/255,'MinQuality',1/255);
locs = corners.Location;
for ii = 1:size(locs,1)
    Y(floor(locs(ii,2)),floor(locs(ii,1)),2) = 255; % green dot
end
end

The MATLAB Function block must have a defined size for the output array. A simple way to define the output size is to copy the input to the output before calling the helper function. This is the code inside the MATLAB Function block:

function Y = fcn(I,minContrast)
    coder.extrinsic('fastHelper');
    Y = I;
    Y = fastHelper(I,minContrast);
end

Implementation for HDL

The FAST algorithm implemented by the Vision HDL Toolbox Corner Detector block in this model tests 9 contiguous pixels from a ring of 16 pixels and compares their values to the center pixel value. A 7x7-pixel kernel around each test pixel contains the 16-pixel ring. The diagram shows the center pixel and the ring of 16 pixels around it used for the test. The ring pixels, clockwise from the top-middle, are at these indices within the 7x7 kernel:

  indices = [22 29 37 45 46 47 41 35 28 21 13 5 4 3 9 15];

These pixel indices are used for selection and comparison. The order must be contiguous, but the ring can begin at any point.
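To visualize how these linear (column-major) indices map into the 7x7 kernel, you can label each ring position with its clockwise order; the center pixel sits at position (4,4). This snippet is for illustration only.

indices = [22 29 37 45 46 47 41 35 28 21 13 5 4 3 9 15];
kernelMap = zeros(7,7);
kernelMap(indices) = 1:16;   % label each ring pixel with its clockwise order
kernelMap(4,4) = -1;         % mark the center pixel
disp(kernelMap)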

After computing corner metrics using these rings of pixels, the algorithm determines the maximum corner metric in each region and suppresses the other detected corners. The model then overlays the non-suppressed corner markers onto the original input image.

The hardware algorithm is in the FASTHDLAlgorithm subsystem. This subsystem supports HDL code generation.

open_system([modelname '/FASTHDLAlgorithm'],'force');

Corner Detection

To determine the presence of a corner, the algorithm checks every possible 9-pixel contiguous segment of the ring for values that are all greater than the center value plus the threshold, or all less than the center value minus the threshold.

In hardware, you can perform all of these comparisons in parallel. Each comparator block expands to 16 comparators. The output of the block is 16 binary decisions, one for each segment of the ring.
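A MATLAB sketch of this parallel structure (using the same ring, c, and t assumptions as the segmentTest sketch above) computes all 16 segment decisions at once rather than looping:

brighter  = ring > c + t;
darker    = ring < c - t;
segIdx    = mod((0:15)' + (0:8),16) + 1;   % 16-by-9: each row is one 9-pixel segment
segBright = all(brighter(segIdx),2);       % 16 decisions for the brighter test
segDark   = all(darker(segIdx),2);         % 16 decisions for the darker test
isCorner  = any(segBright | segDark);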

Non-Maximal Suppression

The FAST algorithm identifies many potential corners. To reduce subsequent processing, all corners except those with the maximum corner metric in a particular region can be removed, or suppressed. There are many non-maximal suppression algorithms suitable for software implementation, but few that suit hardware. Software implementations often use a gradient-based approach, which can be resource-intensive in hardware. This model uses a simple but very effective technique: compare the corner metrics in a 5x5 kernel and produce a Boolean result. The Boolean output is true if the corner metric in the center of the kernel is greater than zero (that is, it is a corner) and is also the maximum of all the corner metrics in the 5x5 region. The greater-than-zero condition corresponds to the small MinQuality value (1/255) used with the detectFASTFeatures function in the behavioral model.
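In frame-based MATLAB terms (for intuition only; the block operates on a streaming 5x5 neighborhood), this suppression rule can be sketched as follows, assuming metric is a matrix of corner metrics that is zero where no corner was detected:

% Keep a corner only if its metric is positive and is the maximum of its 5x5 neighborhood
localMax = metric == movmax(movmax(metric,5,1),5,2);
keep     = (metric > 0) & localMax;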

Because the pixel stream is processed from left to right and top to bottom, the results contain some directional effects; for example, the detected corners do not always align perfectly with the objects. The NonMaxSuppress subsystem includes a Constant block that allows you to disable suppression and visualize the complete results.

open_system([modelname '/FASTHDLAlgorithm/NonMaxSuppress'],'force');

Align and Overlay

At the output of the NonMaxSuppress subsystem, the pixel stream includes markers for the strongest corner in each 5x5 region. Next, the model realigns the detected corners with the original pixel stream using the Pixel Stream Aligner block. After the original stream and the markers are aligned in time, the model overlays a green dot on the corners. The Overlay subsystem contains an alpha mixer with constants for the color and alpha values.
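For intuition, the overlay step can be sketched frame-wise in MATLAB (the model does this on the pixel stream). Here markerColor and alphaVal are assumed constants, not values taken from the model, and cornerMask is the aligned Boolean corner image from the suppression step:

alphaVal    = 1;                      % fully opaque marker (assumed value)
markerColor = [0 255 0];              % green
Yrgb = double(repmat(I,[1 1 3]));     % I is the grayscale input frame
for p = 1:3
    plane = Yrgb(:,:,p);
    plane(cornerMask) = (1-alphaVal)*plane(cornerMask) + alphaVal*markerColor(p);
    Yrgb(:,:,p) = plane;
end
Yrgb = uint8(Yrgb);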

The output viewers show the overlaid green dots for the detected corners. The Behavioral video viewer shows the output of the detectFASTFeatures function, and the HDL video viewer shows the output of the HDL algorithm.

Going Further

The non-maximal suppression algorithm could be improved by following gradients and using a multiple-pass strategy, but that computation would also use more hardware resources.

Conclusion

This example shows how to start using detectFASTFeatures in MATLAB and then move to Simulink for the FPGA portion of the design. The hardware algorithm in the Corner Detector block includes a test of the ring around the center pixel in a kernel, and a corner strength metric. The model uses a non-maximal suppression function to remove all but the strongest detected corners. The design then overlays the corner locations onto the original video input, highlighting the corners in green.

References

[1] Rosten, E., and T. Drummond. "Fusing Points and Lines for High Performance Tracking." Proceedings of the IEEE International Conference on Computer Vision, Vol. 2 (October 2005): pp. 1508-1511.

[2] Rosten, E., and T. Drummond. "Machine Learning for High-Speed Corner Detection." Computer Vision - ECCV 2006, Lecture Notes in Computer Science, 2006, 430-443. doi:10.1007/11744023_34.
