Harris Corner Detection
This example shows how to use edge detection as the first step in corner detection. The algorithm is suitable for FPGAs.
Corner detection is used in computer vision systems to find features in an image. It is often one of the first steps in applications like motion detection, tracking, image registration, and object recognition.
A corner is intuitively defined as the intersection of two edges. This example uses the Harris & Stephens algorithm [1], in which the computation is simplified using an approximation of the eigenvalues of the Harris matrix. For another corner detection algorithm for FPGAs, see the FAST Corner Detection example.
This example model provides a hardware-compatible algorithm. You can implement this algorithm on a board using a Xilinx™ Zynq™ reference design. See Vision HDL Toolbox Support Package for Xilinx Zynq-Based Hardware.
Introduction
The system is shown below. The HDL Corner Algorithm subsystem contains a Corner Detector block with the Method parameter set to Harris.
First Step: Find the Gradients
The first step in the Harris algorithm is to find the edges in the image. The Corner Detector block uses two gradient image filters, one with a horizontal derivative kernel and one with a vertical derivative kernel, to produce the gradients $G_x$ and $G_y$. Square and cross-multiply the gradients to form $G_x^2$, $G_y^2$, and $G_{xy}$.
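As an illustration, here is a minimal behavioral sketch of this step in MATLAB. The [-1 0 1] derivative kernels and the cameraman.tif test image are assumptions made for the sketch, not necessarily the exact coefficients or data used by the Corner Detector block.

I   = im2double(imread('cameraman.tif'));   % grayscale test image (assumed)
Gx  = imfilter(I, [-1 0 1],  'replicate');  % horizontal gradient
Gy  = imfilter(I, [-1 0 1]', 'replicate');  % vertical gradient
Gx2 = Gx .* Gx;                             % squared gradients
Gy2 = Gy .* Gy;
Gxy = Gx .* Gy;                             % gradient cross-product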
Second Step: Circular Filtering
The second step of the algorithm is to perform Gaussian filtering to average $G_x^2$, $G_y^2$, and $G_{xy}$ over a circular window. The size of the circular window determines the scale of the detected corners. The block uses a 5-by-5 window. For the three components, the block uses three filters with the same filter coefficients.
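Continuing the behavioral sketch above, the averaging can be approximated with a 5-by-5 Gaussian window (sigma of 1.5, matching the coefficients computed later in this example):

h = fspecial('gaussian', [5 5], 1.5);   % circular averaging window
A = imfilter(Gx2, h, 'replicate');      % averaged Gx^2
B = imfilter(Gy2, h, 'replicate');      % averaged Gy^2
C = imfilter(Gxy, h, 'replicate');      % averaged Gxy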
Final Step: Form the Harris Matrix
The final step of the algorithm is to estimate the eigenvalues of the Harris matrix. The Harris matrix is a symmetric matrix, similar to a covariance matrix. The main diagonal is composed of the two averaged squared gradients $\overline{G_x^2}$ and $\overline{G_y^2}$. The off-diagonal elements are the averages of the gradient cross-product $\overline{G_{xy}}$. The Harris matrix is:

$$M = \begin{bmatrix} \overline{G_x^2} & \overline{G_{xy}} \\ \overline{G_{xy}} & \overline{G_y^2} \end{bmatrix}$$

where the bars denote the Gaussian-averaged values from the previous step.
Compute the Response from the Harris Matrix
The key simplification of the Harris algorithm is estimating the eigenvalues of the Harris matrix as the determinant minus the scaled trace squared:

$$R = \det(M) - k \cdot \operatorname{trace}(M)^2$$
where $k$ is a constant, typically 0.04.
The corner metric response, $R$, expressed using the gradients is:

$$R = \left(\overline{G_x^2} \cdot \overline{G_y^2} - \overline{G_{xy}}^2\right) - k\left(\overline{G_x^2} + \overline{G_y^2}\right)^2$$
When the response is larger than a predefined threshold, a corner is detected:

$$R > k_{thresh}$$
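A behavioral sketch of the response computation, continuing the A, B, and C variables from the sketch above (the threshold value here is an arbitrary illustrative choice, not the block's default):

k = 0.04;                              % Harris constant
R = (A .* B - C.^2) - k * (A + B).^2;  % corner metric response
kthresh = 0.001;                       % illustrative threshold (assumption)
corners = R > kthresh;                 % logical mask of detected corners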
Fixed-Point Settings
The overall function from input image to output corner metric response is a fourth-order polynomial. This leads to some challenges in determining the fixed-point scaling for each step of the computation. Since we are targeting FPGAs with built-in multipliers, the best strategy is to allow bit growth until the multiplier size is reached and then start to quantize results on a selective basis to stay within the bounds of the provided multipliers.
The input pixel stream is 8-bit grayscale pixel data. Computing the gradients does not add much bit growth since the filter kernel has only 1 and -1 coefficients. The result is a full-precision 9-bit signed fixed-point type.
Squaring and cross-multiplying the gradients produces signed 18-bit results, still in full precision. Many common FPGA multipliers have 18-bit or 20-bit input word lengths, so you will have to quantize at the next step.
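You can confirm this bit growth with fi objects; this short check assumes the default full-precision product mode:

g = fi(0, 1, 9, 0);     % 9-bit signed gradient sample
p = g * g;              % full-precision product
disp(p.WordLength)      % displays 18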
The next step is to apply a circular window to the three components using three image filters with Gaussian coefficients. The coefficients are quantized to 18-bit unsigned numbers to fit the FPGA multipliers. To find the best fraction precision for the coefficients, create a fixed-point number using the fi() function, specifying only the word length. In this case a fractional scaling of 21 bits is best, since the largest value in the coefficient matrix is between 1/8 and 1/16.
coeffs = fi(fspecial('gaussian',[5,5],1.5),0,18)
coeffs =

    0.0144    0.0281    0.0351    0.0281    0.0144
    0.0281    0.0547    0.0683    0.0547    0.0281
    0.0351    0.0683    0.0853    0.0683    0.0351
    0.0281    0.0547    0.0683    0.0547    0.0281
    0.0144    0.0281    0.0351    0.0281    0.0144

          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Unsigned
            WordLength: 18
        FractionLength: 21
Results of the Simulation
You can see that the resulting images from the simulation are very similar but not exactly the same. The small differences in simulation results are because the behavioral model uses C integer arithmetic rules, and the quantization is different from the HDL-ready corner detection block.
Using Simulink, you can understand these differences and decide if the errors are allowable for your application. If they are not acceptable, you can increase the bit widths of the operators, although this increases the area used in the FPGA.
HDL Code Generation
To check and generate the HDL code referenced in this example, you must have an HDL Coder™ license.
To generate the HDL code, use the following command.
makehdl('CornerDetectionHDL/HDL Corner Algorithm')
To generate the test bench, use the following command. Note that test bench generation takes a long time due to the large data size. You may want to reduce the simulation time before generating the test bench.
makehdltb('CornerDetectionHDL/HDL Corner Algorithm')
The part of this model that you can implement on an FPGA is the part between the Frame To Pixels and Pixels To Frame blocks. That is the subsystem called HDL Corner Algorithm, which includes all elements of the corner detection algorithm seen above. The rest of the model, including the Behavioral Corner Algorithm and the sources and sinks, forms our Simulink test bench.
Going Further
The Harris & Stephens algorithm is based on approximating the eigenvalues of the Harris matrix as shown above. The Harris algorithm uses $R$ as a metric, avoiding any division or square-root operations. Another way to do corner detection is to compute the actual eigenvalues.
The analytical solution for the eigenvalues of a 2-by-2 matrix is well known and can also be used in corner detection. When the eigenvalues are both positive and large with the same scale, a corner has been found:

$$\lambda_{1,2} = \frac{\operatorname{trace}(M)}{2} \pm \sqrt{\left(\frac{\operatorname{trace}(M)}{2}\right)^2 - \det(M)}$$
Substituting in our values, we get:

$$\lambda_{1,2} = \frac{\overline{G_x^2} + \overline{G_y^2}}{2} \pm \frac{1}{2}\sqrt{\left(\overline{G_x^2} - \overline{G_y^2}\right)^2 + 4\overline{G_{xy}}^2}$$
For FPGA implementation, it is important to notice the repeated value of $\left(\overline{G_x^2} - \overline{G_y^2}\right)$. We can compute this value once and then square it to combine with $4\overline{G_{xy}}^2$. This means that the eigenvalue algorithm requires only two multipliers, but at the expense of more adders and subtractors and a square-root function, which requires several multipliers on its own.
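As a behavioral sketch (not the block's implementation), the eigenvalues can be computed elementwise from the averaged terms A, B, and C defined in the earlier sketch:

d = A - B;                      % repeated term, computed once
s = sqrt(d.^2 + 4*C.^2);        % shared square-root term
lambda1 = (A + B + s) / 2;      % larger eigenvalue
lambda2 = (A + B - s) / 2;      % smaller eigenvalue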
You must then compare both eigenvalues to a constant value to make sure they are large. Since the eigenvalues scale up with image intensity, you also need to make sure they are both around the same size. You can do this by subtracting one from the other and making sure that the result is smaller than some predefined threshold value. Notice that in this subtraction, the first terms cancel out and you are left with:

$$\lambda_1 - \lambda_2 = \sqrt{\left(\overline{G_x^2} - \overline{G_y^2}\right)^2 + 4\overline{G_{xy}}^2} \le k_{thresh}$$
You can rearrange this so that it is very similar to the Harris metric above:

$$\left(\lambda_1 - \lambda_2\right)^2 = \left(\overline{G_x^2} - \overline{G_y^2}\right)^2 + 4\overline{G_{xy}}^2$$
Expanding the terms gives:

$$\left(\lambda_1 - \lambda_2\right)^2 = \left(\overline{G_x^2} + \overline{G_y^2}\right)^2 - 4\left(\overline{G_x^2} \cdot \overline{G_y^2} - \overline{G_{xy}}^2\right)$$
The similarity between the difference of the eigenvalues and the Harris metric shows how the Harris approximation works. If you rearrange the terms under the square root and swap the signs so the result must be greater than or equal to a predefined threshold, you arrive at essentially the Harris metric with some scaling.
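For completeness, here is a behavioral sketch of the eigenvalue-based corner test described above, continuing the lambda1 and lambda2 variables from the previous sketch. The two threshold values are arbitrary illustrative assumptions:

kthresh_eig = 0.0005;                             % max allowed eigenvalue difference (assumption)
kthresh_min = 0.001;                              % min required eigenvalue size (assumption)
sameScale   = (lambda1 - lambda2) <= kthresh_eig; % eigenvalues are about the same size
largeEnough = lambda2 > kthresh_min;              % both eigenvalues are large
cornersEig  = sameScale & largeEnough;            % eigenvalue-based corner mask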
References
[1] C. Harris and M. Stephens (1988). "A Combined Corner and Edge Detector." Proceedings of the 4th Alvey Vision Conference. pp. 147-151.