Noise Removal and Image Sharpening
This example shows how to implement a front-end module of an image processing design. This front-end module removes noise and sharpens the image to provide a better starting point for the subsequent processing.
An object out of focus results in a blurred image. Dead or stuck pixels on the camera or video sensor, or thermal noise from hardware components, contribute to the noise in the image. In this example, the front-end module is implemented using two pixel-stream filter blocks from Vision HDL Toolbox™. The Median Filter block removes the noise and the Image Filter block sharpens the image. The example compares the pixel-stream results with those generated by the full-frame blocks from Computer Vision Toolbox™.
This example model provides a hardware-compatible algorithm. You can implement this algorithm on a board using a Xilinx™ Zynq™ reference design. See Vision HDL Toolbox Support Package for Xilinx Zynq-Based Hardware.
Structure of the Example
Computer Vision Toolbox blocks operate on an entire frame at a time. Vision HDL Toolbox blocks operate on a stream of pixel data, one pixel at a time. The conversion blocks in Vision HDL Toolbox, Frame To Pixels and Pixels To Frame, enable you to simulate streaming-pixel designs alongside full-frame designs.
The system is shown below.
The following diagram shows the structure of the Full-Frame Behavioral Model subsystem, which consists of the frame-based Median Filter and 2-D FIR Filter blocks. As mentioned earlier, the Median Filter block removes the noise and the 2-D FIR Filter block is configured to sharpen the image.
The Pixel-Stream HDL Model subsystem contains the streaming implementation of the median filter and the 2-D FIR filter, as shown in the diagram below. You can generate HDL code from the Pixel-Stream HDL Model subsystem.
The Verification subsystem compares the results from full-frame processing with those from pixel-stream processing.
One frame of the blurred and noisy source video, its denoised version after median filtering, and the sharpened output after 2-D FIR filtering are shown from left to right in the diagram below.
Image Source
The following figure shows the Image Source subsystem.
The Image Source block imports a grayscale image, then uses a MATLAB Function block named Blur and Add Noise to blur the image and inject salt-and-pepper noise. The imfilter function uses a 3-by-3 averaging kernel to blur the image. The salt-and-pepper noise is injected by calling the imnoise(I,'salt & pepper',D) command, where D is the noise density, defined as the ratio of the combined number of salt and pepper pixels to the total number of pixels in the image. This density value is specified by the Noise Density constant block, and it must be between 0 and 1. The Image Source subsystem outputs a 2-D matrix of a full image.
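As a rough MATLAB sketch of what this subsystem does (the image file, kernel, and noise density below are assumptions for illustration, not the model's exact settings):
I = imresize(imread('rice.png'),[240 320]);   % any 8-bit grayscale image, resized to the 240p active frame size
hBlur = ones(3,3)/9;                          % 3-by-3 averaging kernel
blurred = imfilter(I,hBlur,'replicate');      % blur the image
D = 0.05;                                     % noise density, must be between 0 and 1
noisy = imnoise(blurred,'salt & pepper',D);   % inject salt-and-pepper noise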
Frame To Pixels: Generating a Pixel Stream
The Frame To Pixels block converts a full image frame to a pixel stream. The Number of components field is set to 1 for grayscale image input, and the Video format field is 240p to match that of the video source. The sample time of the video source is determined by the product of Total pixels per line and Total video lines in the Frame To Pixels block. For more information, see the Frame To Pixels block reference page.
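For reference, the equivalent conversion can be sketched in MATLAB with the Vision HDL Toolbox System object, continuing from the noisy image above (the 30 frames-per-second rate is an assumption):
frm2pix = visionhdl.FrameToPixels('NumComponents',1,'VideoFormat','240p');
[~,~,numPixPerFrm] = getparamfromfrm2pix(frm2pix);   % total pixels per frame, including blanking
pixelSampleTime = (1/30)/numPixPerFrm;               % per-pixel sample time at an assumed 30 frames per second
[pixInVec,ctrlInVec] = frm2pix(noisy);               % serialize one frame into a pixel stream plus control signals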
Pixel-Stream HDL Model
The Median Filter block is used to remove the salt-and-pepper noise. To learn more, refer to the Median Filter block reference page.
Based on the filter coefficients, the Image Filter block can blur, sharpen, or detect the edges of the recovered image after median filtering. In this example, the Image Filter block is configured to sharpen the image. To learn more, refer to the Image Filter block reference page.
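Continuing the sketch, both blocks have System object equivalents that process the stream one pixel at a time; the unsharp kernel below is an assumed choice of sharpening coefficients:
medianFilt  = visionhdl.MedianFilter('NeighborhoodSize',[3 3]);
sharpenFilt = visionhdl.ImageFilter('Coefficients',fspecial('unsharp'));
for p = 1:numPixPerFrm
    [pixMed,ctrlMed] = medianFilt(pixInVec(p),ctrlInVec(p));       % remove salt-and-pepper noise
    [pixOutVec(p),ctrlOutVec(p)] = sharpenFilt(pixMed,ctrlMed);    % sharpen; preallocate these vectors in real code
end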
Pixels To Frame: Converting a Pixel Stream Back to a Full Frame
The Pixels To Frame block converts the pixel stream back to a full frame by making use of the synchronization signals. The Number of components field and the Video format field of the Pixels To Frame block are set to 1 and 240p, respectively, to match the format of the video source.
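In the sketch, the filtered stream is reassembled the same way; validOut indicates when frmOut holds a complete image:
pix2frm = visionhdl.PixelsToFrame('NumComponents',1,'VideoFormat','240p');
[frmOut,validOut] = pix2frm(pixOutVec,ctrlOutVec);
if validOut
    imshow(frmOut)   % display the denoised and sharpened frame
end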
Verifying the Pixel-Stream Processing Design
The Verification subsystem, shown below, verifies the results from the Pixel-Stream HDL Model against the Full-Frame Behavioral Model.
The peak signal-to-noise ratio (PSNR) is calculated between the reference image and the stream-processed image. Ideally, the ratio should be Inf, indicating that the output image from the Full-Frame Behavioral Model matches that generated from the Pixel-Stream HDL Model.
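In MATLAB, the same comparison can be sketched with medfilt2, imfilter, and the psnr function, using the full-frame result as the reference; the reference settings below are assumptions, and because the streaming and full-frame filters may pad image borders differently, the measured ratio can be finite rather than Inf:
refOut = imfilter(medfilt2(noisy,[3 3]),fspecial('unsharp'),'replicate');   % full-frame behavioral reference
peakRatio = psnr(frmOut,refOut)                                             % Inf only if the two images match exactly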
Generate HDL Code and Verify Its Behavior
To check and generate the HDL code referenced in this example, you must have an HDL Coder™ license.
To generate the HDL code, use the following command:
makehdl('noiseremovalandimagesharpeninghdl/pixel-stream hdl model');
To generate the test bench, use the following command:
makehdltb('noiseremovalandimagesharpeninghdl/pixel-stream hdl model');