Gamma Correction
This example shows how to model pixel-streaming gamma correction for hardware designs. The model compares the results from the Vision HDL Toolbox™ Gamma Corrector block with the results generated by the full-frame Gamma block from Computer Vision Toolbox™.
This example model provides a hardware-compatible algorithm. You can implement this algorithm on a board using a Xilinx™ Zynq™ reference design. See Vision HDL Toolbox Support Package for Xilinx Zynq-Based Hardware.
Structure of the Example
The Computer Vision Toolbox product models at a high level of abstraction. The blocks and objects perform full-frame processing, operating on one image frame at a time. However, FPGA or ASIC systems perform pixel-stream processing, operating on one image pixel at a time. This example simulates full-frame and pixel-streaming algorithms in the same model.
The system is shown below.
The difference in the color of the lines feeding the Full-Frame Gamma Compensation and Pixel-Stream Gamma Compensation subsystems indicates the change in the image rate on the streaming branch of the model. This rate transition occurs because the pixel stream is sent out in the same amount of time as the full video frames and is therefore transmitted at a higher rate.
In this example, gamma correction is used to correct dark images. Darker images are generated by feeding the video source to the Corruption block. The video source outputs a 240p grayscale video, and the Corruption block applies a de-gamma operation to make the source video perceptually darker. Then, the downstream Full-Frame Gamma Compensation block or Pixel-Stream Gamma Compensation subsystem removes the previous de-gamma operation from the corrupted video to recover the source video.
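The two per-pixel operations can be sketched outside Simulink. This Python sketch is an illustration, not the toolbox implementation: the function names, the normalized [0, 1] pixel range, and the default exponent of 2.2 are assumptions made for clarity.

```python
# Illustrative sketch of de-gamma (corruption) and gamma (correction).
# NOT the toolbox code; names and the normalized range are assumptions.

def degamma(pixel, gamma_value=2.2):
    """Raise to the gamma power: darkens mid-tones, as the Corruption block does conceptually."""
    return pixel ** gamma_value

def gamma_correct(pixel, gamma_value=2.2):
    """Apply the inverse exponent to recover the original brightness."""
    return pixel ** (1.0 / gamma_value)

source = [0.1, 0.5, 0.9]
dark = [degamma(p) for p in source]            # perceptually darker
recovered = [gamma_correct(p) for p in dark]   # matches source in floating point
```

In floating point the two stages cancel almost exactly; the quantization effects discussed later in this example are what make the hardware round trip lossy.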
One frame of the source video, its corrupted version, and its recovered version are shown from left to right in the diagram below.
It is good practice to develop a behavioral system using blocks that process full image frames, such as the Full-Frame Gamma Compensation block in this example, before moving on to an FPGA-targeted design. Such a behavioral model helps verify the video processing design. Later on, it can serve as a reference for verifying the implementation of the algorithm targeted to an FPGA. Specifically, the lower PSNR (peak signal-to-noise ratio) block in the Result Verification section at the top level of the model compares the results from full-frame processing with those from pixel-stream processing.
Frame To Pixels: Generating a Pixel Stream
The task of the Frame To Pixels block is to convert a full-frame image to a pixel stream. To simulate the effect of the horizontal and vertical blanking periods found in real-life hardware video systems, the active image is augmented with non-image data. For more information on the streaming pixel protocol, see Streaming Pixel Interface. The Frame To Pixels block is configured as shown:
The Number of components field is set to 1 for grayscale image input, and the Video format field is set to 240p to match that of the video source.
In this example, the active video region corresponds to the 240-by-320 matrix of the dark image from the upstream Corruption block. Six other parameters, namely Total pixels per line, Total video lines, Starting active line, Ending active line, Front porch, and Back porch, specify how much non-image data is added on the four sides of the active video. For more information, see the Frame To Pixels block reference page.
Note that the sample time of the video source is determined by the product of Total pixels per line and Total video lines.
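Conceptually, the serialization can be sketched as follows. This Python sketch is hypothetical, not the toolbox code: the parameter names echo the block's fields, but the toy timing values and the exact placement of the blanking are assumptions and do not reproduce real 240p timing or the block's porch semantics.

```python
# Hypothetical sketch: pad a small active frame with blanking so each line
# lasts total_pixels_per_line cycles and the frame lasts total_video_lines
# lines. Parameter names are borrowed from the block's fields; values and
# blanking placement are illustrative assumptions.

def frame_to_pixel_stream(frame, total_pixels_per_line, total_video_lines,
                          starting_active_line, inactive_pixels_before):
    """Yield (pixel, valid) pairs; valid marks the active image region."""
    active_lines = len(frame)
    active_cols = len(frame[0])
    stream = []
    for line in range(total_video_lines):
        for col in range(total_pixels_per_line):
            row = line - (starting_active_line - 1)
            c = col - inactive_pixels_before
            if 0 <= row < active_lines and 0 <= c < active_cols:
                stream.append((frame[row][c], True))   # active pixel
            else:
                stream.append((0, False))              # blanking interval
    return stream

# A 2-by-2 "frame" inside a 3-line, 4-pixel-per-line raster:
stream = frame_to_pixel_stream([[1, 2], [3, 4]],
                               total_pixels_per_line=4, total_video_lines=3,
                               starting_active_line=2,
                               inactive_pixels_before=1)
```

The stream always lasts total_pixels_per_line * total_video_lines cycles per frame, which is why that product determines the source sample time.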
Gamma Correction
As shown in the diagram below, the Pixel-Stream Gamma Compensation subsystem contains only a Gamma Corrector block.
The Gamma Corrector block accepts the pixel stream, as well as a bus containing five synchronization signals, from the Frame To Pixels block. It passes the same set of signals to the downstream Pixels To Frame block. Maintaining this signal bundle is necessary for pixel-stream processing.
Pixels To Frame: Converting a Pixel Stream Back to a Full Frame
As a companion to the Frame To Pixels block, which converts a full image frame to a pixel stream, the Pixels To Frame block converts the pixel stream back to a full frame by making use of the synchronization signals. Because the output of the Pixels To Frame block is a 2-D matrix representing a full image, there is no need to carry the bus of five synchronization signals any further.
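A companion sketch of the reverse step, again hypothetical Python rather than toolbox code: a valid flag of the kind carried in the synchronization bus is enough to discard the blanking and rebuild the 2-D matrix, given the known active-frame dimensions.

```python
# Hypothetical sketch: rebuild a full frame from a (pixel, valid) stream.
# Function and parameter names are illustrative assumptions.

def pixels_to_frame(stream, active_lines, active_pixels_per_line):
    """Collect the valid pixels of a (pixel, valid) stream into a 2-D frame."""
    valid_pixels = [p for p, valid in stream if valid]
    assert len(valid_pixels) == active_lines * active_pixels_per_line
    return [valid_pixels[r * active_pixels_per_line:(r + 1) * active_pixels_per_line]
            for r in range(active_lines)]

# Blanking entries (valid == False) are discarded; only the image survives.
stream = [(0, False), (1, True), (2, True),
          (0, False), (3, True), (4, True)]
frame = pixels_to_frame(stream, active_lines=2, active_pixels_per_line=2)
```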
The Number of components and Video format fields of both Frame To Pixels and Pixels To Frame are set to 1 and 240p, respectively, to match the format of the video source.
Image Viewer and Result Verification
When you run the simulation, three images are displayed (refer to the images shown in the "Structure of the Example" section):
The source image given by the Image Source subsystem
The dark image produced by the Corruption block
The HDL output generated by the Pixel-Stream Gamma Compensation subsystem
The four Unit Delay blocks at the top level of the model time-align the 2-D matrices for a fair comparison.
While building the streaming portion of the design, the PSNR block continuously verifies the hdlOut result against behavioralOut, the output of the original full-frame design. During the simulation, this PSNR block should output inf, indicating that the image from the Full-Frame Gamma Compensation block matches the image generated by the streaming Pixel-Stream Gamma Compensation subsystem.
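The comparison can be sketched with the standard peak signal-to-noise ratio formula for 8-bit images; this is an assumption about the block's internals, not its actual code, but it shows why identical frames yield inf.

```python
# Sketch of a PSNR check (standard formula; an assumption about the
# toolbox block's internals, not its actual implementation).

import math

def psnr(reference, test, peak=255.0):
    """Return PSNR in dB; inf when the two frames are identical."""
    ref = [p for row in reference for p in row]
    out = [p for row in test for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(ref, out)) / len(ref)
    if mse == 0:
        return math.inf  # the expected result in this simulation
    return 10.0 * math.log10(peak ** 2 / mse)

behavioral_out = [[10, 20], [30, 40]]  # toy stand-ins for the two outputs
hdl_out = [[10, 20], [30, 40]]
```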
Exploring the Example
The example allows you to experiment with different gamma values to examine their effect on the gamma and de-gamma operations. Specifically, a workspace variable gammaValue with an initial value of 2.2 is created when the model opens. You can modify its value at the MATLAB command line as follows:
gammaValue = 4
The updated gammaValue is propagated to the Gamma field of the Corruption block, the Full-Frame Gamma Compensation block, and the Gamma Corrector block inside the Pixel-Stream Gamma Compensation subsystem. Closing the model clears gammaValue from your workspace.
Although the gamma operation is conceptually the inverse of the de-gamma operation, feeding an image through gamma followed by de-gamma (or de-gamma first, then gamma) does not necessarily restore the original image perfectly; some distortion is expected. To measure it, another PSNR block is placed between sourceImage and behavioralOut. The higher the PSNR, the less distortion has been introduced. Ideally, if the HDL output and the source image are identical, the PSNR is inf. In this example, that happens only when gammaValue equals 1 (that is, when both the gamma and de-gamma blocks pass the source image through unchanged).
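The loss can be demonstrated with a small Python sketch (illustrative only; the helper names and the 8-bit rounding model are assumptions, not taken from the example model): quantizing after each stage maps several dark source codes to the same corrupted code, so the inverse operation cannot separate them again.

```python
# Illustrative round-trip loss under 8-bit quantization. Helper names and
# the rounding model are assumptions, not the example model's code.

def to_uint8(x):
    return max(0, min(255, round(x)))

def degamma8(p, g=2.2):
    return to_uint8(255.0 * (p / 255.0) ** g)

def gamma8(p, g=2.2):
    return to_uint8(255.0 * (p / 255.0) ** (1.0 / g))

source = list(range(256))
round_trip = [gamma8(degamma8(p)) for p in source]
mismatches = sum(1 for a, b in zip(source, round_trip) if a != b)
# mismatches > 0 for gamma 2.2; with g = 1 both stages pass pixels through
# and the round trip is exact, matching the inf-PSNR case in the text.
```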
You can also use the gamma operation to corrupt a source image by making it brighter, and then apply a de-gamma correction to recover it.
Generate HDL Code and Verify Its Behavior
To check and generate the HDL code referenced in this example, you must have an HDL Coder™ license.
To generate the HDL code, use the following command:
makehdl('GammaCorrectionHDL/Pixel-Stream Gamma Compensation')
To infer a RAM to implement the lookup table used in the Gamma Corrector block, the LUTRegisterResetType property is set to none. To access this property, right-click the Gamma Corrector block inside the Pixel-Stream Gamma Compensation subsystem, and navigate to HDL Coder > HDL Block Properties.
To generate a test bench, use the following command:
makehdltb('GammaCorrectionHDL/Pixel-Stream Gamma Compensation')