
Automatically Detect and Recognize Text Using Pretrained CRAFT Network and OCR

This example shows how to perform text recognition by using a deep learning based text detector and OCR. In this example, you use a pretrained CRAFT (Character Region Awareness For Text detection) deep learning network to detect the text regions in the input image. You can modify the region threshold and the affinity threshold values of the CRAFT model to localize an entire paragraph, a sentence, or a word. Then, you use OCR to recognize the characters in the detected text regions.

Read Image

Read an image into the MATLAB® workspace.

I = imread("handicapSign.jpg");

Detect Text Regions

Detect text regions in the input image by using the detectTextCRAFT function. The CharacterThreshold value is the region threshold to use for localizing each character in the image. The LinkThreshold value is the affinity threshold that defines the score for grouping two detected texts into a single instance. You can fine-tune the detection results by modifying the region and affinity threshold values. Increase the value of the affinity threshold for more word-level and character-level detections. For information about the effect of the affinity threshold on the detection results, see the example.
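To illustrate the effect of the affinity threshold, the following is a minimal sketch that is not part of the original workflow; it reuses the image I read above, and the LinkThreshold values are illustrative assumptions rather than recommended settings.

% Illustrative sketch: vary the affinity (link) threshold to change the
% granularity of the detections. The threshold values are assumptions.

% A low LinkThreshold links more characters together, grouping the text
% into larger regions such as sentences or paragraphs.
bboxCoarse = detectTextCRAFT(I,LinkThreshold=0.005);

% A high LinkThreshold breaks more links, producing finer word-level regions.
bboxFine = detectTextCRAFT(I,LinkThreshold=0.6);

% Compare the two groupings side by side.
montage({insertShape(I,"rectangle",bboxCoarse,LineWidth=4), ...
    insertShape(I,"rectangle",bboxFine,LineWidth=4)})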

To detect each word on the parking sign, set the value of the region threshold to 0.3. The default value for the affinity threshold is 0.4. The output is a set of bounding boxes that localize the words in the image scene. The bounding boxes specify the spatial coordinates of the detected text regions in the image.

bbox = detectTextCRAFT(I,CharacterThreshold=0.3);

Draw the output bounding boxes on the image by using the insertShape function.

Iout = insertShape(I,"rectangle",bbox,LineWidth=4);

Display the input image and the output text detections.

figure(Position=[1 1 600 600]);
ax = gca;
montage({I;Iout},Parent=ax);
title("Input Image | Detected Text Regions")

Recognize Text

The ocr function performs best on images that contain dark text on a light background. Convert the input image to a binary image and invert it to obtain an image that contains dark text on a light background.

Igray = im2gray(I);
Ibinary = imbinarize(Igray);
Icomplement = imcomplement(Ibinary);
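If the default global threshold does not separate the text cleanly, for example under uneven lighting, one alternative is adaptive binarization. This is a sketch, not part of the original example, and the Sensitivity value is an illustrative assumption.

% Adaptive binarization computes a locally varying threshold, which can
% handle uneven illumination better than the default global method.
% The Sensitivity value is an assumption, not a recommended setting.
IbinaryAdaptive = imbinarize(Igray,"adaptive",Sensitivity=0.5);
IcomplementAdaptive = imcomplement(IbinaryAdaptive);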

Display the binary image and the inverted binary image.

figure(Position=[1 1 600 600]);
ax = gca;
montage({Ibinary;Icomplement},Parent=ax);
title("Binary Image | Inverted Binary Image")

Recognize the text within the bounding boxes by using the ocr function. Set the LayoutAnalysis name-value argument to "word", as the word regions are manually provided in the roi input.

output = ocr(Icomplement,bbox,LayoutAnalysis="word");
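For comparison, you can also run ocr on the entire inverted image without providing ROIs. This is not part of the original example; in that case the function performs its own layout analysis and returns a single ocrText object.

% Run ocr on the whole inverted image. Without ROIs, ocr analyzes the page
% layout itself and returns a single ocrText object with a Text property.
fullPageResult = ocr(Icomplement);
fullPageText = fullPageResult.Text;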

Display the recognized words.

recognizedWords = cat(1,output(:).Words);
figure
imshow(I)
zoom(2)
showShape("rectangle",bbox,Label=recognizedWords,Color="yellow")
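The ocr function also reports a confidence score for each recognized word through the WordConfidences property of the returned ocrText objects. As a short follow-up sketch, not part of the original example and using an assumed 0.5 cutoff, you can discard low-confidence results before using them.

% Concatenate the per-word confidence scores (one ocrText object per
% bounding box) and keep only words above an assumed 0.5 cutoff.
wordConfidences = cat(1,output(:).WordConfidences);
reliableWords = recognizedWords(wordConfidences > 0.5);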
