
Defect Detection

This example shows how to deploy a custom trained series network to detect defects in objects such as hexagon nuts. The custom networks were trained by using transfer learning. Transfer learning is commonly used in deep learning applications. You can take a pretrained network and use it as a starting point to learn a new task. Fine-tuning a network with transfer learning is usually much faster and easier than training a network with randomly initialized weights from scratch. You can quickly transfer learned features to a new task using a smaller number of training images. This example uses two trained series networks, traineddefnet.mat and trainedblemdetnet.mat.
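
The two networks used here ship already trained. For context only, the following is a minimal sketch of the transfer-learning pattern that produces such a two-class defect classifier; the pretrained starting network, folder name, and training options are illustrative assumptions, not the exact steps used to create traineddefnet or trainedblemdetnet.

% Illustrative transfer-learning sketch (not the exact training used in this example).
% Start from a pretrained series network and retarget its final layers to the
% two defect classes 'ng' and 'ok'.
pretrainedNet = alexnet;                          % requires the AlexNet support package
layers = pretrainedNet.Layers;
layers(end-2) = fullyConnectedLayer(2);           % two output classes
layers(end)   = classificationLayer;
imds = imageDatastore('defectImages', ...         % hypothetical folder of labeled images
    'IncludeSubfolders',true,'LabelSource','foldernames');
opts = trainingOptions('sgdm','InitialLearnRate',1e-4,'MaxEpochs',5);
% Images in imds must match the network input size (227-by-227-by-3 for AlexNet).
trainedNet = trainNetwork(imds,layers,opts);      % returns a SeriesNetwork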

Prerequisites

  • Xilinx ZCU102 SoC development kit

  • Deep Learning HDL Toolbox™ Support Package for Xilinx FPGA and SoC

  • Deep Learning Toolbox™

  • Deep Learning HDL Toolbox™

Load Pretrained Networks

Load the custom pretrained series network traineddefnet.

if ~isfile('traineddefnet.mat')
    url = 'https://www.mathworks.com/supportfiles/dlhdl/traineddefnet.mat';
    websave('traineddefnet.mat',url);
end
net1 = load('traineddefnet.mat');
snet_defnet = net1.custom_alexnet
snet_defnet = 
  SeriesNetwork with properties:
         Layers: [25×1 nnet.cnn.layer.Layer]
     InputNames: {'data'}
    OutputNames: {'output'}

Analyze the network. The analyzeNetwork function displays an interactive plot of the network architecture and a table containing information about the network layers.

analyzeNetwork(snet_defnet)

Load the network snet_blemdetnet.

    
   
if ~isfile('trainedblemdetnet.mat')
    url = 'https://www.mathworks.com/supportfiles/dlhdl/trainedblemdetnet.mat';
    websave('trainedblemdetnet.mat',url);
end
net2 = load('trainedblemdetnet.mat');
snet_blemdetnet = net2.convnet
snet_blemdetnet = 
  SeriesNetwork with properties:
         Layers: [12×1 nnet.cnn.layer.Layer]
     InputNames: {'imageinput'}
    OutputNames: {'classoutput'}

Analyze the network. The analyzeNetwork function displays an interactive plot of the network architecture and a table containing information about the network layers.

analyzeNetwork(snet_blemdetnet)

Create Target Object

Create a target object that has a custom name for your target device and an interface to connect your target device to the host computer. Interface options are JTAG and Ethernet. To use the JTAG connection, install the Xilinx™ Vivado™ Design Suite 2020.2.

Set the Xilinx Vivado tool path.

hdlsetuptoolpath('ToolName', 'Xilinx Vivado', 'ToolPath', 'C:\Xilinx\Vivado\2020.2\bin\vivado.bat');
ht = dlhdl.Target('Xilinx','Interface','Ethernet')
ht = 
  Target with properties:
       Vendor: 'Xilinx'
    Interface: Ethernet
    IPAddress: '192.168.1.101'
     Username: 'root'
         Port: 22
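
The target above uses an Ethernet interface. As an alternative sketch, not used in the rest of this example, a JTAG target can be created the same way, assuming the board is connected over JTAG and Vivado is on the tool path:

% Alternative interface (hypothetical variable name, not used below).
htJtag = dlhdl.Target('Xilinx','Interface','JTAG');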

Generate Bitstream to Run Network

The defect detection network contains multiple cross channel normalization layers. To support these layers on hardware, the 'LRNBlockGeneration' property of the conv module needs to be turned on in the bitstream used for FPGA inference. The shipping zcu102_single bitstream does not have this property turned on. A new bitstream can be generated using the following lines of code. The generated bitstream can be used along with a dlhdl.Workflow object for inference.

When creating a dlhdl.ProcessorConfig object for an existing shipping bitstream, make sure that the bitstream name matches the data type and the FPGA board that you are targeting. In this example, the target FPGA board is the Xilinx ZCU102 SoC board and the data type is single. Update the processor configuration with 'LRNBlockGeneration' turned on and 'SegmentationBlockGeneration' turned off. Turn the latter off to fit the deep learning IP on the FPGA and avoid overutilization of resources.

hPC = dlhdl.ProcessorConfig('Bitstream', 'zcu102_single');
hPC.setModuleProperty('conv', 'LRNBlockGeneration', 'on');
hPC.setModuleProperty('conv', 'SegmentationBlockGeneration', 'off');
dlhdl.buildProcessor(hPC)

To learn how to use the generated bitstream file, see Generate Custom Bitstream.
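
Optionally, before building, you can sanity-check that the reconfigured processor still fits the board. This is a sketch that assumes the estimateResources method of dlhdl.ProcessorConfig is available in your release:

% Optional sketch: report estimated DSP, block RAM, and LUT usage for the
% customized processor configuration before running dlhdl.buildProcessor.
hPC.estimateResources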

Create Workflow Object for traineddefnet Network

Create an object of the dlhdl.Workflow class. When you create the object, specify the network and the bitstream name. Make sure to use the generated bitstream, which enables processing of cross channel normalization layers on the FPGA. Specify the saved pretrained neural network, snet_defnet, as the network.

hw = dlhdl.Workflow('Network',snet_defnet,'Bitstream','dlprocessor.bit','Target',ht);

Compile traineddefnet Series Network

Run the compile function of the dlhdl.Workflow object.

hw.compile
### compiling network for deep learning fpga prototyping ...
### targeting fpga bitstream zcu102_single ...
### the network includes the following layers:
     1   'data'     image input                   128×128×1 images with 'zerocenter' normalization                                  (sw layer)
     2   'conv1'    convolution                   96 11×11×1 convolutions with stride [4  4] and padding [0  0  0  0]               (hw layer)
     3   'relu1'    relu                          relu                                                                              (hw layer)
     4   'norm1'    cross channel normalization   cross channel normalization with 5 channels per element                           (hw layer)
     5   'pool1'    max pooling                   3×3 max pooling with stride [2  2] and padding [0  0  0  0]                       (hw layer)
     6   'conv2'    grouped convolution           2 groups of 128 5×5×48 convolutions with stride [1  1] and padding [2  2  2  2]   (hw layer)
     7   'relu2'    relu                          relu                                                                              (hw layer)
     8   'norm2'    cross channel normalization   cross channel normalization with 5 channels per element                           (hw layer)
     9   'pool2'    max pooling                   3×3 max pooling with stride [2  2] and padding [0  0  0  0]                       (hw layer)
    10   'conv3'    convolution                   384 3×3×256 convolutions with stride [1  1] and padding [1  1  1  1]              (hw layer)
    11   'relu3'    relu                          relu                                                                              (hw layer)
    12   'conv4'    grouped convolution           2 groups of 192 3×3×192 convolutions with stride [1  1] and padding [1  1  1  1]  (hw layer)
    13   'relu4'    relu                          relu                                                                              (hw layer)
    14   'conv5'    grouped convolution           2 groups of 128 3×3×192 convolutions with stride [1  1] and padding [1  1  1  1]  (hw layer)
    15   'relu5'    relu                          relu                                                                              (hw layer)
    16   'pool5'    max pooling                   3×3 max pooling with stride [2  2] and padding [0  0  0  0]                       (hw layer)
    17   'fc6'      fully connected               4096 fully connected layer                                                        (hw layer)
    18   'relu6'    relu                          relu                                                                              (hw layer)
    19   'drop6'    dropout                       50% dropout                                                                       (hw layer)
    20   'fc7'      fully connected               4096 fully connected layer                                                        (hw layer)
    21   'relu7'    relu                          relu                                                                              (hw layer)
    22   'drop7'    dropout                       50% dropout                                                                       (hw layer)
    23   'fc8'      fully connected               2 fully connected layer                                                           (hw layer)
    24   'prob'     softmax                       softmax                                                                           (sw layer)
    25   'output'   classification output         crossentropyex with classes 'ng' and 'ok'                                         (sw layer)
3 memory regions created.
skipping: data
compiling leg: conv1>>pool5 ...
compiling leg: conv1>>pool5 ... complete.
compiling leg: fc6>>fc8 ...
compiling leg: fc6>>fc8 ... complete.
skipping: prob
skipping: output
creating schedule...
.......
creating schedule...complete.
creating status table...
......
creating status table...complete.
emitting schedule...
......
emitting schedule...complete.
emitting status table...
........
emitting status table...complete.
### allocating external memory buffers:
          offset_name          offset_address     allocated_space 
    _______________________    ______________    _________________
    "inputdataoffset"           "0x00000000"     "8.0 mb"         
    "outputresultoffset"        "0x00800000"     "4.0 mb"         
    "schedulerdataoffset"       "0x00c00000"     "4.0 mb"         
    "systembufferoffset"        "0x01000000"     "28.0 mb"        
    "instructiondataoffset"     "0x02c00000"     "4.0 mb"         
    "convweightdataoffset"      "0x03000000"     "12.0 mb"        
    "fcweightdataoffset"        "0x03c00000"     "84.0 mb"        
    "endoffset"                 "0x09000000"     "total: 144.0 mb"
### network compilation complete.
ans = struct with fields:
             weights: [1×1 struct]
        instructions: [1×1 struct]
           registers: [1×1 struct]
    syncinstructions: [1×1 struct]

Program Bitstream onto FPGA and Download Network Weights

To deploy the network on the Xilinx ZCU102 SoC hardware, run the deploy function of the dlhdl.Workflow object. This function uses the output of the compile function to program the FPGA board by using the programming file. It also downloads the network weights and biases. The deploy function starts programming the FPGA device and displays progress messages and the time it takes to deploy the network.

hw.deploy
### programming fpga bitstream using ethernet...
downloading target fpga device configuration over ethernet to sd card ...
# copied /tmp/hdlcoder_rd to /mnt/hdlcoder_rd
# copying bitstream hdlcoder_system.bit to /mnt/hdlcoder_rd
# set bitstream to hdlcoder_rd/hdlcoder_system.bit
# copying devicetree devicetree_dlhdl.dtb to /mnt/hdlcoder_rd
# set devicetree to hdlcoder_rd/devicetree_dlhdl.dtb
# set up boot for reference design: 'axi-stream ddr memory access : 3-axim'
downloading target fpga device configuration over ethernet to sd card done. the system will now reboot for persistent changes to take effect.
system is rebooting . . . . . .
### programming the fpga bitstream has been completed successfully.
### loading weights to conv processor.
### conv weights loaded. current time is 16-dec-2020 16:16:31
### loading weights to fc processor.
### 20% finished, current time is 16-dec-2020 16:16:32.
### 40% finished, current time is 16-dec-2020 16:16:32.
### 60% finished, current time is 16-dec-2020 16:16:33.
### 80% finished, current time is 16-dec-2020 16:16:34.
### fc weights loaded. current time is 16-dec-2020 16:16:34

Run Prediction for One Image

Load an image from the attached testimages folder and resize the image to match the network image input layer dimensions. Run the predict function of the dlhdl.Workflow object to retrieve and display the defect prediction from the FPGA.

wi = uint32(320);
he = uint32(240);
ch = uint32(3);
filename = fullfile(pwd,'ng1.png');
img = imread(filename);
img = imresize(img, [he, wi]);
img = mat2ocv(img);

% Extract ROI for preprocessing
[iori, imgpacked, num, bbox] = myndnet_preprocess(img);

% Row-major to column-major conversion
imgpacked2 = zeros([128,128,4],'uint8');
for c = 1:4
    for i = 1:128
        for j = 1:128
            imgpacked2(i,j,c) = imgpacked((i-1)*128 + (j-1) + (c-1)*128*128 + 1);
        end
    end
end

% Classify detected nuts by using the CNN
scores = zeros(2,4);
for i = 1:num
    [scores(:,i), speed] = hw.predict(single(imgpacked2(:,:,i)),'Profile','on');
end
### finished writing input activations.
### running single input activations.
              deep learning processor profiler performance results
                   lastframelatency(cycles)   lastframelatency(seconds)       framesnum      total latency     frames/s
                         -------------             -------------              ---------        ---------       ---------
network                   12231156                  0.05560                       1           12231156             18.0
    conv1                   414021                  0.00188 
    norm1                   172325                  0.00078 
    pool1                    56747                  0.00026 
    conv2                   654112                  0.00297 
    norm2                   119403                  0.00054 
    pool2                    43611                  0.00020 
    conv3                   777446                  0.00353 
    conv4                   595551                  0.00271 
    conv5                   404425                  0.00184 
    pool5                    17831                  0.00008 
    fc6                    1759699                  0.00800 
    fc7                    7030188                  0.03196 
    fc8                     185672                  0.00084 
 * the clock frequency of the dl processor is: 220mhz
iori = reshape(iori, [1, he*wi*ch]);
bbox = reshape(bbox, [1,16]);
scores = reshape(scores, [1, 8]);

% Insert an annotation for postprocessing
out = myndnet_postprocess(iori, num, bbox, scores, wi, he, ch);
sz = [he wi ch];
out = ocv2mat(out,sz);
imshow(out)
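
The nested loops above copy the packed row-major ROI data into column-major MATLAB pages one pixel at a time. Assuming imgpacked holds the four 128-by-128 ROIs back to back in row-major order, as the index expression implies, an equivalent vectorized conversion could be written as the following sketch:

% Vectorized equivalent (sketch): reshape the packed buffer into 128x128x4
% column-major pages, then swap rows and columns within each page.
imgpacked2 = permute(reshape(uint8(imgpacked(1:128*128*4)),[128,128,4]),[2 1 3]);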

    

Create Workflow Object for trainedblemdetnet Network

Create an object of the dlhdl.Workflow class. When you create the object, specify the network and the bitstream name. Make sure to use the generated bitstream, which enables processing of cross channel normalization layers on the FPGA. Specify the saved pretrained neural network, snet_blemdetnet, as the network.

hw = dlhdl.Workflow('Network',snet_blemdetnet,'Bitstream','dlprocessor.bit','Target',ht)

Compile trainedblemdetnet Series Network

Run the compile function of the dlhdl.Workflow object.

hw.compile
### compiling network for deep learning fpga prototyping ...
### targeting fpga bitstream zcu102_single ...
### the network includes the following layers:
     1   'imageinput'    image input                   128×128×1 images with 'zerocenter' normalization                    (sw layer)
     2   'conv_1'        convolution                   20 5×5×1 convolutions with stride [1  1] and padding [0  0  0  0]   (hw layer)
     3   'relu_1'        relu                          relu                                                                (hw layer)
     4   'maxpool_1'     max pooling                   2×2 max pooling with stride [2  2] and padding [0  0  0  0]         (hw layer)
     5   'crossnorm'     cross channel normalization   cross channel normalization with 5 channels per element             (hw layer)
     6   'conv_2'        convolution                   20 5×5×20 convolutions with stride [1  1] and padding [0  0  0  0]  (hw layer)
     7   'relu_2'        relu                          relu                                                                (hw layer)
     8   'maxpool_2'     max pooling                   2×2 max pooling with stride [2  2] and padding [0  0  0  0]         (hw layer)
     9   'fc_1'          fully connected               512 fully connected layer                                           (hw layer)
    10   'fc_2'          fully connected               2 fully connected layer                                             (hw layer)
    11   'softmax'       softmax                       softmax                                                             (sw layer)
    12   'classoutput'   classification output         crossentropyex with classes 'ng' and 'ok'                           (sw layer)
3 memory regions created.
skipping: imageinput
compiling leg: conv_1>>maxpool_2 ...
compiling leg: conv_1>>maxpool_2 ... complete.
compiling leg: fc_1>>fc_2 ...
compiling leg: fc_1>>fc_2 ... complete.
skipping: softmax
skipping: classoutput
creating schedule...
.......
creating schedule...complete.
creating status table...
......
creating status table...complete.
emitting schedule...
......
emitting schedule...complete.
emitting status table...
........
emitting status table...complete.
### allocating external memory buffers:
          offset_name          offset_address    allocated_space 
    _______________________    ______________    ________________
    "inputdataoffset"           "0x00000000"     "8.0 mb"        
    "outputresultoffset"        "0x00800000"     "4.0 mb"        
    "schedulerdataoffset"       "0x00c00000"     "4.0 mb"        
    "systembufferoffset"        "0x01000000"     "28.0 mb"       
    "instructiondataoffset"     "0x02c00000"     "4.0 mb"        
    "convweightdataoffset"      "0x03000000"     "4.0 mb"        
    "fcweightdataoffset"        "0x03400000"     "36.0 mb"       
    "endoffset"                 "0x05800000"     "total: 88.0 mb"
### network compilation complete.
ans = struct with fields:
             weights: [1×1 struct]
        instructions: [1×1 struct]
           registers: [1×1 struct]
    syncinstructions: [1×1 struct]

Program Bitstream onto FPGA and Download Network Weights

To deploy the network on the Xilinx ZCU102 SoC hardware, run the deploy function of the dlhdl.Workflow object. This function uses the output of the compile function to program the FPGA board by using the programming file. It also downloads the network weights and biases. The deploy function starts programming the FPGA device and displays progress messages and the time it takes to deploy the network.

 hw.deploy
### fpga bitstream programming has been skipped as the same bitstream is already loaded on the target fpga.
### loading weights to conv processor.
### conv weights loaded. current time is 16-dec-2020 16:16:47
### loading weights to fc processor.
### 50% finished, current time is 16-dec-2020 16:16:48.
### fc weights loaded. current time is 16-dec-2020 16:16:48

Run Prediction for One Image

Load an image from the attached testimages folder and resize the image to match the network image input layer dimensions. Run the predict function of the dlhdl.Workflow object to retrieve and display the defect prediction from the FPGA.

wi = uint32(320);
he = uint32(240);
ch = uint32(3);
filename = fullfile(pwd,'ok1.png');
img = imread(filename);
img = imresize(img, [he, wi]);
img = mat2ocv(img);

% Extract ROI for preprocessing
[iori, imgpacked, num, bbox] = myndnet_preprocess(img);

% Row-major to column-major conversion
imgpacked2 = zeros([128,128,4],'uint8');
for c = 1:4
    for i = 1:128
        for j = 1:128
            imgpacked2(i,j,c) = imgpacked((i-1)*128 + (j-1) + (c-1)*128*128 + 1);
        end
    end
end

% Classify detected nuts by using the CNN
scores = zeros(2,4);
for i = 1:num
    [scores(:,i), speed] = hw.predict(single(imgpacked2(:,:,i)),'Profile','on');
end
### finished writing input activations.
### running single input activations.
              deep learning processor profiler performance results
                   lastframelatency(cycles)   lastframelatency(seconds)       framesnum      total latency     frames/s
                         -------------             -------------              ---------        ---------       ---------
network                    4892622                  0.02224                       1            4892622             45.0
    conv_1                  467921                  0.00213 
    maxpool_1               188086                  0.00085 
    crossnorm               159500                  0.00072 
    conv_2                  397561                  0.00181 
    maxpool_2                41455                  0.00019 
    fc_1                   3614625                  0.01643 
    fc_2                     23355                  0.00011 
 * the clock frequency of the dl processor is: 220mhz
    
iori = reshape(iori, [1, he*wi*ch]);
bbox = reshape(bbox, [1,16]);
scores = reshape(scores, [1, 8]);

% Insert annotation for postprocessing
out = myndnet_postprocess(iori, num, bbox, scores, wi, he, ch);
sz = [he wi ch];
out = ocv2mat(out,sz);
imshow(out)

Quantize and Deploy trainedblemdetnet Network

The deployed trainedblemdetnet network reaches 45 frames per second. The target performance of the deployed network is 100 frames per second while staying within the target resource utilization budget. The resource utilization budget takes into consideration parameters such as memory size and onboard IO. While you can increase the resource utilization budget by choosing a larger board, doing so increases the cost. Instead, improve the deployed network performance and stay within the resource utilization budget by quantizing the network. Quantize and deploy the trainedblemdetnet network.

Load the data set as an image datastore. The imageDatastore function labels the images based on folder names and stores the data. Divide the data into calibration and validation data sets. Use 50% of the images for calibration and 50% of the images for validation. Expedite the calibration and validation process by using a subset of the calibration and validation image sets.

if ~isfile('dataset.zip')
    url = 'https://www.mathworks.com/supportfiles/dlhdl/dataset.zip';
    websave('dataset.zip',url);
end
unzip('dataset.zip')
imagedata = imageDatastore(fullfile('dataset'),...
    'IncludeSubfolders',true,'FileExtensions','.png','LabelSource','foldernames');
[calibrationdata, validationdata] = splitEachLabel(imagedata, 0.5,'randomized');
calibrationdata_reduced = calibrationdata.subset(1:20);
validationdata_reduced = validationdata.subset(1:1);

Create a quantized network by using the dlquantizer object. Set the target execution environment to FPGA.

dlquantObj = dlquantizer(snet_blemdetnet,'ExecutionEnvironment','FPGA')
dlquantObj = 
  dlquantizer with properties:
           NetworkObject: [1×1 SeriesNetwork]
    ExecutionEnvironment: 'FPGA'

Use the calibrate function to exercise the network with sample inputs and collect the range information. The calibrate function exercises the network and collects the dynamic ranges of the weights and biases in the convolution and fully connected layers of the network, and the dynamic ranges of the activations in all layers of the network. The calibrate function returns a table. Each row of the table contains range information for a learnable parameter of the quantized network.

dlquantObj.calibrate(calibrationdata_reduced)
ans=21×5 table
        optimized layer name        network layer name    learnables / activations     minvalue     maxvalue 
    ____________________________    __________________    ________________________    __________    _________
    {'conv_1_weights'          }      {'conv_1'    }           "weights"                -0.29022      0.21403
    {'conv_1_bias'             }      {'conv_1'    }           "bias"                  -0.021907    0.0053595
    {'conv_2_weights'          }      {'conv_2'    }           "weights"                -0.10499      0.13732
    {'conv_2_bias'             }      {'conv_2'    }           "bias"                  -0.010084     0.025773
    {'fc_1_weights'            }      {'fc_1'      }           "weights"               -0.051599     0.054506
    {'fc_1_bias'               }      {'fc_1'      }           "bias"                 -0.0048897    0.0072463
    {'fc_2_weights'            }      {'fc_2'      }           "weights"               -0.071356     0.064882
    {'fc_2_bias'               }      {'fc_2'      }           "bias"                  -0.062086     0.062084
    {'imageinput'              }      {'imageinput'}           "activations"                   0          255
    {'imageinput_normalization'}      {'imageinput'}           "activations"             -184.37       241.75
    {'conv_1'                  }      {'conv_1'    }           "activations"             -112.18       150.51
    {'relu_1'                  }      {'relu_1'    }           "activations"                   0       150.51
    {'maxpool_1'               }      {'maxpool_1' }           "activations"                   0       150.51
    {'crossnorm'               }      {'crossnorm' }           "activations"                   0       113.27
    {'conv_2'                  }      {'conv_2'    }           "activations"             -117.79       67.125
    {'relu_2'                  }      {'relu_2'    }           "activations"                   0       67.125
      ⋮

The trainedblemdetnet network contains a cross channel normalization layer. To support this layer on hardware, the 'LRNBlockGeneration' property of the conv module needs to be turned on in the bitstream used for FPGA inference. The shipping zcu102_int8 bitstream does not have this property turned on. A new bitstream can be generated using the following lines of code. The generated bitstream can be used along with a dlhdl.Workflow object for inference.

When creating a dlhdl.ProcessorConfig object for an existing shipping bitstream, make sure that the bitstream name matches the data type and the FPGA board that you are targeting. In this example, the target FPGA board is the Xilinx ZCU102 SoC board and the data type is int8. Update the processor configuration with 'LRNBlockGeneration' turned on and 'SegmentationBlockGeneration' turned off. Turn the latter off to fit the deep learning IP on the FPGA and avoid overutilization of resources.

% hPC = dlhdl.ProcessorConfig('Bitstream', 'zcu102_int8');
% hPC.setModuleProperty('conv', 'LRNBlockGeneration', 'on');
% hPC.setModuleProperty('conv', 'SegmentationBlockGeneration', 'off');
% dlhdl.buildProcessor(hPC)

To learn how to use the generated bitstream file, see Generate Custom Bitstream.
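
Optionally, before creating the deployment workflow, the quantized network can be checked against the held-out validationdata_reduced images. The following is a hedged sketch of that step using the dlquantizationOptions and validate functions from the model quantization workflow; the use of the generated dlprocessor.bit bitstream here is an assumption, so verify the option names and bitstream argument against your release.

% Optional validation sketch: run the quantized network on the FPGA target
% and report accuracy metrics for the reduced validation set.
quantOpts = dlquantizationOptions('Bitstream','dlprocessor.bit','Target',ht);
valResults = dlquantObj.validate(validationdata_reduced, quantOpts)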

Create an object of the dlhdl.Workflow class. When you create the object, specify the network and the bitstream name. Make sure to use the newly generated bitstream, which enables processing of cross channel normalization layers on the FPGA. Specify the saved pretrained quantized trainedblemdetnet object, dlquantObj, as the network.

hw = dlhdl.Workflow('Network', dlquantObj, 'Bitstream', 'dlprocessor.bit','Target',ht);

To compile the quantized network, run the compile function of the dlhdl.Workflow object.

hw.compile('InputFrameNumberLimit',30)
### compiling network for deep learning fpga prototyping ...
### targeting fpga bitstream zcu102_int8 ...
### the network includes the following layers:
     1   'imageinput'    image input                   128×128×1 images with 'zerocenter' normalization                    (sw layer)
     2   'conv_1'        convolution                   20 5×5×1 convolutions with stride [1  1] and padding [0  0  0  0]   (hw layer)
     3   'relu_1'        relu                          relu                                                                (hw layer)
     4   'maxpool_1'     max pooling                   2×2 max pooling with stride [2  2] and padding [0  0  0  0]         (hw layer)
     5   'crossnorm'     cross channel normalization   cross channel normalization with 5 channels per element             (hw layer)
     6   'conv_2'        convolution                   20 5×5×20 convolutions with stride [1  1] and padding [0  0  0  0]  (hw layer)
     7   'relu_2'        relu                          relu                                                                (hw layer)
     8   'maxpool_2'     max pooling                   2×2 max pooling with stride [2  2] and padding [0  0  0  0]         (hw layer)
     9   'fc_1'          fully connected               512 fully connected layer                                           (hw layer)
    10   'fc_2'          fully connected               2 fully connected layer                                             (hw layer)
    11   'softmax'       softmax                       softmax                                                             (sw layer)
    12   'classoutput'   classification output         crossentropyex with classes 'ng' and 'ok'                           (sw layer)
3 memory regions created.
skipping: imageinput
compiling leg: conv_1>>maxpool_2 ...
compiling leg: conv_1>>maxpool_2 ... complete.
compiling leg: fc_1>>fc_2 ...
compiling leg: fc_1>>fc_2 ... complete.
skipping: softmax
skipping: classoutput
creating schedule...
.........
creating schedule...complete.
creating status table...
........
creating status table...complete.
emitting schedule...
......
emitting schedule...complete.
emitting status table...
..........
emitting status table...complete.
### allocating external memory buffers:
          offset_name          offset_address    allocated_space 
    _______________________    ______________    ________________
    "inputdataoffset"           "0x00000000"     "16.0 mb"       
    "outputresultoffset"        "0x01000000"     "4.0 mb"        
    "schedulerdataoffset"       "0x01400000"     "4.0 mb"        
    "systembufferoffset"        "0x01800000"     "28.0 mb"       
    "instructiondataoffset"     "0x03400000"     "4.0 mb"        
    "convweightdataoffset"      "0x03800000"     "4.0 mb"        
    "fcweightdataoffset"        "0x03c00000"     "12.0 mb"       
    "endoffset"                 "0x04800000"     "total: 72.0 mb"
### network compilation complete.
ans = struct with fields:
             weights: [1×1 struct]
        instructions: [1×1 struct]
           registers: [1×1 struct]
    syncinstructions: [1×1 struct]

To deploy the network on the Xilinx ZCU102 SoC hardware, run the deploy function of the dlhdl.Workflow object. This function uses the output of the compile function to program the FPGA board by using the programming file. It also downloads the network weights and biases. The deploy function starts programming the FPGA device and displays progress messages and the time it takes to deploy the network.

hw.deploy
### programming fpga bitstream using ethernet...
downloading target fpga device configuration over ethernet to sd card ...
# copied /tmp/hdlcoder_rd to /mnt/hdlcoder_rd
# copying bitstream hdlcoder_system.bit to /mnt/hdlcoder_rd
# set bitstream to hdlcoder_rd/hdlcoder_system.bit
# copying devicetree devicetree_dlhdl.dtb to /mnt/hdlcoder_rd
# set devicetree to hdlcoder_rd/devicetree_dlhdl.dtb
# set up boot for reference design: 'axi-stream ddr memory access : 3-axim'
downloading target fpga device configuration over ethernet to sd card done. the system will now reboot for persistent changes to take effect.
system is rebooting .

 . . . . .
### programming the fpga bitstream has been completed successfully.
### loading weights to conv processor.
### conv weights loaded. current time is 16-dec-2020 16:18:03
### loading weights to fc processor.
### fc weights loaded. current time is 16-dec-2020 16:18:03

Load an image from the attached testimages folder and resize the image to match the network image input layer dimensions. Run the predict function of the dlhdl.Workflow object to retrieve and display the defect prediction from the FPGA.

wi = uint32(320);
he = uint32(240);
ch = uint32(3);
filename = fullfile(pwd,'ok1.png');
img = imread(filename);
img = imresize(img, [he, wi]);
img = mat2ocv(img);

% Extract ROI for preprocessing
[iori, imgpacked, num, bbox] = myndnet_preprocess(img);

% Row-major to column-major conversion
imgpacked2 = zeros([128,128,4],'uint8');
for c = 1:4
    for i = 1:128
        for j = 1:128
            imgpacked2(i,j,c) = imgpacked((i-1)*128 + (j-1) + (c-1)*128*128 + 1);
        end
    end
end

% Classify detected nuts by using the CNN
scores = zeros(2,4);
for i = 1:num
    [scores(:,i), speed] = hw.predict(single(imgpacked2(:,:,i)),'Profile','on');
end
### finished writing input activations.
### running single input activations.
              deep learning processor profiler performance results
                   lastframelatency(cycles)   lastframelatency(seconds)       framesnum      total latency     frames/s
                         -------------             -------------              ---------        ---------       ---------
network                    1754969                  0.00798                       1            1754969            125.4
    conv_1                  271340                  0.00123 
    maxpool_1                87533                  0.00040 
    crossnorm               125737                  0.00057 
    conv_2                  149972                  0.00068 
    maxpool_2                19657                  0.00009 
    fc_1                   1085683                  0.00493 
    fc_2                     14928                  0.00007 
 * the clock frequency of the dl processor is: 220mhz
    
iori = reshape(iori, [1, he*wi*ch]);
bbox = reshape(bbox, [1,16]);
scores = reshape(scores, [1, 8]);

% Insert an annotation for postprocessing
out = myndnet_postprocess(iori, num, bbox, scores, wi, he, ch);
sz = [he wi ch];
out = ocv2mat(out,sz);
imshow(out)

To test that the quantized network can identify all test cases, deploy an additional image, resize the image to match the network image input layer dimensions, and run the predict function of the dlhdl.Workflow object to retrieve and display the defect prediction from the FPGA.

wi = uint32(320);
he = uint32(240);
ch = uint32(3);
filename = fullfile(pwd,'okng.png');
img = imread(filename);
img = imresize(img, [he, wi]);
img = mat2ocv(img);

% Extract ROI for preprocessing
[iori, imgpacked, num, bbox] = myndnet_preprocess(img);

% Row-major to column-major conversion
imgpacked2 = zeros([128,128,4],'uint8');
for c = 1:4
    for i = 1:128
        for j = 1:128
            imgpacked2(i,j,c) = imgpacked((i-1)*128 + (j-1) + (c-1)*128*128 + 1);
        end
    end
end

% Classify detected nuts by using the CNN
scores = zeros(2,4);
for i = 1:num
    [scores(:,i), speed] = hw.predict(single(imgpacked2(:,:,i)),'Profile','on');
end
### finished writing input activations.
### running single input activations.
              deep learning processor profiler performance results
                   lastframelatency(cycles)   lastframelatency(seconds)       framesnum      total latency     frames/s
                         -------------             -------------              ---------        ---------       ---------
network                    1754614                  0.00798                       1            1754614            125.4
    conv_1                  271184                  0.00123 
    maxpool_1                87557                  0.00040 
    crossnorm               125768                  0.00057 
    conv_2                  149819                  0.00068 
    maxpool_2                19602                  0.00009 
    fc_1                   1085664                  0.00493 
    fc_2                     14930                  0.00007 
 * the clock frequency of the dl processor is: 220mhz
### finished writing input activations.
### running single input activations.
              deep learning processor profiler performance results
                   lastframelatency(cycles)   lastframelatency(seconds)       framesnum      total latency     frames/s
                         -------------             -------------              ---------        ---------       ---------
network                    1754486                  0.00797                       1            1754486            125.4
    conv_1                  271014                  0.00123 
    maxpool_1                87662                  0.00040 
    crossnorm               125835                  0.00057 
    conv_2                  149789                  0.00068 
    maxpool_2                19661                  0.00009 
    fc_1                   1085505                  0.00493 
    fc_2                     14930                  0.00007 
 * the clock frequency of the dl processor is: 220mhz
    
iori = reshape(iori, [1, he*wi*ch]);
bbox = reshape(bbox, [1,16]);
scores = reshape(scores, [1, 8]);

% Insert an annotation for postprocessing
out = myndnet_postprocess(iori, num, bbox, scores, wi, he, ch);
sz = [he wi ch];
out = ocv2mat(out,sz);
imshow(out)

Quantizing the network improves the performance from 45 frames per second to 125 frames per second and reduces the deployed network size from 88 MB to 72 MB.

See Also


Related Topics
