Deep Learning Prediction with NVIDIA TensorRT Library
This example shows how to generate code for a deep learning application by using the NVIDIA® TensorRT™ library. It uses the codegen command to generate a MEX file that performs prediction with a logo recognition classification network by using TensorRT. The example also demonstrates how to use the codegen command to generate a MEX file that performs 8-bit integer and 16-bit floating-point prediction.
Third-Party Prerequisites
Required
This example generates CUDA® MEX and requires a CUDA-enabled NVIDIA GPU and a compatible driver. The 8-bit integer and 16-bit floating-point precision modes require specific GPU compute capabilities; see Third-Party Hardware.
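If you are not sure which compute capability your GPU supports, one quick check (not part of the original example, and requiring Parallel Computing Toolbox) is to query the selected GPU device:
gpu = gpuDevice;        % currently selected GPU
gpu.ComputeCapability   % for example, '7.5'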
Optional
For non-MEX builds such as static or dynamic libraries or executables, you must also have:
NVIDIA CUDA toolkit.
NVIDIA cuDNN and TensorRT libraries.
Environment variables for the compilers and libraries (one possible setup is sketched after this list). For more information, see Third-Party Hardware and Setting Up the Prerequisite Products.
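The sketch below shows one way to set these environment variables from within MATLAB. The variable names follow the GPU Coder setup documentation, but the install paths are assumptions; adjust them to match your system.
% Assumed install locations; adjust to your system.
setenv('CUDA_PATH','/usr/local/cuda');
setenv('NVIDIA_CUDNN','/usr/local/cudnn');
setenv('NVIDIA_TENSORRT','/usr/local/TensorRT');
setenv('LD_LIBRARY_PATH',[getenv('LD_LIBRARY_PATH') ':/usr/local/cuda/lib64']);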
Verify GPU Environment
Use the coder.checkGpuInstall function to verify that the compilers and libraries necessary for running this example are set up correctly.
envCfg = coder.gpuEnvConfig('host');
envCfg.DeepLibTarget = 'tensorrt';
envCfg.DeepCodegen = 1;
envCfg.Quiet = 1;
coder.checkGpuInstall(envCfg);
Download and Load Pretrained Network
This example uses a pretrained logo recognition network to classify logos in images. Download the pretrained LogoNet network from the MathWorks website and load the file. The network was developed in MATLAB and is approximately 42 MB in size. It can recognize 32 logos under various lighting conditions and camera angles. For information on training the logo recognition network, see the Logo Recognition Network example.
net = getLogonet;
The logonet_predict Entry-Point Function
The logonet_predict.m entry-point function takes an image input and performs prediction on the image by using the deep learning network saved in the LogoNet.mat file. The function loads the network object from LogoNet.mat into a persistent variable logonet and reuses the persistent variable in subsequent prediction calls.
type('logonet_predict.m')
function out = logonet_predict(in)
%#codegen

% Copyright 2017-2022 The MathWorks, Inc.

% A persistent object logonet is used to load the network object. At the
% first call to this function, the persistent object is constructed and
% set up. When the function is called subsequent times, the same object is
% reused to call predict on inputs, thus avoiding reconstructing and
% reloading the network object.
persistent logonet;

if isempty(logonet)
    logonet = coder.loadDeepLearningNetwork('LogoNet.mat','logonet');
end

out = logonet.predict(in);

end
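Before generating code, you can optionally run the entry-point function directly in MATLAB to confirm it works. This check is not part of the original example; it uses the test image that appears later in the example.
% Optional MATLAB-only check of the entry-point function.
im = imread('gpucoder_tensorrt_test.png');
out = logonet_predict(single(imresize(im,[227 227])));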
Run MEX Code Generation
To generate CUDA code for the logonet_predict entry-point function, create a GPU code configuration object for a MEX target and set the target language to C++. Use the coder.DeepLearningConfig function to create a TensorRT deep learning configuration object and assign it to the DeepLearningConfig property of the GPU code configuration object. Run the codegen command, specifying an input size of 227-by-227-by-3. This value corresponds to the input layer size of the logo recognition network. By default, the generated TensorRT code runs inference in 32-bit floating-point precision.
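If you want to confirm the expected input size before generating code, one way (not part of the original example) is to inspect the network's input layer:
% The input layer of the loaded network reports the expected image size, [227 227 3].
net.Layers(1).InputSize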
cfg = coder.gpuConfig('mex');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');
codegen -config cfg logonet_predict -args {coder.typeof(single(0),[227 227 3])} -report
Code generation successful: View report
Perform Prediction on Test Image
Load an input image and call logonet_predict_mex on it.
im = imread('gpucoder_tensorrt_test.png');
im = imresize(im, [227,227]);
predict_scores = logonet_predict_mex(single(im));

% Get the top five probability scores and their labels.
[val,indx] = sort(predict_scores, 'descend');
scores = val(1:5)*100;
classnames = net.Layers(end).ClassNames;
top5labels = classnames(indx(1:5));
Display the top five classification labels.
outputImage = zeros(227,400,3,'uint8');
for k = 1:3
    outputImage(:,174:end,k) = im(:,:,k);
end

scol = 1;
srow = 20;

for k = 1:5
    outputImage = insertText(outputImage, [scol, srow], ...
        [char(top5labels(k)),' ',num2str(scores(k),'%2.2f'),'%'], ...
        'TextColor','w','FontSize',15,'BoxColor','black');
    srow = srow + 20;
end

imshow(outputImage);
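As an optional sanity check (not part of the original example), you can compare the MEX output against prediction run in MATLAB; small numeric differences are expected.
% Compare TensorRT MEX scores with MATLAB prediction on the same input.
scores_matlab = predict(net, single(im));
max(abs(predict_scores(:) - scores_matlab(:)))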
Free the GPU memory by removing the loaded MEX function.
clear mex;
Generate TensorRT Code for 8-Bit Integer Prediction
Generate TensorRT code that runs inference in int8 precision.
Code generation by using the NVIDIA TensorRT library with inference computation in 8-bit integer precision supports these additional networks:
Object detector networks, such as YOLO v2 and SSD
Regression and semantic segmentation networks
TensorRT requires a calibration data set to calibrate a network that is trained in floating point so that it can compute inference in 8-bit integer precision. Set the data type to int8 and the path to the calibration data set by using the DeepLearningConfig property. logos_dataset is a subfolder that contains images grouped by their classification labels. For int8 support, the GPU compute capability must be 6.1, 7.0, or higher.
Note that for semantic segmentation networks, the calibration data images must be of a format supported by the imread function.
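One way to spot-check the calibration data (not part of the original example, and assuming logos_dataset.zip has already been extracted as in the next step) is to load it into an image datastore:
% Read one image (this errors if the format is not supported by imread)
% and count the number of calibration images per label.
imds = imageDatastore('logos_dataset','IncludeSubfolders',true,'LabelSource','foldernames');
img = readimage(imds,1);
countEachLabel(imds)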
unzip('logos_dataset.zip');
cfg = coder.gpuConfig('mex');
cfg.TargetLang = 'C++';
cfg.GpuConfig.ComputeCapability = '6.1';
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');
cfg.DeepLearningConfig.DataType = 'int8';
cfg.DeepLearningConfig.DataPath = 'logos_dataset';
cfg.DeepLearningConfig.NumCalibrationBatches = 50;
codegen -config cfg logonet_predict -args {coder.typeof(int8(0),[227 227 3])} -report
Code generation successful: View report
Run INT8 Prediction on Test Image
Load an input image and call logonet_predict_mex on it.
im = imread('gpucoder_tensorrt_test.png');
im = imresize(im, [227,227]);
predict_scores = logonet_predict_mex(int8(im));

% Get the top five probability scores and their labels.
[val,indx] = sort(predict_scores, 'descend');
scores = val(1:5)*100;
classnames = net.Layers(end).ClassNames;
top5labels = classnames(indx(1:5));
Display the top five classification labels.
outputImage = zeros(227,400,3,'uint8');
for k = 1:3
    outputImage(:,174:end,k) = im(:,:,k);
end

scol = 1;
srow = 20;

for k = 1:5
    outputImage = insertText(outputImage, [scol, srow], ...
        [char(top5labels(k)),' ',num2str(scores(k),'%2.2f'),'%'], ...
        'TextColor','w','FontSize',15,'BoxColor','black');
    srow = srow + 20;
end

imshow(outputImage);
Free the GPU memory by removing the loaded MEX function.
clear mex;
Generate TensorRT Code for 16-Bit Floating-Point Prediction
Generate TensorRT code that runs inference in fp16 precision. For fp16 support, the GPU compute capability must be 5.3, 6.0, 6.2, or higher.
Note that quantization error occurs when operations are accumulated in single precision and the results are converted to half precision.
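A rough illustration of this rounding effect (not part of the original example; the half type requires Fixed-Point Designer, which this example already uses) is:
% Round-trip a few single-precision values through half precision and inspect the error.
x = single(rand(1,5));
err = abs(single(half(x)) - x)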
cfg = coder.gpuConfig('mex');
cfg.TargetLang = 'C++';
cfg.GpuConfig.ComputeCapability = '5.3';
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');
cfg.DeepLearningConfig.DataType = 'fp16';
codegen -config cfg logonet_predict -args {coder.typeof(half(0),[227 227 3])} -report
Code generation successful: View report
Run FP16 Prediction on Test Image
Load an input image and call logonet_predict_mex on it.
im = imread('gpucoder_tensorrt_test.png');
im = imresize(im, [227,227]);
predict_scores = logonet_predict_mex(half(im));

% Get the top five probability scores and their labels.
[val,indx] = sort(predict_scores, 'descend');
scores = val(1:5)*100;
classnames = net.Layers(end).ClassNames;
top5labels = classnames(indx(1:5));
Display the top five classification labels.
outputImage = zeros(227,400,3,'uint8');
for k = 1:3
    outputImage(:,174:end,k) = im(:,:,k);
end

scol = 1;
srow = 20;

for k = 1:5
    outputImage = insertText(outputImage, [scol, srow], ...
        [char(top5labels(k)),' ',num2str(scores(k),'%2.2f'),'%'], ...
        'TextColor','w','FontSize',15,'BoxColor','black');
    srow = srow + 20;
end

imshow(outputImage);
Free the GPU memory by removing the loaded MEX function.
clear mex;
See Also
Functions
codegen