Installing Prerequisite Products
To use GPU Coder™ for CUDA® code generation, you must install and set up the following products. For setup instructions, see .
MathWorks Products and Support Packages
MATLAB® (required).
MATLAB Coder™ (required).
Parallel Computing Toolbox™ (required).
Simulink® (required for generating code from Simulink models).
Computer Vision Toolbox™ (recommended).
Deep Learning Toolbox™ (required for deep learning).
Embedded Coder® (recommended).
Image Processing Toolbox™ (recommended).
Simulink Coder (required for generating code from Simulink models).
GPU Coder Interface for Deep Learning support package (required for deep learning).
MATLAB Coder Support Package for NVIDIA® Jetson™ and NVIDIA DRIVE® Platforms (required for deployment to embedded targets such as NVIDIA Jetson and DRIVE).
For instructions on installing MathWorks® products, see the MATLAB installation documentation for your platform. If you have installed MATLAB and want to check which other MathWorks products are installed, enter ver in the MATLAB Command Window. To install the support packages, use the Add-On Explorer in MATLAB.
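If you prefer to script this check, a minimal sketch is shown below; the product list is an example subset of the requirements above.

```matlab
% List every installed MathWorks product and its version
v = ver;

% Check for a few of the products required by GPU Coder (example subset)
required = {'MATLAB Coder', 'Parallel Computing Toolbox', 'GPU Coder'};
missing = setdiff(required, {v.Name});
if isempty(missing)
    disp('All listed products are installed.')
else
    fprintf('Missing: %s\n', strjoin(missing, ', '))
end
```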
If MATLAB is installed on a path that contains non 7-bit ASCII characters, such as Japanese characters, GPU Coder does not work because it cannot locate code generation library functions.
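A quick way to check your installation path for such characters is a sketch like the following, which flags any character code above 127 in the MATLAB root folder path:

```matlab
% Warn if the MATLAB installation path contains non 7-bit ASCII characters
p = matlabroot;
if any(double(p) > 127)
    warning('MATLAB is installed on a path with non 7-bit ASCII characters: %s', p)
end
```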
Third-Party Hardware
NVIDIA GPU enabled for CUDA with a compatible graphics driver. For more information, see .
To see the CUDA compute capability requirements for code generation, consult the following table.
Target | Compute Capability
---|---
CUDA MEX, source code, static or dynamic library, and executables | 3.2 or higher
Deep learning applications in 8-bit integer precision | 6.1, 7.0 or higher
Deep learning applications in half-precision (16-bit floating point) | 5.3, 6.0, 6.2 or higher
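To confirm that the GPU in your development computer meets these requirements, you can query it from MATLAB. The following is a minimal sketch using the Parallel Computing Toolbox gpuDevice function; the 3.2 threshold is the library/executable minimum from the table above.

```matlab
% Query the currently selected CUDA device (Parallel Computing Toolbox)
gpu = gpuDevice;
fprintf('Device: %s, compute capability: %s\n', gpu.Name, gpu.ComputeCapability);

% Example check against the minimum for standalone code generation (3.2)
if str2double(gpu.ComputeCapability) < 3.2
    warning('This GPU does not meet the minimum compute capability for standalone code generation.')
end
```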
ARM® Mali graphics processor.
For the Mali device, GPU Coder supports code generation only for deep learning networks.
Third-Party Software
Required
C/C++ compiler:

Linux® | Windows®
---|---
GCC C/C++ compiler. For supported versions, see . | Microsoft® Visual Studio® 2017, Microsoft Visual Studio 2019, or Microsoft Visual Studio 2022
Optional
For CUDA MEX, the code generator uses the NVIDIA compiler and libraries installed with MATLAB. Standalone code (static library, dynamically linked library, or executable program) generation has additional software requirements.
Software Name | Information
---|---
CUDA Toolkit | GPU Coder has been tested with CUDA Toolkit v9.x-v11.8. To download the CUDA Toolkit, see .
NVIDIA Nsight™ Systems | Generate an execution profiling report for the generated CUDA code. The report provides metrics that help you analyze your application algorithms and identify opportunities to optimize performance. GPU Coder has been tested with Nsight Systems 2022.5.1. Note: The profiling tools from NVIDIA might not support legacy GPU hardware such as the Kepler family of devices. For information on supported GPU devices, see the NVIDIA documentation.
NVIDIA CUDA Deep Neural Network library (cuDNN) for NVIDIA GPUs | For the host GPU device, GPU Coder has been tested with cuDNN v8.7. To download cuDNN, see .
NVIDIA TensorRT™ high-performance inference optimizer and runtime library | For the host GPU device, GPU Coder has been tested with TensorRT v8.5.1.7. To download TensorRT, see .
ARM Compute Library for Mali GPUs | GPU Coder has been tested with v19.05. For more information, see .
Open Source Computer Vision Library (OpenCV) | Required for the deep learning examples. For examples targeting NVIDIA GPUs on the host development computer, use OpenCV v3.1.0. For examples targeting ARM GPUs, use OpenCV v2.4.9 on the ARM target hardware. For more information, see .
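After installing the optional libraries, GPU Coder typically locates them through environment variables such as CUDA_PATH, NVIDIA_CUDNN, and NVIDIA_TENSORRT, and you can verify the setup with coder.checkGpuInstall. The following is a sketch assuming Linux-style installation paths; adjust the locations to your system.

```matlab
% Point GPU Coder at the installed libraries (example locations, adjust as needed)
setenv('CUDA_PATH', '/usr/local/cuda-11.8');
setenv('NVIDIA_CUDNN', '/usr/local/cudnn');
setenv('NVIDIA_TENSORRT', '/usr/local/TensorRT-8.5.1.7');

% Verify the host environment, including the cuDNN deep learning target
envCfg = coder.gpuEnvConfig('host');
envCfg.DeepLibTarget = 'cudnn';
envCfg.DeepCodegen = 1;
coder.checkGpuInstall(envCfg);
```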
See Also
Functions
codegen