GPU Coder Documentation
GPU Coder™ generates optimized CUDA® code from MATLAB® code and Simulink® models. The generated code includes CUDA kernels for parallelizable parts of your deep learning, embedded vision, and signal processing algorithms. For high performance, the generated code calls optimized NVIDIA® CUDA libraries, including TensorRT, cuDNN, cuFFT, cuSolver, and cuBLAS. The code can be integrated into your project as source code, static libraries, or dynamic libraries, and it can be compiled for desktops, servers, and GPUs embedded on NVIDIA Jetson™, NVIDIA DRIVE®, and other platforms. You can use the generated CUDA within MATLAB to accelerate deep learning networks and other computationally intensive portions of your algorithm. GPU Coder lets you incorporate handwritten CUDA code into your algorithms and into the generated code.
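As a minimal sketch of the workflow described above, the following generates CUDA code from a MATLAB function using `coder.gpuConfig` and the `codegen` command. The function name `saxpy` and its arguments are illustrative, not part of the product documentation:

```matlab
% saxpy.m -- hypothetical entry-point function for code generation.
% Element-wise operations like this can map to CUDA kernels.
function y = saxpy(a, x, y) %#codegen
    y = a .* x + y;
end
```

```matlab
% Create a GPU code generation configuration for a static library,
% then generate CUDA code for saxpy with example input types.
cfg = coder.gpuConfig('lib');
codegen -config cfg saxpy -args {single(2), zeros(1,4096,'single'), zeros(1,4096,'single')}
```

Passing `'mex'` instead of `'lib'` to `coder.gpuConfig` builds a MEX function you can call directly from MATLAB to accelerate the algorithm in place.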
When used with Embedded Coder®, GPU Coder lets you verify the numerical behavior of the generated code via software-in-the-loop (SIL) and processor-in-the-loop (PIL) testing.
Get Started
Learn the basics of GPU Coder
MATLAB Algorithm Design for GPU
MATLAB language syntax and functions for code generation
Kernel Creation
Algorithm structures and patterns that create CUDA GPU kernels
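One such pattern is the `coder.gpu.kernelfun` pragma, which asks GPU Coder to map the parallelizable computation inside a function to GPU kernels. A minimal sketch, with a hypothetical function name:

```matlab
% vecScale.m -- hypothetical entry-point function.
function out = vecScale(a, b)  %#codegen
    % Hint to GPU Coder: generate CUDA kernels for the
    % parallelizable computation in this function.
    coder.gpu.kernelfun;
    out = a .* b + 1;
end
```

Loops with independent iterations and element-wise array operations are the typical structures that GPU Coder turns into kernels; `coder.gpu.kernel` offers finer per-loop control.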
Performance
Troubleshoot code generation issues, improve code execution time, and reduce memory usage of generated code
Deep Learning with GPU Coder
Generate CUDA code for deep learning neural networks
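For deep learning targets, the configuration object selects which NVIDIA library the generated code calls (cuDNN or TensorRT). A sketch, assuming a hypothetical entry-point function `myPredict` that loads a network and calls `predict`:

```matlab
% Configure CUDA MEX generation that uses TensorRT for
% the deep learning layers; 'cudnn' is the other option.
cfg = coder.gpuConfig('mex');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');

% myPredict is a hypothetical entry point taking a single image.
codegen -config cfg myPredict -args {ones(224,224,3,'single')}
```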
Deployment
Deploy generated code to NVIDIA Tegra® hardware targets
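With the MATLAB Coder Support Package for NVIDIA Jetson and NVIDIA DRIVE Platforms installed, you can build and deploy the generated executable on the board from MATLAB. A sketch, assuming a reachable Jetson board and a hypothetical entry-point function `myFunction`; the hostname, credentials, and build directory are placeholders:

```matlab
% Connect to the target board (placeholder address and credentials).
hwobj = jetson('jetson-board-name','ubuntu','ubuntu');

% Configure executable generation for the Jetson target.
cfg = coder.gpuConfig('exe');
cfg.Hardware = coder.hardware('NVIDIA Jetson');
cfg.Hardware.BuildDir = '~/remoteBuildDir';

% myFunction is a hypothetical entry point with no inputs.
codegen -config cfg myFunction
```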
GPU Coder Supported Hardware
Support for third-party hardware, such as NVIDIA DRIVE and Jetson platforms