
Deep Learning Processor Customization and IP Generation

Configure, build, and generate custom bitstreams and processor IP cores; estimate and benchmark custom deep learning processor performance

Deep Learning HDL Toolbox™ provides functions to configure, build, and generate custom bitstreams and a custom processor IP core. Obtain the performance and resource utilization of a pretrained series network on the custom processor, and then optimize the custom processor by using the estimation results.

Classes

Configure custom deep learning processor

Functions

Build and generate custom processor IP
Generate calibration bitstream and path to generated bitstream files
Deploy calibration bitstream and generate calibration data file
Retrieve layer-level latencies and performance by using the estimatePerformance method
Return estimated resources used by custom bitstream configuration
Update network-specific deep learning processor configuration with optimized deep learning processor configuration
Use the getModuleProperty method to get values of module properties within the dlhdl.ProcessorConfig object
Use the setModuleProperty method to set properties of modules within the dlhdl.ProcessorConfig object
Open a generated custom layer verification model to verify your custom layers
Register the custom layer definition and Simulink® model representation of the custom layer
Verify the functionality and accuracy of the custom layer by using the generated custom layer verification model
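For example, the module property methods listed above can be used to inspect and tune the convolution module of a processor configuration. This is a minimal sketch; the ConvThreadNumber property name is shown as typically documented for the conv module, and the value used here is purely illustrative:

```matlab
% Create a default custom deep learning processor configuration.
hPC = dlhdl.ProcessorConfig;

% Set a property of the conv module (illustrative value).
hPC.setModuleProperty('conv', 'ConvThreadNumber', 16);

% Read the property back to confirm the change.
threads = hPC.getModuleProperty('conv', 'ConvThreadNumber');
```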

Topics

Custom Processor Configuration

  • Custom Processor Configuration Workflow
    Accelerate the estimation and optimization of a custom deep learning processor by configuring parameters of the conv processor and fc processor modules, created by using the dlhdl.ProcessorConfig object.
  • Deep Learning Processor IP Core Architecture
    Learn about the FPGA-based custom deep learning processor architecture and how to use it to create a MATLAB® controlled deep learning processor.

  • Analyze the deep learning network layer-level latencies and overall performance before deployment.

  • Expedite the time to identify a target hardware board that meets resource utilization budgets before deployment.

  • Rapidly prototype custom processor configurations and networks by understanding how deep learning processor parameters affect resource utilization and network performance.

  • Deploy your custom network that only has layers with the convolution module output format, or only layers with the fully connected module output format, by generating a resource-optimized custom bitstream that satisfies your performance and resource requirements.

  • Create a deep learning processor configuration that includes your custom layers.
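The estimation workflow described in the bullets above can be sketched as follows. This is an illustrative example, assuming a pretrained series network stored in a MAT-file; the file and variable names are placeholders for your own network:

```matlab
% Load a pretrained series network (file and variable names are illustrative).
data = load('pretrainedSeriesNetwork.mat');
snet = data.net;

% Create a default processor configuration and estimate before deployment.
hPC = dlhdl.ProcessorConfig;
hPC.estimatePerformance(snet);  % layer-level latencies and overall performance
hPC.estimateResources;          % estimated DSP, block RAM, and LUT utilization
```

Comparing these estimates across candidate configurations and boards lets you pick a target that fits your resource budget before running synthesis.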

Custom Processor Code Generation

  • Generate Custom Bitstream
    Rapidly prototype and iterate on custom deep learning network performance by configuring, building, and generating custom bitstreams, which you can then deploy to target FPGA and SoC boards.

  • Build and generate IP for the dlhdl.ProcessorConfig object.
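A minimal sketch of the build step, assuming a supported synthesis tool is installed; the TargetFrequency property is shown as typically documented for dlhdl.ProcessorConfig, and the value is illustrative:

```matlab
% Configure the processor and generate the custom processor IP and bitstream.
hPC = dlhdl.ProcessorConfig;
hPC.TargetFrequency = 200;   % target clock frequency in MHz (illustrative)
dlhdl.buildProcessor(hPC);   % runs IP generation and synthesis for the target
```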