Generate HDL for a Deep Learning Processor - Video
Implementing deep learning inference efficiently in edge applications requires collaboration between the design of the deep learning network and the design of the deep learning processor.
Deep Learning HDL Toolbox™ enables FPGA prototyping of deep learning networks from within MATLAB®. To increase performance or target custom hardware, you can explore trade-offs in MATLAB to converge on a custom FPGA implementation of the deep learning processor. Then a single MATLAB function drives HDL Coder™ to generate an IP core with target-independent, synthesizable RTL and AXI interfaces. It can also optionally run FPGA implementation to create a bitstream that programs the deep learning processor onto the device.
Deep Learning HDL Toolbox delivers FPGA prototyping of deep learning inference from within MATLAB, so you can quickly iterate and converge on a network that delivers the performance your system requires while meeting your FPGA constraints.
But what if you want to customize the FPGA implementation to improve performance or to target a custom board? For this, you can use MATLAB to configure the processor and to drive HDL Coder to generate an IP core with RTL and AXI interfaces.
This is all based on a deep learning processor architecture that has generic convolution and fully connected modules, so you can program your custom network and the logic that controls which layer is being run, along with its activation inputs and outputs. Since each layer's parameters need to be stored in external DDR memory, the processor also includes high-bandwidth memory access.
You can customize this deep learning processor for your system requirements, which, coupled with the ability to customize the deep learning network, gives you a lot of options for optimizing the FPGA implementation for your application.
To illustrate, let's look at an application that uses a series network trained to classify logos. Let's say we need to process 15 frames per second.
So we just load the trained network.
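As a minimal sketch of that step, where the MAT-file name logonet.mat and the variable name snet are placeholders for your own trained network:

    % Load the trained logo-classification series network from a MAT-file.
    % 'logonet.mat' and 'snet' are placeholder names for this example.
    data = load('logonet.mat', 'snet');
    snet = data.snet;   % the trained SeriesNetwork object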
And we'll set up a custom processor configuration with all default settings, running at 220 MHz. Note the data types and the number of parallel threads for the convolution module and the fully connected module. By default this targets a ZCU102 board, which is what we're using.
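In code, that setup looks roughly like this; displaying the configuration object prints the per-module data types and thread counts mentioned above:

    % Create a deep learning processor configuration with default settings.
    hPC = dlhdl.ProcessorConfig;

    % Run the processor at 220 MHz; the default target is the ZCU102 board.
    hPC.TargetFrequency = 220;

    % Display the configuration to inspect the data types and the number
    % of parallel threads for the conv and fc modules.
    hPC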
Then we apply the processor configuration to a workflow object for the trained network.
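That step is one line, binding the trained network to the custom configuration:

    % Apply the custom processor configuration to a workflow object.
    hW = dlhdl.Workflow('Network', snet, 'ProcessorConfig', hPC);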
Now we can estimate the performance of this custom processor before we deploy it. The result is the total latency here, which at 220 MHz means the frame rate would be just under 6 frames per second, and that is not going to meet our system requirement.
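A sketch of the estimation call, using the workflow object's estimate method (the exact syntax can vary between releases):

    % Estimate performance on the configured processor before deploying.
    % The report includes total latency, which at 220 MHz works out to
    % just under 6 frames per second for this network.
    hW.estimate('Performance');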
This is where it's important to collaborate, because we have options. Let's say we're committed to this board, and our deep learning expert doesn't think we can remove any layers and keep the same accuracy, but we might be able to quantize to int8. Going from 32-bit to 8-bit word lengths frees up the resources to perform more multiply-accumulates in parallel.
So we'll set up a new custom processor configuration object, with both the convolution and fully connected modules set to int8, and increase the parallel thread count by 4x for each.
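Roughly like this, assuming the shipping defaults of 16 convolution threads and 4 fully connected threads; confirm those values against your release:

    % Second configuration: int8 data types and 4x the parallel threads.
    hPC2 = dlhdl.ProcessorConfig;
    hPC2.TargetFrequency = 220;

    % Set int8 word lengths (in some releases the data type is set
    % per module rather than through this single property).
    hPC2.ProcessorDataType = 'int8';

    % Quadruple the thread counts: 16 -> 64 for conv, 4 -> 16 for fc.
    setModuleProperty(hPC2, 'conv', 'ConvThreadNumber', 64);
    setModuleProperty(hPC2, 'fc', 'FCThreadNumber', 16);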
Now we need to quantize the network itself in order to estimate its performance on the deep learning processor. You can learn more about this process in the documentation. It takes a minute to run and returns, for each layer, the numeric ranges observed over the given calibration data store. Normally we would run more calibration images and then validate with another set, but…
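A sketch of that quantization step with dlquantizer, where the calibration datastore imds is a placeholder for your own calibration images:

    % Prepare the trained network for int8 FPGA inference.
    dlq = dlquantizer(snet, 'ExecutionEnvironment', 'FPGA');

    % Calibration runs sample images through the network and records the
    % numeric range of weights, biases, and activations for each layer.
    calResults = calibrate(dlq, imds);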
Let's see the estimation results for this new processor configuration: now we're up to 16 frames per second, which is good enough for our fictional requirements.
From here, the buildProcessor function does the rest. It calls HDL Coder to generate target-independent, synthesizable RTL for the processor you've configured. If you have set up a reference design, it generates an IP core with the AXI register mapping so it plugs right into implementation. And if you've defined an implementation workflow, it runs all the way through to generating a bitstream to program the device.
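That final step is a single call; given only a processor configuration it generates the RTL and IP core, and with a reference design and implementation workflow defined it can run through to a bitstream:

    % Drive HDL Coder to generate the custom deep learning processor.
    dlhdl.buildProcessor(hPC2);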
We can take a look at the implementation results here in Vivado. We're meeting timing at the 220 MHz target, with the resource usage shown here.
This shows how powerful it can be to collaborate between the design of the deep learning network and the implementation of the deep learning processor, and how easy it is to do right in MATLAB.