Fitting AI Models for Embedded Deployment Video
AI is no longer limited to powerful computing environments such as GPUs or high-end CPUs; it is increasingly integrated into systems with limited resources, such as patient monitors, diagnostic systems in vehicles, and manufacturing equipment. Fitting AI onto hardware with limited memory and power requires deliberate trade-offs among model size, accuracy, inference speed, and power consumption, and that process is still challenging in many AI development frameworks.
Optimizing AI models for limited hardware generally proceeds in three steps:
- Model selection: identify less complex models and neural networks that still achieve the required accuracy
- Size reduction: tune the hyperparameters to generate a more compact model, or prune the neural network
- Quantization: further reduce size by quantizing model parameters (see the sketch after this list)
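
To make the quantization step concrete, here is a minimal sketch of the Deep Learning Toolbox `dlquantizer` workflow for converting a trained network's learnable parameters and activations to int8. The variables `net` (a trained network) and `calData` (a datastore of representative calibration inputs) are hypothetical placeholders, and the Model Quantization Library support package is assumed; this is an illustration, not the exact workflow shown in the video.

```matlab
% Minimal int8 quantization sketch (assumes Deep Learning Toolbox plus the
% Model Quantization Library support package; "net" and "calData" are
% hypothetical placeholders for a trained network and calibration data).
quantObj = dlquantizer(net, ExecutionEnvironment="GPU");

% Exercise the network on representative inputs to record the dynamic
% ranges used to pick int8 scaling factors for weights and activations.
calibrate(quantObj, calData);

% Generate the quantized network and inspect which layers were quantized.
qNet = quantize(quantObj);
qDetails = quantizationDetails(qNet)
```

Because int8 parameters take a quarter of the memory of single-precision values, this step alone can shrink a network's footprint by roughly 4x, at the cost of some accuracy that the calibration data helps contain.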
Additionally, especially for signal and text problems, feature extraction and selection produce more compact models. This talk demonstrates model compression techniques in MATLAB® and Simulink® by fitting a machine learning model and pruning a convolutional network for an intelligent hearing aid.
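
As a hedged illustration of the feature-extraction point, the sketch below uses mel-frequency cepstral coefficients (Audio Toolbox's `mfcc` function) as stand-ins for the talk's hearing-aid features and a size-capped decision tree as a stand-in for its machine learning model. The inputs `audioIn`, `fs`, and the per-frame labels `Y` are hypothetical placeholders.

```matlab
% Sketch: compact features plus a deliberately small classical model
% (assumes Audio Toolbox and Statistics and Machine Learning Toolbox;
% audioIn, fs, and per-frame labels Y are hypothetical placeholders).
coeffs = mfcc(audioIn, fs);            % one row of cepstral features per frame

% Capping the number of splits keeps the fitted tree small enough
% for a memory-constrained embedded target.
mdl = fitctree(coeffs, Y, MaxNumSplits=20);

% Strip the training data from the model and save it in a form that
% supports C/C++ code generation for embedded deployment.
compactMdl = compact(mdl);
saveLearnerForCoder(compactMdl, "hearingAidTree");
```

Working from a few dozen cepstral coefficients per frame rather than raw audio samples shrinks both the model's input dimension and its parameter count, which is the core of the compactness argument above.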
Featured product: Deep Learning Toolbox