What Is Half Precision? - Video
This video introduces the concept of half precision, or float16, a relatively new floating-point data type. It can be used to cut memory usage in half and has become very popular for accelerating deep learning training and inference. We also look at the benefits and the tradeoffs compared with traditional 32-bit single-precision and 64-bit double-precision data types, including for traditional control applications.
Half precision, or float16, is a relatively new floating-point data type that uses 16 bits, unlike traditional 32-bit single-precision or 64-bit double-precision data types.
So, when you declare a variable as half in MATLAB, say the number pi, you may notice some loss of precision compared with its single or double representation, as we see here.
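As a minimal sketch of that comparison, assuming the half type shipped with Fixed-Point Designer is available:

    % Compare pi stored at different precisions.
    format long
    double(pi)   % 3.141592653589793
    single(pi)   % 3.1415927
    half(pi)     % 3.140625 -- only about 3-4 significant decimal digits survive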
The difference comes from the limited number of bits used by half precision. We only have 10 bits for the fraction and 5 bits for the exponent, as opposed to 23 fraction bits and 8 exponent bits in single. Hence the eps is much larger, and the dynamic range is limited.
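These quantities follow directly from the binary16 layout (1 sign bit, 5 exponent bits, 10 fraction bits); the short sketch below just does the arithmetic:

    % Properties implied by the IEEE 754 binary16 layout.
    fracBits = 10;
    expBits  = 5;
    bias     = 2^(expBits - 1) - 1;                            % 15
    epsHalf  = 2^(-fracBits)                                   % 9.7656e-04, vs eps('single') ~ 1.19e-07
    maxHalf  = (2 - 2^(-fracBits)) * 2^(2^expBits - 2 - bias)  % 65504, the largest finite half
    minNormHalf = 2^(1 - bias)                                 % 6.1035e-05, smallest normal half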
So why is it important? Half's recent popularity comes from its usefulness in accelerating deep learning training and inference, mainly on NVIDIA GPUs, as highlighted in the articles here. In addition, both Intel and Arm platforms support half to accelerate computations.
The obvious benefit of using half precision is reducing memory usage and data bandwidth by 50%, as we see here for ResNet-50. In addition, hardware vendors provide hardware acceleration for computations in half, such as the CUDA intrinsics in the case of NVIDIA GPUs.
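A minimal illustration of the storage saving, assuming the half type from Fixed-Point Designer; the array size here is arbitrary, not the actual ResNet-50 weights:

    % The same values stored in half take half the bytes of single.
    W_single = rand(1000, 'single');
    W_half   = half(W_single);
    bytesSingle = numel(W_single) * 4    % 4,000,000 bytes
    bytesHalf   = numel(W_half)   * 2    % 2,000,000 bytes (plus a small object overhead)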
We are seeing traditional applications, such as powertrain control systems, do the same, where you may have data in the form of lookup tables, as shown in a simple illustration here. By using half as the storage type instead of double, you can reduce the memory footprint of this 2-D lookup table by 4x.
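A small sketch of that idea; the breakpoints, table values, and query point below are made up for illustration, and the half type is again assumed to be available:

    % Hypothetical 2-D table, e.g. engine speed x load -> some calibration value.
    rpm   = linspace(500, 6500, 25);           % row breakpoints
    load_ = linspace(0, 1, 17);                % column breakpoints
    tblDouble = rand(25, 17);                  % 25*17*8 bytes = 3400 bytes as double
    tblHalf   = half(tblDouble);               % 25*17*2 bytes =  850 bytes as half (4x smaller)
    % Cast back up at lookup time if the interpolant does not accept half directly.
    v = interp2(load_, rpm, double(tblHalf), 0.35, 2200);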
However, it is important to understand the tradeoff of the limited precision and range of half precision. For instance, in the case of the deep learning network, the quantization error was on the order of 10^-4, and one has to analyze how this impacts the overall accuracy of the network.
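One way to gauge that quantization error for your own data, as a sketch assuming the half type is available:

    % Measure the round-trip error introduced by storing values in half.
    x   = rand(1e5, 1);                     % sample data in [0, 1]
    err = abs(double(half(x)) - x);         % element-wise quantization error
    maxErr = max(err)                       % typically on the order of 1e-4 for values near 1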
This was a short introduction to half precision. Please refer to the links below to learn more about how to simulate and generate C/C++ or CUDA code from half in MATLAB and Simulink.