Rethinking Floating Point Overheads for Mixed Precision DNN Accelerators

Part of Proceedings of Machine Learning and Systems 3 (MLSys 2021)


Authors

Hamzah Abdelaziz, Ali Shafiee, Jong Hoon Shin, Ardavan Pedram, Joseph Hassoun

Abstract

Mixed-precision DNN accelerators are becoming increasingly common, especially when both efficient training and inference are required. In this paper, we propose a mixed-precision convolution unit architecture that supports different integer and floating-point (FP) precisions. The proposed architecture is based on low-bit inner-product units and realizes higher precisions through temporal decomposition. We illustrate how to integrate FP computations into an integer-based architecture and evaluate the overheads incurred by supporting FP arithmetic. We argue that the alignment and addition overhead of an FP inner product can be significant, since the maximum exponent difference can be as large as 58 bits, which implies wide alignment logic. To address this issue, we show empirically that at least 8 bits of alignment logic are required to maintain inference accuracy, far fewer than the 58-bit worst case. Based on these observations, we present novel optimizations that reduce the hardware overhead of FP arithmetic. Our empirical results, based on simulation and hardware implementation, show a significant reduction in FP16 overhead. Compared to a typical mixed-precision implementation, the proposed architecture achieves area efficiency improvements of up to 25% in TFLOPS/mm^2 and up to 46% in TOPS/mm^2, along with power efficiency improvements of up to 40% in TFLOPS/W and up to 63% in TOPS/W.
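As a back-of-the-envelope illustration of the 58-bit figure (our own sketch, not taken from the paper): a normal FP16 operand has an unbiased exponent between -14 and 15, so the product of two operands has an exponent between -28 and +30, and two product terms in an inner product can therefore be misaligned by up to 30 - (-28) = 58 bit positions before their significands can be added. The hypothetical Python model below computes this bound and mimics an alignment shifter whose width is capped; the function name aligned_sum and the sample terms are illustrative only, not the paper's design.

```python
# Hypothetical sketch (not from the paper): why aligning FP16 product terms
# in an inner product can require shifting across up to 58 bit positions,
# and what capping the alignment width does to the accumulated sum.

FP16_EMAX = 15    # largest unbiased exponent of a normal FP16 value
FP16_EMIN = -14   # smallest unbiased exponent of a normal FP16 value

# A product of two normal FP16 operands has an exponent in [2*EMIN, 2*EMAX].
max_prod_exp = 2 * FP16_EMAX              # +30
min_prod_exp = 2 * FP16_EMIN              # -28
print(max_prod_exp - min_prod_exp)        # 58: worst-case alignment distance


def aligned_sum(products, align_bits):
    """Toy fixed-point model of an FP inner-product adder: align each
    (significand, exponent) term to the largest exponent, dropping any term
    whose required right shift exceeds the shifter width `align_bits`."""
    max_exp = max(exp for _, exp in products)
    total = 0
    for sig, exp in products:
        shift = max_exp - exp
        if shift <= align_bits:
            total += sig >> shift         # bits shifted past the adder are lost
        # else: the term falls entirely outside the narrow adder window
    return total


# Two product terms as (integer significand, unbiased exponent) pairs;
# FP16 products carry 22-bit significands, hence the 22-bit sample values.
terms = [(0x3FF000, 10),                  # dominant term
         (0x200000, -2)]                  # small term, 12 positions below
print(aligned_sum(terms, align_bits=58))  # 4190720: full-width alignment
print(aligned_sum(terms, align_bits=8))   # 4190208: the small term is dropped
```

In this toy model, a term that must be shifted far to the right contributes only low-order bits to the sum, which is the intuition for why a narrow alignment shifter can preserve inference accuracy despite the 58-bit worst case.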