Córais Research Group

Scalar Arithmetic Multiple Data: Customizable Precision for Deep Neural Networks

Andrew Anderson, David Gregg
26th IEEE Symposium on Computer Arithmetic (ARITH 2019), Kyoto, Japan, 2019
https://doi.org/10.1109/ARITH.2019.00018

Quantization of weights and activations in Deep Neural Networks (DNNs) is a powerful technique for network compression, and has enjoyed significant attention and success. However, much of the inference-time benefit of quantization is accessible only through custom hardware accelerators or FPGA implementations of quantized arithmetic.

Building on prior work, we show how to construct very fast implementations of arbitrary bit-precise signed and unsigned integer operations using a software technique that logically embeds a vector architecture, with lanes of custom bit width, in fixed-width scalar arithmetic. At the strongest level of quantization, our approach yields maximum speedups of ~6x on an x86 platform and ~10x on an ARM platform over quantization to native 8-bit integers.
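To illustrate the lane-embedding idea, the following minimal sketch packs sixteen 4-bit unsigned lanes into one 64-bit word and adds them lane-wise with a single scalar add. This is the classic SIMD-within-a-register style of construction, not the paper's exact SAMD scheme; the mask constants and the function name add_u4x16 are illustrative, not taken from the paper.

#include <stdint.h>
#include <stdio.h>

/* Illustrative masks for sixteen 4-bit lanes in a 64-bit word. */
#define LANE_HI 0x8888888888888888ULL  /* top bit of each 4-bit lane  */
#define LANE_LO 0x7777777777777777ULL  /* low three bits of each lane */

/* Lane-wise modular addition of sixteen 4-bit unsigned lanes with one
 * scalar add. Masking off the top bit of every lane guarantees that no
 * carry can propagate across a lane boundary; the correct top bit of
 * each lane is then restored with XOR. (Sketch only; the paper's SAMD
 * construction differs in its details.) */
static uint64_t add_u4x16(uint64_t a, uint64_t b) {
    uint64_t low = (a & LANE_LO) + (b & LANE_LO); /* carry-safe add   */
    return low ^ ((a ^ b) & LANE_HI);             /* fix lane top bits */
}

int main(void) {
    uint64_t a = 0x0000000000000097ULL; /* lane 0 = 7, lane 1 = 9 */
    uint64_t b = 0x0000000000000021ULL; /* lane 0 = 1, lane 1 = 2 */
    /* Expect lane 0 = 8 and lane 1 = 0xB: prints ...00b8. */
    printf("%016llx\n", (unsigned long long)add_u4x16(a, b));
    return 0;
}

In this setting one 64-bit scalar addition behaves like a sixteen-lane vector addition, which is the sense in which a vector architecture with custom bit-width lanes can be logically embedded in fixed-width scalar arithmetic.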