Efficient and Compact Representations of Deep Neural Networks via Entropy Coding

Abstract

Matrix operations are nowadays central to many Machine Learning techniques, and in particular to Deep Neural Networks (DNNs), whose inference is essentially a sequence of dot-product operations. How to store these matrices and execute their operations efficiently is an increasingly pressing problem. In this article we propose two new lossless compression schemes for real-valued matrices that support efficient vector-matrix multiplication in the compressed format and are specifically suited to DNN compression. Building on several recent studies that use weight pruning and quantization to reduce the complexity of DNN inference, our schemes are expressly designed to benefit from both, that is, from input matrices characterized by low entropy. In particular, our solutions take advantage of the depth of the model: the deeper the model, the higher the efficiency. Moreover, we derive space upper bounds for both variants in terms of the source entropy. Experiments show that our tools compare favourably, in terms of energy and space efficiency, against state-of-the-art matrix compression approaches, including Compressed Linear Algebra (CLA) and Compressed Shared Elements Row (CSER), the latter explicitly proposed in the context of DNN compression.
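
The paper's actual encodings and entropy bounds are developed in the full text; the following minimal Python sketch (an illustration, not the proposed format) only shows why low-entropy matrices produced by pruning and quantization admit fast compressed dot products. When a row contains just a few distinct non-zero weights, the product with an input vector needs one multiplication per distinct value, after summing the input entries that share it, a principle similar to that of shared-element row formats such as CSER.

```python
import numpy as np

def compress_rows(W):
    """Toy shared-value row format: for each row, map every distinct
    non-zero value to the list of column indices where it occurs.
    (Illustrative only -- not the paper's encoding.)"""
    rows = []
    for row in W:
        groups = {}
        for j, w in enumerate(row):
            if w != 0.0:
                groups.setdefault(float(w), []).append(j)
        rows.append(groups)
    return rows

def matvec(compressed, x):
    """Compute y = W @ x on the compressed rows: one multiplication
    per distinct value instead of one per non-zero entry."""
    y = np.zeros(len(compressed))
    for i, groups in enumerate(compressed):
        y[i] = sum(w * x[idx].sum() for w, idx in groups.items())
    return y

# Example: a pruned, quantized weight matrix with few distinct non-zeros.
W = np.array([[0.5, 0.0, 0.5, -0.25],
              [0.0, -0.25, 0.5, 0.0]])
x = np.random.randn(4)
assert np.allclose(matvec(compress_rows(W), x), W @ x)
```

The fewer distinct values per row (i.e., the lower the entropy of the weight distribution), the fewer multiplications such a grouped representation performs, which is the regime the proposed schemes are designed to exploit.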

Publication
IEEE Access
Giosuè Cataldo Marinò
Research contractor
Flavio Furia
Research contractor
Dario Malchiodi
Associate professor

Professor of Data Analytics and member of the UNIMI team

Marco Frasca
Assistant professor

Researcher in Machine Learning and AI of the UNIMI unit