Exploiting Subword Permutations to Maximize CNN Compute Performance and Efficiency

Authors
Michael Beyer, Sven Gesper, Andre Guntoro, Guillermo Paya-Vaya, Holger Blume
Abstract

Neural networks (NNs) are quantized to decrease their computational demands and reduce their memory footprint. However, specialized hardware that supports computations at low bit widths is required to take advantage of such optimizations. In this work, we propose subword-level permutations that build on multi-bit-width multiply-accumulate operations to efficiently support low-bit-width computations of quantized NNs. Applying this technique extends data reuse and further improves compute performance for convolution operations compared to plain vectorization using SIMD (single instruction, multiple data). We perform a design space exploration using cycle-accurate simulation of MobileNet and VGG16 on a vector-based processor. The results show a speedup of up to 3.7× and a reduction of required data transfers by up to 1.9×. Additionally, the control overhead for orchestrating the computation is reduced by up to 3.9×.
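The abstract's core idea, reusing one loaded packed word across several multiply-accumulate steps by permuting its subwords instead of reloading data, can be illustrated with a minimal C sketch. This is not the paper's implementation: the helper names (rotate_lanes, broadcast, packed_mac), the 8-bit lane width, and the 4-point circular convolution are all assumptions, chosen so that a pure lane rotation realigns the data exactly; on a vector processor with subword support, the rotation and the lane-wise MAC would each map to single instructions rather than C loops.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical subword permutation: rotate the four 8-bit lanes of a
 * 32-bit word down by one lane, so result lane i holds input lane
 * (i + 1) mod 4. */
static uint32_t rotate_lanes(uint32_t w) {
    return (w >> 8) | (w << 24);
}

/* Replicate one signed 8-bit value into all four lanes. */
static uint32_t broadcast(int8_t v) {
    uint32_t b = (uint8_t)v;
    return b | (b << 8) | (b << 16) | (b << 24);
}

/* Lane-wise MAC, acc[i] += a.lane[i] * b.lane[i]; stands in for a
 * packed SIMD multiply-accumulate instruction. */
static void packed_mac(int32_t acc[4], uint32_t a, uint32_t b) {
    for (int i = 0; i < 4; i++) {
        int8_t ai = (int8_t)(a >> (8 * i));
        int8_t bi = (int8_t)(b >> (8 * i));
        acc[i] += (int32_t)ai * (int32_t)bi;
    }
}

int main(void) {
    int8_t x[4] = {1, 2, 3, 4};   /* quantized activations */
    int8_t h[3] = {1, 0, -1};     /* quantized filter taps  */

    /* Load all four inputs once, as a single packed word. */
    uint32_t xw = 0;
    for (int i = 0; i < 4; i++)
        xw |= (uint32_t)(uint8_t)x[i] << (8 * i);

    /* 4-point circular convolution: y[j] = sum_k x[(j+k) % 4] * h[k].
     * Every tap reuses the same loaded word, realigned by a subword
     * rotation instead of a fresh memory access. */
    int32_t y[4] = {0, 0, 0, 0};
    for (int k = 0; k < 3; k++) {
        packed_mac(y, xw, broadcast(h[k]));
        xw = rotate_lanes(xw);    /* align inputs for the next tap */
    }

    for (int j = 0; j < 4; j++)
        printf("y[%d] = %d\n", j, (int)y[j]);
    return 0;
}

In this sketch the four inputs are fetched once and realigned in registers for each of the three taps, so the tap loop issues one load instead of three; this register-level reuse is the kind of data-transfer reduction the abstract quantifies.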

Organizational unit(s)
Institut für Mikroelektronische Systeme
External organization(s)
Robert Bosch GmbH
Technische Universität Braunschweig
Type
Article in conference proceedings
Pages
61-68
Number of pages
8
Publication date
2023
Publication status
Published
Peer-reviewed
Yes
ASJC Scopus subject areas
Hardware and Architecture, Computer Networks and Communications
Electronic version(s)
https://doi.org/10.1109/ASAP57973.2023.00023 (Access: Closed)