A Study on Hardware-Aware Training Techniques for Feedforward Artificial Neural Networks

Sajjad Parvin, Mustafa Altun

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper presents hardware-aware training techniques for efficient hardware implementation of feedforward artificial neural networks (ANNs). First, we investigate the effect of weight initialization on the hardware implementation of the trained ANN on a chip. We show that our unorthodox initialization technique can yield better area efficiency than state-of-the-art weight initialization techniques. Second, we propose training based on large floating-point values, so that at the end of training the algorithm obtains a weight-set of integer numbers simply by ceiling/flooring the large floating-point values. Third, the large floating-point training algorithm is integrated with a weight and bias value approximation module that approximates the weight-set while optimizing the ANN for accuracy, in order to find a weight-set efficient for hardware realization. At the end of training, this integrated module generates the weight-set with minimum hardware cost for that specific initialized weight-set. All the introduced algorithms are included in our toolbox, ZAAL. The trained ANNs are then realized in hardware under constant-multiplication design, using parallel and time-multiplexed architectures, in TSMC 40 nm technology in Cadence.
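The integer weight-set idea from the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the scale factor, the helper name `to_integer_weights`, and the use of NumPy are all assumptions for demonstration. The point shown is only the final step the abstract describes, mapping large floating-point weights to integers by ceiling/flooring.

```python
import numpy as np

def to_integer_weights(weights, scale=1000.0):
    """Hypothetical sketch: scale trained floating-point weights up so they
    become large values, then map each to an integer by flooring when the
    fractional part is below 0.5 and ceiling otherwise."""
    scaled = weights * scale
    frac = scaled - np.floor(scaled)
    # floor or ceil, whichever integer is nearer
    return np.where(frac < 0.5, np.floor(scaled), np.ceil(scaled)).astype(int)

w = np.array([0.1234, -0.5678, 0.9999])  # toy trained weights
print(to_integer_weights(w))  # integer weight-set, e.g. [123, -568, 1000]
```

In an actual hardware-aware flow, such integer weights enable constant-multiplication designs (shift-and-add networks) instead of general-purpose multipliers; the scale factor here stands in for whatever magnitude the training procedure drives the weights toward.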

Original language: English
Title of host publication: Proceedings - 2021 IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2021
Publisher: IEEE Computer Society
Pages: 406-411
Number of pages: 6
ISBN (Electronic): 9781665439466
DOIs
Publication status: Published - Jul 2021
Event: 20th IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2021 - Tampa, United States
Duration: 7 Jul 2021 → 9 Jul 2021

Publication series

Name: Proceedings of IEEE Computer Society Annual Symposium on VLSI, ISVLSI
Volume: 2021-July
ISSN (Print): 2159-3469
ISSN (Electronic): 2159-3477

Conference

Conference: 20th IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2021
Country/Territory: United States
City: Tampa
Period: 7/07/21 → 9/07/21

Bibliographical note

Publisher Copyright:
© 2021 IEEE.

Keywords

  • artificial neural networks
  • hardware-aware training
  • parallel and time-multiplexed architecture
  • weight-set approximation
