Efficient hardware implementation of artificial neural networks using approximate multiply-accumulate blocks

Mohammadreza Esmali Nojehdeh, Levent Aksoy, Mustafa Altun

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

15 Citations (Scopus)

Abstract

In this paper, we explore the efficient hardware implementation of feedforward artificial neural networks (ANNs) using approximate adders and multipliers. We also introduce an approximate multiplier with a simple structure that leads to a considerable reduction in ANN hardware complexity. Because a parallel architecture requires a large area, the ANNs are implemented under a time-multiplexed architecture in which computing resources are re-used in the multiply-accumulate (MAC) blocks. The efficient hardware implementation of ANNs is realized by replacing the exact adders and multipliers in the MAC blocks with approximate ones, taking the hardware accuracy into account. Experimental results show that ANNs designed using the proposed approximate multiplier have smaller area and consume less energy than those designed using previously proposed prominent approximate multipliers. It is also observed that using both approximate adders and multipliers yields up to a 64% reduction in energy consumption and a 43% reduction in area of the ANN design, with a slight decrease in hardware accuracy, when compared to exact adders and multipliers.
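
The record above does not describe the internal structure of the proposed approximate multiplier, so the Python sketch below is only a rough, hypothetical illustration of the general idea: a truncation-based approximate multiplier reused serially inside a time-multiplexed MAC block. The names approx_multiply and mac_neuron, the fixed-point integer operands, and the choice of truncation are assumptions made for illustration, not details taken from the paper.

    # Hypothetical sketch: a truncation-based approximate multiplier inside a
    # time-multiplexed MAC loop (one multiplier/adder pair reused per neuron).
    def approx_multiply(a: int, b: int, truncated_bits: int = 2) -> int:
        # Drop the low-order bits of the first operand; this removes the partial
        # products generated by those bits, trading accuracy for area and energy.
        return (a & ~((1 << truncated_bits) - 1)) * b

    def mac_neuron(inputs, weights, bias: int = 0) -> int:
        # Serial accumulation: one input-weight product per cycle, so the same
        # approximate multiplier and adder are reused for every connection.
        acc = bias
        for x, w in zip(inputs, weights):
            acc += approx_multiply(x, w)
        return acc

    # Example: one neuron with fixed-point inputs and weights.
    x, w = [120, 45, 200], [13, -7, 3]
    print(mac_neuron(x, w), "approximate vs", sum(a * b for a, b in zip(x, w)), "exact")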

Original language: English
Title of host publication: Proceedings - 2020 IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2020
Publisher: IEEE Computer Society
Pages: 96-101
Number of pages: 6
ISBN (Electronic): 9781728157757
DOIs
Publication status: Published - Jul 2020
Event: 19th IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2020 - Limassol, Cyprus
Duration: 6 Jul 2020 - 8 Jul 2020

Publication series

Name: Proceedings of IEEE Computer Society Annual Symposium on VLSI, ISVLSI
Volume: 2020-July
ISSN (Print): 2159-3469
ISSN (Electronic): 2159-3477

Conference

Conference: 19th IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2020
Country/Territory: Cyprus
City: Limassol
Period: 6/07/20 - 8/07/20

Bibliographical note

Publisher Copyright:
© 2020 IEEE.

Funding

ACKNOWLEDGEMENT: This work is supported by the TUBITAK-1001 projects #117E078 and #119E507, and by Istanbul Technical University BAP.

Funders (funder numbers):
Istanbul Technical University BAP
TUBITAK-1001 (119E507, 117E078)
