Abstract
In this paper, we propose an efficient method to realize a convolution layer of convolutional neural networks (CNNs). Inspired by the fully-connected neural network architecture, we introduce an efficient computation approach to implementing convolution operations. To reduce hardware complexity, we also implement convolutional layers under a time-multiplexed architecture in which computing resources are reused across the multiply-accumulate (MAC) blocks. A comprehensive evaluation of convolution layers shows that, compared to the conventional MAC-based method, our proposed method reduces dissipated power by up to 97% and computation time by up to 50%.
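To make the abstract's terminology concrete, the following is a minimal software sketch of convolution expressed as sequential multiply-accumulate (MAC) operations, in the spirit of the time-multiplexed architecture described above. This is an illustrative model only, not the paper's hardware implementation; the function name and structure are assumptions.

```python
def conv2d_mac(image, kernel):
    """2D convolution (valid padding) computed as a sequence of
    multiply-accumulate (MAC) operations. A single accumulator is
    reused for every output pixel, mimicking a time-multiplexed MAC
    unit that is shared across the computation (illustrative sketch,
    not the paper's actual hardware design)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = [[0] * (iw - kw + 1) for _ in range(ih - kh + 1)]
    for r in range(ih - kh + 1):
        for c in range(iw - kw + 1):
            acc = 0  # one accumulator register, reused per output pixel
            for i in range(kh):
                for j in range(kw):
                    # one multiply-accumulate per "cycle"
                    acc += image[r + i][c + j] * kernel[i][j]
            out[r][c] = acc
    return out
```

In hardware, reusing one MAC unit this way trades throughput for area and power, which is the complexity/latency trade-off the abstract's 97% power and 50% time figures quantify.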
Original language | English |
---|---|
Title of host publication | Proceedings - 2021 IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2021 |
Publisher | IEEE Computer Society |
Pages | 402-405 |
Number of pages | 4 |
ISBN (Electronic) | 9781665439466 |
DOIs | |
Publication status | Published - Jul 2021 |
Event | 20th IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2021 - Tampa, United States |
Duration | 7 Jul 2021 → 9 Jul 2021 |
Publication series
Name | Proceedings of IEEE Computer Society Annual Symposium on VLSI, ISVLSI |
---|---|
Volume | 2021-July |
ISSN (Print) | 2159-3469 |
ISSN (Electronic) | 2159-3477 |
Conference
Conference | 20th IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2021 |
---|---|
Country/Territory | United States |
City | Tampa |
Period | 7/07/21 → 9/07/21 |
Bibliographical note
Publisher Copyright: © 2021 IEEE.
Funding
ACKNOWLEDGEMENT: This work is supported by TUBITAK-1001 project #119E507 and Istanbul Technical University BAP project #42446.
Funders | Funder number |
---|---|
TUBITAK-1001 | 119E507 |
Istanbul Teknik Üniversitesi | 42446 |