Abstract
This paper presents techniques and design structures to reduce the hardware complexity of a time-multiplexed feed-forward artificial neural network (ANN). After the ANN weights are determined in the training phase, a post-training stage first finds the minimum quantization value used to convert the floating-point weights to integers. Then, the integer weights of each neuron are tuned to reduce the hardware complexity of the time-multiplexed design without degrading the ANN accuracy in hardware. Also, at each layer of the ANN, the multiplications of the integer weights by an input variable at each time step are realized under a shift-adds architecture using a minimum number of adders and subtractors. It is observed that applying the post-training stage yields a significant reduction in area, latency, and energy consumption for time-multiplexed designs that include multipliers. Moreover, a multiplierless design of the ANN, whose weights are found in the post-training stage, leads to a further reduction in area and energy consumption at a slight increase in latency.
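To make the two post-training steps concrete, the sketch below is a minimal, hypothetical illustration and not the authors' implementation: it quantizes floating-point weights by scaling with a power-of-two quantization value and realizes a constant multiplication with shifts and adds/subtracts via canonical signed digit (CSD) recoding. The helper names (`quantize_weights`, `csd_digits`, `multiply_shift_add`) and the power-of-two scaling are assumptions; the paper's actual method searches for the minimum quantization value, tunes the integer weights per neuron, and shares adders across the constants of a layer, none of which this sketch attempts.

```python
import numpy as np

def quantize_weights(weights, qbits):
    # Scale floats by 2**qbits and round to the nearest integer.
    # (Hypothetical helper: the paper searches for the minimum
    # quantization value preserving accuracy; that search is
    # not reproduced here.)
    return np.round(np.asarray(weights) * (1 << qbits)).astype(np.int64)

def csd_digits(c):
    # Canonical signed digit recoding of a non-negative integer:
    # digits in {-1, 0, +1}, least-significant first, with no two
    # adjacent nonzero digits.
    digits = []
    while c != 0:
        if c & 1:
            d = 2 - (c % 4)  # +1 if c mod 4 == 1, -1 if c mod 4 == 3
            digits.append(d)
            c -= d
        else:
            digits.append(0)
        c >>= 1
    return digits

def multiply_shift_add(x, c):
    # Multiplierless product x*c realized with shifts and
    # adds/subtracts only, one operation per nonzero CSD digit.
    acc = 0
    for shift, d in enumerate(csd_digits(abs(c))):
        if d == 1:
            acc += x << shift
        elif d == -1:
            acc -= x << shift
    return -acc if c < 0 else acc

# Example: the weight 0.875 quantized with qbits = 3 becomes 7,
# and x*7 is computed as (x << 3) - x with a single subtractor.
w_int = quantize_weights([0.875], 3)[0]          # -> 7
assert multiply_shift_add(10, w_int) == 10 * 7
```

In a time-multiplexed layer, one such shift-adds network would be reused across inputs, which is where sharing adders among the layer's constants (the optimization the paper targets) pays off.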
Original language | English |
---|---|
Title of host publication | 2020 IEEE International Symposium on Circuits and Systems, ISCAS 2020 - Proceedings |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
ISBN (Electronic) | 9781728133201 |
Publication status | Published - 2020 |
Event | 52nd IEEE International Symposium on Circuits and Systems, ISCAS 2020 - Virtual, Online; Duration: 10 Oct 2020 → 21 Oct 2020 |
Publication series
Name | Proceedings - IEEE International Symposium on Circuits and Systems |
---|---|
Volume | 2020-October |
ISSN (Print) | 0271-4310 |
Conference
Conference | 52nd IEEE International Symposium on Circuits and Systems, ISCAS 2020 |
---|---|
City | Virtual, Online |
Period | 10/10/20 → 21/10/20 |
Bibliographical note
Publisher Copyright: © 2020 IEEE
Funding
This work is funded by the TUBITAK-1001 project #117E078.
Funders | Funder number |
---|---|
TUBITAK-1001 | 117E078 |