Abstract
Binarised neural networks, which represent both weights and activations as single-bit values, offer computationally attractive solutions across a variety of applications. A lightweight binarised neural network can be built using only logic gates and counters together with a two-valued activation function unit. However, because binarised networks encode weights and neuron outputs in a single bit, they are sensitive to bit-flipping errors. To cope with this error sensitivity, the binarised weights and neuron values are manipulated as bitstreams using stochastic computing. Stochastic computing is shown to provide robustness against bit errors in the data while resting on a simple hardware structure, whose implementation is further simplified by a novel subtraction-free realisation of the neuron activation.
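The bit-error robustness attributed to stochastic computing above can be illustrated with a small sketch. This is an illustrative model only, not the letter's hardware implementation: a value is encoded as the fraction of 1s in a unipolar bitstream (here generated in Python for clarity; the function names and stream length are assumptions), so any single flipped bit perturbs the decoded value by only 1/length, unlike a conventional binary word where a flipped high-order bit can change the value drastically.

```python
import random

def to_bitstream(p, length, rng):
    """Encode a probability p in [0, 1] as a unipolar stochastic
    bitstream: each bit is 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def decode(stream):
    """Decode a unipolar bitstream: the value is the fraction of 1s."""
    return sum(stream) / len(stream)

def flip(stream, n, rng):
    """Model bit errors by flipping n distinct, randomly chosen bits."""
    out = list(stream)
    for i in rng.sample(range(len(out)), n):
        out[i] ^= 1
    return out

rng = random.Random(0)
stream = to_bitstream(0.75, 1024, rng)
noisy = flip(stream, 8, rng)  # 8 bit flips out of 1024 bits

# Each flip moves the decoded value by exactly 1/1024, so even with
# 8 errors the decoded value drifts by at most 8/1024 ≈ 0.008.
print(abs(decode(stream) - decode(noisy)))
```

In a conventional 10-bit binary word, by contrast, one flip in the most significant bit changes the value by 512/1024; the stochastic representation has no such high-weight positions, which is the robustness property the abstract refers to.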
Original language | English |
---|---|
Pages (from-to) | 219-222 |
Number of pages | 4 |
Journal | Electronics Letters |
Volume | 57 |
Issue number | 5 |
DOIs | |
Publication status | Published - Mar 2021 |
Bibliographical note
Publisher Copyright: © 2021 The Authors. Electronics Letters published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
Funding
This Ph.D. dissertation is supported by ITU BAP with grant MDK-2018-41532. The study is being collaboratively continued in UCLouvain, ICTEAM during Sercan Aygun's Ph.D.-related research visit in the group led by Prof. C. De Vleeschouwer.
Funders | Funder number |
---|---|
ITU BAP | |
Bilimsel Araştırma Projeleri Birimi, İstanbul Teknik Üniversitesi | MDK‐2018‐41532 |
Keywords
- Logic circuits
- Logic elements
- Neural net devices
- Neural nets (circuit implementations)