Abstract
Stochastic computing (SC) is an emerging paradigm that offers hardware-efficient solutions for developing low-cost and noise-robust architectures. In SC, deterministic logic systems are employed along with bit-stream sources to process scalar values. However, using long bit-streams introduces challenges, such as increased latency and significant energy consumption. To address these issues, we present an optimization-oriented approach for modeling and sizing new logic gates that minimizes latency. The optimization process is automated through hardware-software cooperation that integrates the Cadence and MATLAB environments. First, we optimize the circuit topology by tuning the design parameters of basic two-input logic gates, using a multiobjective approach based on a deep neural network. We then employ the proposed gates to demonstrate favorable solutions for SC-based operations.
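For readers unfamiliar with SC, the sketch below illustrates the basic encoding the abstract refers to: a scalar in [0, 1] is represented as a bit-stream whose fraction of 1s equals the value, and a single AND gate multiplies two independent streams. This is a generic, minimal illustration of unipolar SC, not the letter's optimized gate designs; the function names and the stream length `n` are hypothetical choices for this example. It also makes the latency trade-off concrete: precision grows only with stream length, which is exactly the cost the letter targets.

```python
import random

def to_bitstream(x, n):
    """Encode scalar x in [0, 1] as a unipolar stochastic bit-stream
    of length n: each bit is 1 with probability x."""
    return [1 if random.random() < x else 0 for _ in range(n)]

def from_bitstream(bits):
    """Decode a bit-stream back to a scalar: the fraction of 1s."""
    return sum(bits) / len(bits)

# A single AND gate multiplies two independent unipolar streams,
# since P(a AND b) = P(a) * P(b) for independent bits.
n = 1024  # stream length: longer streams -> more precision, more latency
a = to_bitstream(0.5, n)
b = to_bitstream(0.3, n)
product = [p & q for p, q in zip(a, b)]
print(from_bitstream(product))  # ~0.15; stochastic error shrinks as n grows
```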
Original language | English |
---|---|
Pages (from-to) | 190-193 |
Number of pages | 4 |
Journal | IEEE Embedded Systems Letters |
Volume | 15 |
Issue number | 4 |
DOIs | |
Publication status | Published - 1 Dec 2023 |
Bibliographical note
Publisher Copyright: © 2009-2012 IEEE.
Funding
This work was supported in part by the National Science Foundation (NSF) under Grant 2019511 and in part by the Louisiana Board of Regents Support Fund under Grant LEQSF(2020-23)-RD-A-26.
Funders | Funder number |
---|---|
Louisiana Board of Regents Support Fund | LEQSF(2020-23)-RD-A-26 |
National Science Foundation | 2019511 |
Keywords
- Analog optimization
- co-processing
- latency reduction
- stochastic computing (SC)