CRC: Compressed Reservoir Computing on FPGA via Joint HSIC LASSO-based Pruning and Quantization

Authors

  • Atousa Jafari, Paderborn University
  • Hassan Ghasemzadeh Mohammadi, Reneo Group GmbH
  • Marco Platzner, Paderborn University

DOI:

https://doi.org/10.64552/wipiec.v11i1.99

Keywords:

Dataflow accelerator, Echo state network, Pruning, Quantization, Time-series application

Abstract

While reservoir computing (RC) networks offer advantages over traditional recurrent neural networks in terms of training time and operational cost for time-series applications, deploying them on edge devices still presents significant challenges due to resource constraints. Network compression, i.e., pruning and quantization, is thus of utmost importance. We propose a Compressed Reservoir Computing (CRC) framework that integrates advanced pruning and quantization techniques to optimize throughput, latency, energy efficiency, and resource utilization for FPGA-based RC accelerators.
We describe the framework with a focus on HSIC LASSO as a novel pruning method that can capture non-linear dependencies between neurons. We validate the framework on time-series classification and regression tasks, for which we generate FPGA accelerators. The accelerators achieve a very high throughput of up to 188 Megasamples/s at a latency of 5.32 ns, while reducing resource utilization by 12× and energy consumption by 10× compared to a baseline hardware implementation, without compromising accuracy.
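The pruning idea described in the abstract, i.e., scoring reservoir neurons by their (possibly non-linear) relevance to the target via HSIC LASSO, can be sketched as follows. This is a minimal NumPy illustration under our own assumptions (RBF kernel with fixed width, unit-Frobenius kernel normalization, and a simple ISTA solver for the non-negative lasso); it is not the paper's actual implementation.

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    """Gaussian (RBF) Gram matrix for one scalar variable."""
    d = x[:, None] - x[None, :]
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

def centered_unit(K):
    """Center a Gram matrix and scale it to unit Frobenius norm."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H
    return Kc / np.linalg.norm(Kc)

def hsic_lasso_scores(X, y, lam=0.05, iters=2000):
    """Per-neuron importance via HSIC LASSO: regress the (vectorized)
    output kernel on the per-neuron input kernels with a non-negative
    lasso, solved here with plain ISTA. Zero scores mark prunable neurons.
    X: (n_samples, n_neurons) reservoir states, y: (n_samples,) target."""
    n, p = X.shape
    L = centered_unit(rbf_gram(y)).ravel()
    Ks = np.column_stack([centered_unit(rbf_gram(X[:, j])).ravel()
                          for j in range(p)])
    G, b = Ks.T @ Ks, Ks.T @ L
    alpha = np.zeros(p)
    step = 1.0 / np.linalg.norm(G, 2)   # Lipschitz step size for ISTA
    for _ in range(iters):
        # proximal gradient step with non-negativity + L1 penalty
        alpha = np.maximum(0.0, alpha - step * (G @ alpha - b + lam))
    return alpha

# toy check: neuron 0 drives the target non-linearly, neuron 1 is noise
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 2))
y = np.sin(2.0 * X[:, 0])
scores = hsic_lasso_scores(X, y)
```

Because the kernels capture non-linear structure, the sine-driven neuron receives a higher score than the noise neuron even though the linear correlation between `X[:, 0]` and `y` is weak; thresholding the scores then yields the pruning mask.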

References

G. Tanaka et al., “Recent advances in physical reservoir computing: A review,” Neural Networks, vol. 115, pp. 100–123, 2019. DOI: https://doi.org/10.1016/j.neunet.2019.03.005

C. Lin et al., “FPGA-based reservoir computing with optimized reservoir node architecture,” in 2022 23rd International Symposium on Quality Electronic Design (ISQED). IEEE, 2022, pp. 1–6. DOI: https://doi.org/10.1109/ISQED54688.2022.9806247

X. Zhang et al., “AppQ-CNN: An adaptive CNNs inference accelerator for synergistically exploiting pruning and quantization based on FPGA,” IEEE Transactions on Sustainable Computing, 2024. DOI: https://doi.org/10.1109/TSUSC.2024.3382157

H. Wang et al., “Optimizing the echo state network based on mutual information for modeling fed-batch bioprocesses,” Neurocomputing, vol. 225, pp. 111–118, 2017. DOI: https://doi.org/10.1016/j.neucom.2016.11.007

D. Li et al., “Structure optimization for echo state network based on contribution,” Tsinghua Science and Technology, vol. 24, no. 1, pp. 97–105, 2018. DOI: https://doi.org/10.26599/TST.2018.9010049

J. Huang et al., “Semi-supervised echo state network with partial correlation pruning for time-series variables prediction in industrial processes,” Measurement Science and Technology, vol. 34, no. 9, p. 095106, 2023. DOI: https://doi.org/10.1088/1361-6501/acd8dc

S. Liu et al., “Quantized reservoir computing on edge devices for communication applications,” in 2020 IEEE/ACM Symposium on Edge Computing (SEC), 2020, pp. 445–449. DOI: https://doi.org/10.1109/SEC50012.2020.00068

Y. Abe et al., “SPCTRE: sparsity-constrained fully-digital reservoir computing architecture on FPGA,” International Journal of Parallel, Emergent and Distributed Systems, vol. 39, no. 2, pp. 197–213, 2024. DOI: https://doi.org/10.1080/17445760.2024.2310576

A. Jafari et al., “Ultra-low latency and extreme-throughput echo state neural networks on fpga,” in Applied Reconfigurable Computing. Architectures, Tools, and Applications. Springer Nature Switzerland, 2025, pp. 179–195. DOI: https://doi.org/10.1007/978-3-031-87995-1_11

N. Trouvain et al., “ReservoirPy: an efficient and user-friendly library to design echo state networks,” in International Conference on Artificial Neural Networks. Springer, 2020, pp. 494–505. DOI: https://doi.org/10.1007/978-3-030-61616-8_40

A. Pappalardo, “Xilinx/brevitas,” 2023. [Online]. Available: https://doi.org/10.5281/zenodo.3333552

Z. Huang et al., “Rethinking the pruning criteria for convolutional neural network,” Advances in Neural Information Processing Systems, vol. 34, pp. 16305–16318, 2021.

H. Ghasemzadeh Mohammadi et al., “Efficient statistical parameter selection for nonlinear modeling of process/performance variation,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 35, no. 12, pp. 1995–2007, 2016. DOI: https://doi.org/10.1109/TCAD.2016.2547908

M. Yamada et al., “High-dimensional feature selection by feature-wise kernelized lasso,” Neural Computation, vol. 26, no. 1, pp. 185–207, 2014. DOI: https://doi.org/10.1162/NECO_a_00537

Y. Umuroglu et al., “Streamlined deployment for quantized neural networks,” arXiv preprint, 2018. [Online]. Available: https://arxiv.org/abs/1709.04060

Y. Umuroglu et al., “LogicNets: Co-designed neural networks and circuits for extreme-throughput applications,” in 2020 30th International Conference on Field-Programmable Logic and Applications (FPL), 2020, pp. 291–297. DOI: https://doi.org/10.1109/FPL50879.2020.00055

A. H. Hadipour et al., “A two-stage approximation methodology for efficient DNN hardware implementation,” in 2025 IEEE 28th International Symposium on Design and Diagnostics of Electronic Circuits and Systems (DDECS), 2025, pp. 119–122. DOI: https://doi.org/10.1109/DDECS63720.2025.11006769

Published

2025-09-02

How to Cite

Jafari, A., Ghasemzadeh Mohammadi, H., & Platzner, M. (2025). CRC: Compressed Reservoir Computing on FPGA via Joint HSIC LASSO-based Pruning and Quantization. WiPiEC Journal - Works in Progress in Embedded Computing Journal, 11(1), 4. https://doi.org/10.64552/wipiec.v11i1.99