Peer-Reviewed

Time-Reduced Model for Multilayer Spiking Neural Networks

Received: 15 January 2023     Accepted: 3 February 2023     Published: 16 February 2023
Abstract

Spiking neural networks (SNNs) are a type of biological neural network model that is more biologically plausible and computationally powerful than traditional artificial neural networks (ANNs). SNNs can achieve the same goals as ANNs and can be built into large-scale network structures (i.e., deep spiking neural networks) to accomplish complex tasks. However, training a deep spiking neural network is difficult due to the non-differentiable nature of spike events, and it requires considerable computation time during the training period. In this paper, a time-reduced model adopting two methods is presented for reducing the computation time of a deep spiking neural network: approximating the spike response function by a piecewise linear method, and choosing a suitable number of sub-synapses. The experimental results show that both the piecewise linear approximation and the rule for choosing the number of sub-synapses are effective; together they not only reduce the training time but also simplify the network structure. With the piecewise linear approximation method, at least half of the computation time of the original model can be eliminated. With the rule for choosing the number of sub-synapses, the computation time can be reduced to less than one-tenth of that of the original model for the XOR and Iris tasks.
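The first method in the abstract, replacing the exponential spike response function with a piecewise linear lookup, can be sketched briefly. The kernel form and all parameters below (the standard SRM kernel ε(t) = (t/τ)·e^(1−t/τ), the value of τ, and the number of segments) are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

TAU = 7.0  # assumed membrane time constant (ms); illustrative only

def spike_response(t, tau=TAU):
    """Assumed SRM kernel eps(t) = (t/tau) * exp(1 - t/tau), zero for t < 0."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, (t / tau) * np.exp(1.0 - t / tau), 0.0)

# Precompute breakpoints once; evaluation then needs no exp() calls,
# which is where the training-time saving would come from.
KNOTS_T = np.linspace(0.0, 10 * TAU, 33)   # 32 linear segments
KNOTS_Y = spike_response(KNOTS_T)

def spike_response_pwl(t):
    """Piecewise linear approximation via table lookup and interpolation."""
    return np.interp(t, KNOTS_T, KNOTS_Y)  # clamps to end values outside range

# The approximation stays close to the exact kernel over the active window.
t_grid = np.linspace(0.0, 10 * TAU, 1000)
max_err = np.max(np.abs(spike_response(t_grid) - spike_response_pwl(t_grid)))
```

With 32 segments the worst-case error on this grid stays below 0.1 of the kernel's unit peak; in practice the segment count trades accuracy against lookup-table size.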

Published in International Journal of Systems Engineering (Volume 7, Issue 1)
DOI 10.11648/j.ijse.20230701.11
Page(s) 1-8
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2023. Published by Science Publishing Group

Keywords

Spiking Neural Network, Computation Time, Linear Approximation, Sub-Synapses

References
[1] J. Wu, Y. Chua, and H. Li, “A biologically plausible speech recognition framework based on spiking neural networks,” in 2018 International Joint Conference on Neural Networks (IJCNN), 2018, pp. 1–8.
[2] J. Liu and G. Zhao, “A bio-inspired SOSNN model for object recognition,” in 2018 International Joint Conference on Neural Networks (IJCNN), 2018, pp. 1–8.
[3] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury, “Deep neural networks for acoustic modeling in speech recognition,” IEEE Signal Process. Mag., vol. 29, no. 6, pp. 82–97, 2012.
[4] F. Karim, S. Majumdar, H. Darabi, and S. Chen, “LSTM fully convolutional networks for time series classification,” IEEE Access, vol. 6, pp. 1662–1669, 2018.
[5] S. Min, B. Lee, and S. Yoon, “Deep learning in bioinformatics,” Brief. Bioinform., vol. 18, no. 5, pp. 851–869, 2017.
[6] W. Maass, “Networks of spiking neurons: The third generation of neural network models,” Neural Networks, vol. 10, no. 9, pp. 1659–1671, 1997.
[7] J. P. Dominguez-Morales, Q. Liu, R. James, D. Gutierrez-Galan, A. Jimenez-Fernandez, S. Davidson, and S. Furber, “Deep spiking neural network model for time-variant signals classification: a real-time speech recognition approach,” in 2018 International Joint Conference on Neural Networks (IJCNN), 2018, pp. 1–8.
[8] Y. Lecun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
[9] M. Courbariaux, Y. Bengio, and J.-P. David, “BinaryConnect: Training deep neural networks with binary weights during propagations,” arXiv preprint arXiv:1511.00363, pp. 1–9, 2016.
[10] P. Wang and J. Cheng, “Fixed-point factorized networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 3966–3974.
[11] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen, “Compressing neural networks with the hashing trick,” in International Conference on International Conference on Machine Learning, 2015, vol. 37, pp. 2285–2294.
[12] O. Booij and H. Tat Nguyen, “A gradient descent rule for spiking neurons emitting multiple spikes,” Inf. Process. Lett., vol. 95, no. 6, pp. 552–558, 2005.
[13] Y. C. Yoon, “LIF and simplified SRM neurons encode signals into spikes via a form of asynchronous pulse sigma-delta modulation,” IEEE Trans. Neural Networks Learn. Syst., vol. 28, no. 5, pp. 1192–1205, 2017.
[14] A. Tavanaei, M. Ghodrati, S. R. Kheradpisheh, T. Masquelier, and A. Maida, “Deep learning in spiking neural networks,” arXiv preprint arXiv:1804.08150, pp. 1–18, 2018.
[15] M. Zhang, J. Li, Y. Wang, and G. Gao, “R-tempotron: A robust tempotron learning rule for spike timing-based decisions,” in International Computer Conference on Wavelet Active Media Technology and Information Processing, 2016, pp. 139–142.
[16] I. Sporea and A. Gruning, “Supervised learning in multilayer spiking neural networks,” Neural Comput., vol. 25, no. 2, pp. 473–509, 2013.
[17] N. Soltani and A. J. Goldsmith, “Directed information between connected leaky integrate-and-fire neurons,” IEEE Trans. Inf. Theory, vol. 63, no. 9, pp. 5954–5967, 2017.
[18] Y. Xu, X. Zeng, L. Han, and J. Yang, “A supervised multi-spike learning algorithm based on gradient descent for spiking neural networks,” Neural Networks, vol. 43, no. 4, pp. 99–113, 2013.
[19] Q. Yu, H. Tang, K. C. Tan, and H. Yu, “A brain-inspired spiking neural network model with temporal encoding and learning,” Neurocomputing, vol. 138, no. 11, pp. 3–13, 2014.
[20] Y. Luo, Q. Fu, J. Liu, J. Harkin, L. McDaid, and Y. Cao, “An extended algorithm using an adaptation of momentum and learning rate for spiking neurons emitting multiple spikes,” Int. Work. Artif. Neural Networks, pp. 569–579, 2017.
[21] Q. Kang, B. Huang, and M. Zhou, “Dynamic behavior of artificial Hodgkin-Huxley neuron model subject to additive noise,” IEEE Trans. Cybern., vol. 46, no. 9, pp. 2083–2093, 2016.
Cite This Article
  • APA Style

    Yanjing Li. (2023). Time-Reduced Model for Multilayer Spiking Neural Networks. International Journal of Systems Engineering, 7(1), 1-8. https://doi.org/10.11648/j.ijse.20230701.11


    ACS Style

    Yanjing Li. Time-Reduced Model for Multilayer Spiking Neural Networks. Int. J. Syst. Eng. 2023, 7(1), 1-8. doi: 10.11648/j.ijse.20230701.11


    AMA Style

    Yanjing Li. Time-Reduced Model for Multilayer Spiking Neural Networks. Int J Syst Eng. 2023;7(1):1-8. doi: 10.11648/j.ijse.20230701.11


  • @article{10.11648/j.ijse.20230701.11,
      author = {Yanjing Li},
      title = {Time-Reduced Model for Multilayer Spiking Neural Networks},
      journal = {International Journal of Systems Engineering},
      volume = {7},
      number = {1},
      pages = {1-8},
      doi = {10.11648/j.ijse.20230701.11},
      url = {https://doi.org/10.11648/j.ijse.20230701.11},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ijse.20230701.11},
      abstract = {Spiking neural networks (SNNs) are a type of biological neural network model that is more biologically plausible and computationally powerful than traditional artificial neural networks (ANNs). SNNs can achieve the same goals as ANNs and can be built into large-scale network structures (i.e., deep spiking neural networks) to accomplish complex tasks. However, training a deep spiking neural network is difficult due to the non-differentiable nature of spike events, and it requires considerable computation time during the training period. In this paper, a time-reduced model adopting two methods is presented for reducing the computation time of a deep spiking neural network: approximating the spike response function by a piecewise linear method, and choosing a suitable number of sub-synapses. The experimental results show that both the piecewise linear approximation and the rule for choosing the number of sub-synapses are effective; together they not only reduce the training time but also simplify the network structure. With the piecewise linear approximation method, at least half of the computation time of the original model can be eliminated. With the rule for choosing the number of sub-synapses, the computation time can be reduced to less than one-tenth of that of the original model for the XOR and Iris tasks.},
     year = {2023}
    }
    


  • TY  - JOUR
    T1  - Time-Reduced Model for Multilayer Spiking Neural Networks
    AU  - Yanjing Li
    Y1  - 2023/02/16
    PY  - 2023
    N1  - https://doi.org/10.11648/j.ijse.20230701.11
    DO  - 10.11648/j.ijse.20230701.11
    T2  - International Journal of Systems Engineering
    JF  - International Journal of Systems Engineering
    JO  - International Journal of Systems Engineering
    SP  - 1
    EP  - 8
    PB  - Science Publishing Group
    SN  - 2640-4230
    UR  - https://doi.org/10.11648/j.ijse.20230701.11
    AB  - Spiking neural networks (SNNs) are a type of biological neural network model that is more biologically plausible and computationally powerful than traditional artificial neural networks (ANNs). SNNs can achieve the same goals as ANNs and can be built into large-scale network structures (i.e., deep spiking neural networks) to accomplish complex tasks. However, training a deep spiking neural network is difficult due to the non-differentiable nature of spike events, and it requires considerable computation time during the training period. In this paper, a time-reduced model adopting two methods is presented for reducing the computation time of a deep spiking neural network: approximating the spike response function by a piecewise linear method, and choosing a suitable number of sub-synapses. The experimental results show that both the piecewise linear approximation and the rule for choosing the number of sub-synapses are effective; together they not only reduce the training time but also simplify the network structure. With the piecewise linear approximation method, at least half of the computation time of the original model can be eliminated. With the rule for choosing the number of sub-synapses, the computation time can be reduced to less than one-tenth of that of the original model for the XOR and Iris tasks.
    VL  - 7
    IS  - 1
    ER  - 


Author Information
  • Institute of Education Science Research, Heilongjiang University, Harbin, China
