Acceleration of Convolutional Networks Using Nanoscale Memristive Devices

  • Conference paper
  • In: Engineering Applications of Neural Networks (EANN 2018)

Abstract

We discuss a convolutional neural network for handwritten digit classification and its hardware acceleration as an inference engine using nanoscale memristive devices in the spike domain. We study the impact of device programming variability on the inference accuracy of the spiking neural network (SNN) and benchmark its performance against an equivalent artificial neural network (ANN). We demonstrate optimization strategies for implementing these networks using memristive devices with an on-off ratio as low as 10 and only 32 levels of resolution. Further, accuracies close to baseline can be maintained even when such memristive devices are used to duplicate the pre-determined kernel weights, enabling parallel execution of the convolution operation.
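For intuition, here is a minimal, self-contained sketch (not from the paper) of the weight-mapping constraint the abstract describes: trained kernel weights are mapped onto differential pairs of memristive conductances confined to an on-off ratio of 10, quantized to 32 levels, and perturbed by an assumed Gaussian programming-variability model. The differential-pair mapping, the noise magnitude, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Device constraints taken from the abstract; everything else
# (differential-pair mapping, Gaussian write noise, SIGMA) is assumed.
ON_OFF_RATIO = 10   # g_max / g_min as low as 10
N_LEVELS = 32       # 32 programmable conductance levels
SIGMA = 0.02        # assumed programming-variability std. dev. (fraction of g_max)

rng = np.random.default_rng(0)

def weights_to_conductances(w, g_max=1.0):
    """Map signed weights onto a differential pair (g_pos, g_neg), each
    confined to [g_min, g_max] with g_max / g_min = ON_OFF_RATIO."""
    g_min = g_max / ON_OFF_RATIO
    scale = (g_max - g_min) / np.abs(w).max()       # fit weights into the window
    g_pos = g_min + scale * np.clip(w, 0.0, None)   # positive part of w
    g_neg = g_min + scale * np.clip(-w, 0.0, None)  # negative part of w
    return g_pos, g_neg, scale

def program(g, g_max=1.0):
    """Quantize to N_LEVELS levels in [g_min, g_max], then add write noise."""
    g_min = g_max / ON_OFF_RATIO
    step = (g_max - g_min) / (N_LEVELS - 1)
    g_q = g_min + np.round((g - g_min) / step) * step    # 32-level quantization
    g_q += rng.normal(0.0, SIGMA * g_max, size=g.shape)  # programming variability
    return np.clip(g_q, g_min, g_max)

# Program one 5x5 convolution kernel and read back the effective weights.
w = rng.standard_normal((5, 5)) * 0.1
g_pos, g_neg, scale = weights_to_conductances(w)
w_eff = (program(g_pos) - program(g_neg)) / scale  # differential read-out
print("max weight error:", np.abs(w_eff - w).max())
```

In this picture, duplicating a kernel across crossbar columns to parallelize the convolution amounts to programming several independently noisy copies of (g_pos, g_neg), which is why the variability study matters for the duplicated-kernel scheme.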



Acknowledgments

This research was supported in part by the CAMPUSENSE project grant from CISCO Systems Inc., the Semiconductor Research Corporation (2016-SD-2717), and National Science Foundation grant 1710009.

Author information

Corresponding author

Correspondence to Bipin Rajendran.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Kulkarni, S.R., Babu, A.V., Rajendran, B. (2018). Acceleration of Convolutional Networks Using Nanoscale Memristive Devices. In: Pimenidis, E., Jayne, C. (eds) Engineering Applications of Neural Networks. EANN 2018. Communications in Computer and Information Science, vol 893. Springer, Cham. https://doi.org/10.1007/978-3-319-98204-5_20


  • DOI: https://doi.org/10.1007/978-3-319-98204-5_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-98203-8

  • Online ISBN: 978-3-319-98204-5

  • eBook Packages: Computer Science, Computer Science (R0)
