
Salp Swarm Algorithm (SSA) for Training Feed-Forward Neural Networks

  • Conference paper
Soft Computing for Problem Solving

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 816))

Abstract

Artificial neural networks (ANNs) have produced strong results in statistics and computer science applications. The feed-forward neural network (FNN) is the simplest and most popular neural network architecture capable of modelling nonlinear relationships. This paper proposes determining the weights and biases of feed-forward neural networks using the recently introduced Salp Swarm Algorithm (SSA), a swarm-based metaheuristic inspired by the navigating and foraging behaviour of salp swarms. Performance is evaluated on several benchmark datasets and compared with well-known metaheuristics.
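The training scheme summarised in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the network size (2-4-1), the XOR task, and all parameter values (population size, iteration count, bounds) are assumptions chosen for a self-contained example. Each salp encodes one flattened weight-and-bias vector, and the fitness is the network's mean squared error; the leader/follower updates follow the SSA formulation of Mirjalili et al.

```python
# Illustrative SSA-based FNN training (all names and settings are
# assumptions for this sketch, not details taken from the paper).
import numpy as np

rng = np.random.default_rng(0)

def fnn_mse(vec, X, y):
    """Unpack a flat 17-parameter vector into a 2-4-1 network; return MSE."""
    W1 = vec[:8].reshape(2, 4); b1 = vec[8:12]
    W2 = vec[12:16].reshape(4, 1); b2 = vec[16]
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output
    return float(np.mean((out.ravel() - y) ** 2))

def ssa_train(X, y, dim=17, pop=30, iters=200, lb=-5.0, ub=5.0):
    salps = rng.uniform(lb, ub, (pop, dim))
    fits = np.array([fnn_mse(s, X, y) for s in salps])
    food = salps[fits.argmin()].copy()        # best solution found so far
    best = float(fits.min())
    for l in range(1, iters + 1):
        # c1 decays over iterations, shifting exploration to exploitation
        c1 = 2 * np.exp(-(4 * l / iters) ** 2)
        for i in range(pop):
            if i < pop // 2:                  # leader salps move around the food
                c2 = rng.uniform(size=dim)
                c3 = rng.uniform(size=dim)
                step = c1 * ((ub - lb) * c2 + lb)
                salps[i] = np.where(c3 < 0.5, food + step, food - step)
            else:                             # followers: midpoint of the chain
                salps[i] = (salps[i] + salps[i - 1]) / 2
        np.clip(salps, lb, ub, out=salps)
        fits = np.array([fnn_mse(s, X, y) for s in salps])
        if fits.min() < best:
            best = float(fits.min())
            food = salps[fits.argmin()].copy()
    return food, best

# Toy task: fit XOR, a classic nonlinear problem for FNNs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
weights, mse = ssa_train(X, y)
print(f"best MSE on XOR: {mse:.4f}")
```

Because the fitness landscape of FNN weights is multimodal, a population-based search like this avoids the gradient computation of backpropagation entirely; only forward passes are needed, which is the usual motivation for metaheuristic trainers.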



Author information

Corresponding author

Correspondence to Divya Bairathi.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Bairathi, D., Gopalani, D. (2019). Salp Swarm Algorithm (SSA) for Training Feed-Forward Neural Networks. In: Bansal, J., Das, K., Nagar, A., Deep, K., Ojha, A. (eds) Soft Computing for Problem Solving. Advances in Intelligent Systems and Computing, vol 816. Springer, Singapore. https://doi.org/10.1007/978-981-13-1592-3_41
