Learning Algorithms for Process Neural Networks

  • Chapter
Process Neural Networks

Part of the book series: Advanced Topics in Science and Technology in China ((ATSTC))


Abstract

There are already many mature learning algorithms for training traditional neural networks, e.g., the back-propagation (BP) algorithm [1], particle swarm optimization (PSO) [2], the genetic algorithm (GA) [3], the hybrid GA-PSO algorithm [4], and the quantum genetic (QG) algorithm [5]. Among these, the most widely applied and effective is the error back-propagation algorithm based on gradient descent, together with its various improved forms. Training a process neural network is more involved: the inputs and the connection weights of the network can be time-varying functions; a process neuron combines spatial aggregation operators with temporal accumulation operators; and the network may contain different types of neurons with different operation rules, i.e., each neuron processes its input information according to its own algorithm. All of this makes the mapping mechanism and learning course of a process neural network quite different from those of a traditional neural network. Furthermore, because the form and the parameter positions of the connection weight functions are arbitrary, it is difficult to determine these complex parameters by training on practical samples unless the weight functions are restricted in advance to some prescribed function class. In mathematical terms, a continuous function space admits a variety of basis function systems, so that, under certain conditions, the functions in the space can be expressed to a given degree of precision as finite expansions in the basis functions.
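The finite basis expansion mentioned above can be illustrated with a minimal numerical sketch. The choice of a shifted Legendre basis here is an assumption for illustration (the chapter's approach applies to any suitable orthogonal basis), and the function names `legendre_basis` and `expand` are hypothetical: a time-varying input x(t) on [0, 1] is projected onto the first few basis functions, after which the process neuron's temporal integral reduces to a finite weighted sum over the expansion coefficients.

```python
import numpy as np

def legendre_basis(n_terms, t):
    """Evaluate the first n_terms shifted Legendre polynomials on [0, 1]."""
    # Map t in [0, 1] to u in [-1, 1], where Legendre polynomials are orthogonal.
    u = 2.0 * t - 1.0
    basis = [np.ones_like(u), u]
    for k in range(1, n_terms - 1):
        # Bonnet's recursion: (k+1) P_{k+1}(u) = (2k+1) u P_k(u) - k P_{k-1}(u)
        basis.append(((2 * k + 1) * u * basis[k] - k * basis[k - 1]) / (k + 1))
    return np.array(basis[:n_terms])

def expand(x, n_terms, n_grid=1001):
    """Least-squares coefficients of x(t) in the truncated basis."""
    t = np.linspace(0.0, 1.0, n_grid)
    B = legendre_basis(n_terms, t)               # shape (n_terms, n_grid)
    coeffs, *_ = np.linalg.lstsq(B.T, x(t), rcond=None)
    return coeffs

# Example: x(t) = t^2 lies exactly in the span of the first three terms,
# so the truncated expansion reconstructs it to machine precision.
c = expand(lambda t: t ** 2, n_terms=3)
t = np.linspace(0.0, 1.0, 1001)
recon = c @ legendre_basis(3, t)
print(np.max(np.abs(recon - t ** 2)))
```

Once both the input functions and the weight functions are represented by such coefficient vectors, the functional operations of the process neuron become ordinary finite-dimensional arithmetic, which is what makes gradient-based training tractable.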


References

  1. Cheng H.L., Soon C.P. (2009) An efficient document classification model using an improved back propagation neural network and singular value decomposition. Expert Systems with Applications 36(2):3208–3215
  2. Meissner M., Schmuker M., Schneider G. (2006) Optimized Particle Swarm Optimization (OPSO) and its application to artificial neural network training. BMC Bioinformatics 7:125–131
  3. Wang L. (2005) A hybrid genetic algorithm–neural network strategy for simulation optimization. Applied Mathematics and Computation 170(2):1329–1343
  4. Du S.Q., Li W.S., Cao K. (2006) A learning algorithm of artificial neural network based on GA-PSO. In: The Sixth World Congress on Intelligent Control and Automation 1:3633–3637
  5. Xu Z.F., Wang H.W., Wu G.S. (2007) Converse solution of oil recovery ratio based on process neural network and quantum genetic algorithm. Journal of China University of Petroleum: Edition of Natural Science 31(6):120–126 (in Chinese)
  6. Estatico C. (2004) A two-steps inexact Newton method for atmospheric remote sensing. In: 2004 IEEE International Workshop on Imaging Systems and Techniques, pp.66–70
  7. Pan S.T., Chen S.C., Chiu S.H. (2003) A new learning algorithm of neural network for identification of chaotic systems. In: IEEE International Conference on Systems, Man and Cybernetics 2:1316–1321
  8. Battiti R. (1992) First and second order methods for learning: between steepest descent and Newton's method. Neural Computation 4(2):141–166
  9. Xu S.H., He X.G. (2004) Learning algorithms of process neural networks based on orthogonal function basis expansion. Chinese Journal of Computers 27(5):645–649 (in Chinese)
  10. Ji H., Xia S.P., Yu W.X. (2001) An outline of the Fast Fourier Transform algorithm. Modern Electronic Technique (8):11–14 (in Chinese)
  11. Wang N.C. (1996) Algorithmic Design of Synchronic and Parallel. Science Press, Beijing (in Chinese)
  12. Schoenberg I.J. (1946) Contributions to the problem of approximation of equidistant data by analytic functions. Quarterly of Applied Mathematics 4:45–99, 112–141
  13. Li P.C., Xu S.H. (2005) Training of procedure neural network based on spline function. Computer Engineering and Design 26(4):1081–1087 (in Chinese)
  14. Xu H.K. (2002) Iterative algorithms for nonlinear operators. Journal of the London Mathematical Society 66(1):240–256
  15. He X.G. (1966) Theoretical problem of rational square approximation. Communication on Applied Mathematics and Computation 3(1):31–49 (in Chinese)
  16. He X.G. (1966) Computing method of rational square approximation. Communication on Applied Mathematics and Computation 3(2):90–107 (in Chinese)
  17. He X.G. (1965) The best approximation by segments. Communication on Applied Mathematics and Computation 2(1):21–38 (in Chinese)
  18. He X.G. (1979) Some iterative algorithms of the best approximation by segments and their convergence. Mathematica Numerica Sinica 1(3):244–256 (in Chinese)


Copyright information

© 2009 Zhejiang University Press, Hangzhou and Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

(2009). Learning Algorithms for Process Neural Networks. In: Process Neural Networks. Advanced Topics in Science and Technology in China. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-73762-9_5


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-73761-2

  • Online ISBN: 978-3-540-73762-9

