Abstract
Many mature learning algorithms already exist for training traditional neural networks, e.g., the Back Propagation (BP) algorithm [1], the Particle Swarm Optimization (PSO) algorithm [2], the Genetic Algorithm (GA) [3], the hybrid GA-PSO algorithm [4], and the Quantum Genetic (QG) algorithm [5]. Among these, the most widely applied and effective is the error back propagation algorithm based on gradient descent, together with its various improved forms. Training a process neural network is more involved: the inputs and the connection weights of the network can be time-varying functions, the process neuron combines spatial aggregation operators with temporal accumulation operators, and the network can contain different types of neurons with different operation rules, i.e., each neuron processes its input information according to its own algorithm. All of these make the mapping mechanism and learning course of a process neural network quite different from those of a traditional neural network. Furthermore, because the form and parameter positions of the network's connection weight functions are arbitrary, it is difficult to determine these complex parameters by training on practical samples unless the weight functions are restricted, or set in advance to belong to some function class. Mathematically, a continuous function space admits a variety of basis function systems, so that under certain conditions any function in the space can be expressed, to a given degree of precision, as a finite expansion in the basis functions.
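As a minimal sketch (not code from the chapter itself): the abstract's closing remark, that functions in a continuous function space can be expressed to a given precision as a finite expansion in a basis function system, can be illustrated by fitting a hypothetical time-varying weight function with a truncated Legendre-polynomial expansion. The function w(t) below is an assumed example, not one from the text.

```python
import numpy as np

# Hypothetical time-varying connection weight function w(t), sampled
# on [-1, 1]; the specific form is an assumption for illustration only.
t = np.linspace(-1.0, 1.0, 200)
w = np.sin(np.pi * t) + 0.3 * t ** 2

# Least-squares fit of the coefficients of the first 8 Legendre basis
# functions P_0, ..., P_7 -- a "finite item expansion" in that basis.
coeffs = np.polynomial.legendre.legfit(t, w, deg=7)

# Reconstruct the finite expansion and measure its precision on the grid.
w_hat = np.polynomial.legendre.legval(t, coeffs)
max_err = float(np.max(np.abs(w - w_hat)))
assert max_err < 1e-2  # the truncated expansion already tracks w closely
```

Replacing an unknown weight function by such a finite expansion reduces the learning problem to determining a finite set of expansion coefficients, which is the idea the chapter builds on.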
References
Cheng H.L., Soon C.P. (2009) An efficient document classification model using an improved back propagation neural network and singular value decomposition. Expert Systems with Applications 36(2):3208–3215
Meissner M., Schmuker M., Schneider G. (2006) Optimized Particle Swarm Optimization (OPSO) and its application to artificial neural network training. BMC Bioinformatics 7:125–131
Wang L. (2005) A hybrid genetic algorithm-neural network strategy for simulation optimization. Applied Mathematics and Computation 170(2):1329–1343
Du S.Q., Li W.S., Cao K. (2006) A learning algorithm of artificial neural network based on GA-PSO. In: The Sixth World Congress on Intelligent Control and Automation 1:3633–3637
Xu Z.F., Wang H.W., Wu G.S. (2007) Converse solution of oil recovery ratio based on process neural network and quantum genetic algorithm. Journal of China University of Petroleum: Edition of Natural Science 31(6): 120–126 (in Chinese)
Estatico C. (2004) A two-steps inexact Newton method for atmospheric remote sensing. In: 2004 IEEE International Workshop on Imaging Systems and Techniques, pp. 66–70
Pan S.T., Chen S.C., Chiu S.H. (2003) A new learning algorithm of neural network for identification of chaotic systems. In: IEEE International Conference on Systems, Man and Cybernetics 2:1316–1321
Battiti R. (1992) First and second order methods for learning: between steepest descent and Newton’s method. Neural Computation 4(2): 141–166
Xu S.H., He X.G. (2004) Learning algorithms of process neural networks based on orthogonal function basis expansion. Chinese Journal of Computers 27(5):645–649 (in Chinese)
Ji H., Xia S.P., Yu W.X. (2001) An outline of the Fast Fourier Transform Algorithm. Modern Electronic Technique (8): 11–14 (in Chinese)
Wang N.C. (1996) Algorithmic Design of Synchronic and Parallel. Science Press, Beijing (in Chinese)
Schoenberg I.J. (1946) Contributions to the problem of approximation of equidistant data by analytic functions. Quarterly of Applied Mathematics 4:45–99, 112–141
Li P.C., Xu S.H. (2005) Training of procedure neural network based on spline function. Computer Engineering and Design 26(4): 1081–1087 (in Chinese)
Xu H.K. (2002) Iterative algorithms for nonlinear operators. Journal of the London Mathematical Society 66(1):240–256
He X.G. (1966) Theoretical problem of rational square approximation. Communication on Applied Mathematics and Computation 3(1):31–49 (in Chinese)
He X.G. (1966) Computing method of rational square approximation. Communication on Applied Mathematics and Computation 3(2):90–107 (in Chinese)
He X.G. (1965) The best approximation by segments. Communication on Applied Mathematics and Computation 2(1):21–38 (in Chinese)
He X.G. (1979) Some iterative algorithms of the best approximation by segments and their convergence. Mathematica Numerica Sinica 1(3):244–256 (in Chinese)
Copyright information
© 2009 Zhejiang University Press, Hangzhou and Springer-Verlag Berlin Heidelberg
(2009). Learning Algorithms for Process Neural Networks. In: Process Neural Networks. Advanced Topics in Science and Technology in China. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-73762-9_5
Print ISBN: 978-3-540-73761-2
Online ISBN: 978-3-540-73762-9