Abstract
In this paper, a stable backpropagation algorithm is used to train an online evolving radial basis function neural network. Structure and parameter learning are updated simultaneously; the algorithm does not separate structure learning from parameter learning. Groups are generated by online clustering. In each iteration, the nearest center is updated so that it moves toward the incoming data; therefore the algorithm does not need to generate a new neuron at every iteration, i.e., it does not generate many neurons and does not need to prune neurons. A time-varying learning rate is used for the backpropagation training of the parameters, and the stability of the proposed algorithm is proven.
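To make the abstract concrete, the following is a minimal sketch of such an algorithm: online clustering either adds a neuron or moves the nearest center toward the incoming data, and the output weights are trained by gradient descent with the time-varying, dead-zone learning rate \(\eta_{0}/(1+q(k-1))\) described in the appendix. All names, thresholds, and the Gaussian basis width here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class EvolvingRBF:
    """Sketch of an online evolving RBF network (assumed: Gaussian hidden
    units, linear output weights; parameters below are illustrative)."""

    def __init__(self, dist_threshold=0.5, sigma=1.0, eta0=0.5, zeta_bar=0.01):
        self.dist_threshold = dist_threshold  # assumed clustering radius
        self.sigma = sigma                    # assumed Gaussian width
        self.eta0 = eta0                      # base learning rate eta_0
        self.zeta_bar = zeta_bar              # modeling-error bound (assumed known)
        self.centers = []                     # one center per hidden neuron
        self.weights = []                     # output weight per hidden neuron
        self.k = 0                            # iteration counter, used as q(k-1)

    def _phi(self, x):
        # Gaussian activations of all hidden neurons
        return np.array([np.exp(-np.linalg.norm(x - c) ** 2
                                / (2.0 * self.sigma ** 2))
                         for c in self.centers])

    def predict(self, x):
        if not self.centers:
            return 0.0
        return float(np.dot(self.weights, self._phi(x)))

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        # --- structure learning via online clustering ---
        if not self.centers:
            self.centers.append(x.copy())
            self.weights.append(0.0)
        else:
            dists = [np.linalg.norm(x - c) for c in self.centers]
            j = int(np.argmin(dists))
            if dists[j] > self.dist_threshold:
                self.centers.append(x.copy())   # data far from all groups: new neuron
                self.weights.append(0.0)
            else:
                # move the nearest center toward the incoming data, so a
                # new neuron is not needed at every iteration
                self.centers[j] += 0.1 * (x - self.centers[j])
        # --- parameter learning: backprop with dead-zone learning rate ---
        e = self.predict(x) - y
        self.k += 1
        if e ** 2 >= self.zeta_bar ** 2 / (1.0 - self.eta0):
            eta = self.eta0 / (1.0 + self.k)    # time-varying rate eta_0/(1+q(k-1))
        else:
            eta = 0.0                           # inside the dead zone: no update
        phi = self._phi(x)
        for i in range(len(self.weights)):
            self.weights[i] -= eta * e * phi[i]  # gradient of (1/2) e^2
        return e
```

A short usage example: streaming fifty samples of a sine function through `update` grows at most a handful of neurons, since the nearest center tracks the slowly drifting input instead of spawning a neuron per sample.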
Acknowledgments
The authors are grateful to the editor and the reviewers for their valuable comments and insightful suggestions, which helped to improve this research significantly. The authors thank the Secretaria de Investigación y Posgrado, the Comisión de Operación y Fomento de Actividades Académicas del IPN, and the Consejo Nacional de Ciencia y Tecnologia for their support of this research.
Appendix
Proof of Theorem 1
We select the following Lyapunov function \(L_{1}(k-1)\):
Applying the updating law (13), we have:
Now we calculate \(\Delta L_{1}(k-1)\):
Substituting (12) into the last term of (21) and using (14) gives:
Using the case \(e^{2}\left( k-1\right) \geq \frac{\overline{\zeta }^{2}}{1-\eta _{0}}\) of the dead zone (14), we have \(\eta (k-1)={\frac{\eta _{0}}{1+q(k-1)}}>0:\)
With \(\zeta ^{2}\left( k-1\right) \leq \overline{\zeta }^{2}:\)
From the dead zone, if \(e^{2}\left( k-1\right) \geq \frac{\overline{\zeta }^{2}}{1-\eta _{0}}\) then η(k − 1) > 0 and ΔL 1(k − 1) ≤ 0, so L 1(k) is bounded. If \(e^{2}\left( k-1\right) <\frac{\overline{\zeta }^{2}}{1-\eta _{0}},\) from (14) we know η(k − 1) = 0; none of the weights change, so they remain bounded and L 1(k) is bounded.
When \(e^{2}\left( k-1\right) \geq \frac{\overline{\zeta }^{2}}{1-\eta _{0}},\) summing (22) from k = 2 to T:
Since L 1(T) is bounded and using \(\eta (k-1)=\frac{\eta _{0}}{1+q(k-1)}>0:\)
Because \(e^{2}\left( k-1\right) \geq \frac{\overline{\zeta }^{2}}{1-\eta _{0}},\) we have \(\left( \frac{\eta _{0}}{1+q(k-1)}\right) \left[ \left( 1-\eta _{0}\right) e^{2}\left( k-1\right) -\overline{\zeta }^{2}\right] \geq 0,\) so:
Because L 1(k − 1) is bounded, \(q(k-1)<\infty\), and since \(\frac{\eta _{0}}{1+q(k-1)}>0:\)
That is (15). When \(e^{2}\left( k-1\right) <\frac{\overline{\zeta }^{2}}{1-\eta _{0}},\) the error is already inside the dead zone.
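The display equations referenced above were not reproduced here, but the chain of inequalities follows a standard dead-zone Lyapunov argument. Under the assumption that \(L_{1}(k-1)\) is the squared norm of the weight-estimation error, the key steps can be sketched in LaTeX as follows (a reconstruction consistent with the inline conditions, not the paper's exact equations):

```latex
% Decrement of the Lyapunov function (cf. (22)):
\Delta L_{1}(k-1) \;\le\; -\,\eta(k-1)\!\left[(1-\eta_{0})\,e^{2}(k-1)-\overline{\zeta}^{2}\right],
\qquad \eta(k-1)=\frac{\eta_{0}}{1+q(k-1)}.

% Summing from k = 2 to T and using that L_{1}(T) is bounded:
\sum_{k=2}^{T}\frac{\eta_{0}}{1+q(k-1)}
\left[(1-\eta_{0})\,e^{2}(k-1)-\overline{\zeta}^{2}\right]
\;\le\; L_{1}(1)-L_{1}(T) \;<\; \infty.

% Since each summand is nonnegative and \eta_{0}/(1+q(k-1))>0,
% the error eventually enters the dead zone (cf. (15)):
\limsup_{k\rightarrow\infty} e^{2}(k-1)\;\le\;\frac{\overline{\zeta}^{2}}{1-\eta_{0}}.
```

Each step matches the surrounding text: the nonnegativity of the bracketed term is exactly the condition \(e^{2}(k-1)\ge \overline{\zeta}^{2}/(1-\eta_{0})\), and boundedness of the sum forces the summands to vanish, which is the convergence claim (15).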
Cite this article
de Jesús Rubio, J., Vázquez, D.M. & Pacheco, J. Backpropagation to train an evolving radial basis function neural network. Evolving Systems 1, 173–180 (2010). https://doi.org/10.1007/s12530-010-9015-9