Abstract
We investigate the computational power of max-min propagation (MMP) neural networks, composed of neurons with maximum (Max) or minimum (Min) activation functions applied over the weighted sums of their inputs. The main results are that a single-layer MMP network can represent exactly any pseudo-Boolean function F: {0,1}^n → [0,1], and that two-layer MMP neural networks are universal approximators. In addition, it is shown that several well-known fuzzy min-max (FMM) neural networks, such as Simpson's FMM, can be represented by MMP neural networks.
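The neuron model described in the abstract can be sketched as follows. This is an illustrative reading of the definition only (function names and the example weights are hypothetical, not taken from the paper): each Max unit outputs the maximum of the weighted inputs it receives, each Min unit the minimum, and layers of such units are composed.

```python
# Illustrative sketch of max-min propagation (MMP) units: each unit applies
# a Max or Min activation over the weighted inputs, per the abstract.
# Names and example values are hypothetical, not from the paper.

def max_unit(inputs, weights):
    """Max unit: maximum of the weighted inputs."""
    return max(w * x for w, x in zip(weights, inputs))

def min_unit(inputs, weights):
    """Min unit: minimum of the weighted inputs."""
    return min(w * x for w, x in zip(weights, inputs))

def two_layer_mmp(x, w_hidden, w_out):
    """A tiny two-layer MMP network: a Min output unit fed by Max units."""
    hidden = [max_unit(x, w) for w in w_hidden]
    return min_unit(hidden, w_out)

if __name__ == "__main__":
    x = [0.2, 0.9, 0.5]
    w_hidden = [[1.0, 0.5, 0.3], [0.4, 0.8, 1.0]]
    w_out = [1.0, 1.0]
    print(two_layer_mmp(x, w_hidden, w_out))  # min(max-layer outputs)
```

A network alternating Max and Min layers in this way is the object whose representation and approximation power the paper analyzes.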
References
Baturone, I., Huertas, J. L., Barriga, A. and Sánchez-Lozano, S.: Current-mode multiple-input max circuit, Electron. Lett. 30 (1994), 678–680.
Cheney, E. W.: Introduction to Approximation Theory, Chap. 6, Sec. 4, McGraw-Hill, New York, 1966.
Estévez, P. A. and Okabe, Y.: Max-min propagation nets: Learning by delta rule for the Chebyshev norm, In: Proceedings of the IEEE-INNS International Joint Conference on Neural Networks, Vol. 1, pp. 524–527, Nagoya, Japan, 1993.
Estévez, P. A.: Max-Min Propagation Neural Networks: Representation Capabilities, Learning Algorithms and Evolutionary Structuring, Ph.D. Thesis, The University of Tokyo, Tokyo, Japan, 1995.
Estévez, P. A. and Nakano, R.: Hierarchical mixture of experts and max-min propagation neural networks, In: Proceedings of the IEEE International Conference on Neural Networks, Vol. 1, pp. 651–656, Perth, Australia, 1995.
Gabrys, B. and Bargiela, A.: General fuzzy min-max neural network for clustering and classification, IEEE Trans. Neural Networks 11 (3) (2000), 769–783.
Gallant, S. I.: Neural Network Learning and Expert Systems, MIT Press, Cambridge, MA, 1993.
Hajnal, A., Maass, W., Pudlák, P., Szegedy, M. and Turán, G.: Threshold circuits of bounded depth, J. Comput. Syst. Sci. 46 (1993), 129–154.
Hassoun, M. H. and Nabha, A. M.: Implementation of O(n) complexity max/min circuits for fuzzy and connectionist computing, In: Proceedings of the IEEE International Conference on Neural Networks, pp. 998–1003, March 1993.
Leshno, M., Lin, V. Y., Pinkus, A. and Schocken, S.: Multilayer feedforward networks with a non-polynomial activation function can approximate any function, Neural Networks 6 (1993), 861–867.
Likas, A.: Reinforcement learning using the stochastic fuzzy min-max neural network, Neural Process. Lett. 13 (3) (2001), 213–220.
Lippmann, R. P.: An introduction to computing with neural nets, IEEE ASSP Magazine 4 (1987), 4–22.
Maass, W.: Bounds for the computational power and learning complexity of analog neural nets, SIAM J. Comput. 26 (1997), 708–732.
Maass, W.: On the computational power of winner-take-all, Neural Comput. 12 (2000), 2519–2535.
Maass, W.: Neural computation with winner-take-all as the only nonlinear operation, In: S. A. Solla, T. K. Leen and K.-R. Müller (eds), Adv. Neural Inf. Process. Syst. 12, MIT Press, Cambridge, MA, pp. 293–299, 2000.
Machado, R. J., Barbosa, V. C. and Neves, P. A.: Learning in the combinatorial neural model, IEEE Trans. Neural Networks 9 (5) (1998), 831–847.
Rizzi, A., Panella, M. and Mascioli, F. M. F.: Adaptive resolution min-max classifiers, IEEE Trans. Neural Networks 13 (2) (2002), 402–414.
Scarselli, F. and Tsoi, A. C.: Universal approximation using feedforward neural networks: A survey of some existing methods and some new results, Neural Networks 11 (1) (1998), 15–37.
Šíma, J.: The Computational Theory of Neural Networks, Technical Report No. 823, Institute of Computer Science, Academy of Sciences of the Czech Republic, 2000.
Simpson, P. K.: Fuzzy min-max neural networks – Part 1: Classification, IEEE Trans. Neural Networks 3 (5) (1992), 776–786.
Simpson, P. K.: Fuzzy min-max neural networks – Part 2: Clustering, IEEE Trans. Fuzzy Syst. 1 (1) (1993), 32–45.
Siu, K. Y., Roychowdhury, V. P. and Kailath, T.: Depth-size tradeoffs for neural computation, IEEE Trans. Computers 40 (1991), 1402–1412.
Siu, K. Y., Roychowdhury, V. P. and Kailath, T.: Discrete Neural Computation: A Theoretical Foundation, Prentice Hall, Englewood Cliffs, NJ, 1995.
Teow, L. N. and Loe, K. F.: Effective learning in recurrent max-min neural networks, Neural Networks 11 (3) (1998), 535–547.
Urahama, K. and Nagao, T.: K-winners-take-all circuit with O(n) complexity, IEEE Trans. Neural Networks 6 (1995), 776–778.
Yu, A. J., Giese, M. A. and Poggio, T. A.: Biophysiologically plausible implementations of the maximum operation, Neural Comput. 14 (2002), 2857–2881.
Zhang, X. and Hang, C. C.: The min-max function differentiation and training of fuzzy neural networks, IEEE Trans. Neural Networks 7 (5) (1996), 1139–1149.
Cite this article
Estévez, P.A., Okabe, Y. On the Computational Power of Max-Min Propagation Neural Networks. Neural Processing Letters 19, 11–23 (2004). https://doi.org/10.1023/B:NEPL.0000016837.13436.d3