Abstract
The algorithm extracts propositional rules from a labeled data set. Each rule consists of the data set's features, each accompanied by an appropriate activation interval, together with a label denoting the class. Initially, the input space is partitioned into tiles; the algorithm then composes the largest possible orthogonal intervals from these tiles. After intervals have been created for each feature, the rule receives credit according to its classification ability, and this credit is used to improve the rule. We have obtained encouraging results on five different classification problems: the Iris data set, the concentric data, the four Gaussians, the Pima Indians data set, and the image segmentation data set.
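The paper itself is not reproduced here, but the tiling-and-credit idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the tile count, the equal-width tiling, the seed choice, and the `credit` measure (fraction of points inside the interval that carry the target class) are all assumptions made for the sketch, and the greedy interval growth stands in for the paper's reinforcement-driven credit assignment.

```python
import numpy as np

def tile_edges(values, n_tiles):
    """Partition a single feature's range into equal-width tiles (an assumption;
    the paper only says the input space is partitioned using tiles)."""
    return np.linspace(values.min(), values.max(), n_tiles + 1)

def credit(values, labels, lo, hi, target):
    """Hypothetical credit: fraction of points inside [lo, hi] that belong
    to the target class (zero when the interval captures no points)."""
    inside = (values >= lo) & (values <= hi)
    if not inside.any():
        return 0.0
    return (labels[inside] == target).sum() / inside.sum()

def grow_interval(values, labels, target, n_tiles=10):
    """Greedily extend a tile-aligned interval while the credit does not drop,
    mimicking 'compose the largest possible intervals out of tiles'."""
    edges = tile_edges(values, n_tiles)
    # Seed at the tile containing the most target-class points.
    counts = [((values >= edges[i]) & (values < edges[i + 1])
               & (labels == target)).sum() for i in range(n_tiles)]
    i = j = int(np.argmax(counts))
    best = credit(values, labels, edges[i], edges[j + 1], target)
    improved = True
    while improved:
        improved = False
        for di, dj in ((-1, 0), (0, 1)):      # try extending left, then right
            ni, nj = i + di, j + dj
            if 0 <= ni and nj < n_tiles:
                c = credit(values, labels, edges[ni], edges[nj + 1], target)
                if c >= best:                 # keep growing while credit holds
                    best, i, j, improved = c, ni, nj, True
    return edges[i], edges[j + 1], best
```

On a toy one-dimensional problem where the target class occupies the low end of the range, the interval grows over the target tiles and stops before admitting points of the other class.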
Copyright information
© 2001 Springer-Verlag Wien
Cite this paper
Vogiatzis, D., Stafylopatis, A. (2001). Reinforcement Learning for Rule Generation. In: Kůrková, V., Neruda, R., Kárný, M., Steele, N.C. (eds) Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-6230-9_23
DOI: https://doi.org/10.1007/978-3-7091-6230-9_23
Publisher Name: Springer, Vienna
Print ISBN: 978-3-211-83651-4
Online ISBN: 978-3-7091-6230-9
eBook Packages: Springer Book Archive