Reinforcement Learning for Rule Generation

  • Conference paper
Artificial Neural Nets and Genetic Algorithms

Abstract

The algorithm extracts propositional rules from a labeled data set. The constituent parts of a rule are the features of the data set, each accompanied by an interval of activation, together with a label denoting the class. Initially, the input space is partitioned into tiles, out of which the algorithm composes the largest possible orthogonal intervals. Once intervals have been created for each feature, the rule receives credit according to its classification ability, and this credit is used to improve the rule. We have obtained encouraging results on five classification problems: the Iris data set, the concentric data, the four Gaussians, the Pima Indians set and the image segmentation data set.
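To make the method outlined above concrete, the following Python sketch implements one plausible reading of it, under assumptions the abstract does not pin down: equal-width tiling with a hypothetical N_TILES granularity, a rule stored as one tile interval per feature plus a class label, credit measured as the rule's accuracy on the points it covers, and a greedy grow-or-shrink move standing in for the paper's actual credit-driven update. All names (make_tiles, rule_covers, credit, improve) are illustrative rather than the authors' own.

    import numpy as np

    N_TILES = 10  # tiles per feature; an assumed granularity, not the paper's

    def make_tiles(X):
        # Partition each feature's range into N_TILES equal-width tiles.
        lo, hi = X.min(axis=0), X.max(axis=0)
        return [np.linspace(lo[j], hi[j], N_TILES + 1) for j in range(X.shape[1])]

    def rule_covers(rule, x, edges):
        # A rule holds one (low_tile, high_tile) pair per feature and a class
        # label; a point is covered only if every feature falls inside its
        # interval of tiles, i.e. inside an axis-parallel (orthogonal) box.
        return all(edges[j][lo] <= x[j] <= edges[j][hi + 1]
                   for j, (lo, hi) in enumerate(rule["intervals"]))

    def credit(rule, X, y, edges):
        # Credit = classification accuracy of the rule on the points it covers.
        covered = np.array([rule_covers(rule, x, edges) for x in X])
        return float((y[covered] == rule["label"]).mean()) if covered.any() else 0.0

    def improve(rule, X, y, edges, rng):
        # One update step: perturb one feature's interval by a tile and keep
        # the change only if credit does not drop; a greedy stand-in for the
        # reinforcement signal described in the abstract.
        before = credit(rule, X, y, edges)
        j = int(rng.integers(len(rule["intervals"])))
        lo, hi = rule["intervals"][j]
        new_lo = int(np.clip(lo + rng.choice([-1, 0, 1]), 0, N_TILES - 1))
        new_hi = int(np.clip(hi + rng.choice([-1, 0, 1]), new_lo, N_TILES - 1))
        rule["intervals"][j] = (new_lo, new_hi)
        if credit(rule, X, y, edges) < before:
            rule["intervals"][j] = (lo, hi)  # reject the move
        return rule

On a data set such as Iris, one would seed a rule covering the full range of every feature for a chosen class and call improve repeatedly; the intervals then contract toward the region where that class dominates.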


References

  1. A. B. Tickle, R. Andrews, M. Golea, and J. Diederich, “The truth will come to light: Directions and challenges in extracting the knowledge embedded within trained artificial neural networks,” IEEE Transactions on Neural Networks, vol. 9, pp. 1057–1068, 1998.

  2. R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. The MIT Press, 1998.

  3. V. Cherkassky, Learning from Data, ch. 6. John Wiley & Sons, Inc., 1998.

  4. R. S. Sutton, “Implementation details of the TD(λ) procedure for the case of vector predictions and backpropagation,” Tech. Rep. 87-509.1, GTE Laboratories Incorporated, Aug. 1989.

  5. ftp.ics.uci.edu/pub/machine-learning-databases; ftp.dice.ucl.ac.be/pub/neural-nets/elena/databases/artificial/concentric/.

  6. R. Munos and A. Moore, “Barycentric interpolators for continuous space & time reinforcement learning,” in Advances in Neural Information Processing Systems, 1998.

  7. J. Santamaria, R. Sutton, and A. Ram, “Experiments with reinforcement learning in problems with continuous state and action spaces,” Adaptive Behavior, vol. 6, no. 2, pp. 163–217, 1997.

  8. M. Sato and S. Ishii, “Reinforcement learning based on on-line EM algorithm,” in Advances in Neural Information Processing Systems, vol. 11, 1999.


Copyright information

© 2001 Springer-Verlag Wien

About this paper

Cite this paper

Vogiatzis, D., Stafylopatis, A. (2001). Reinforcement Learning for Rule Generation. In: Kůrková, V., Neruda, R., Kárný, M., Steele, N.C. (eds) Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-6230-9_23

  • DOI: https://doi.org/10.1007/978-3-7091-6230-9_23

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-83651-4

  • Online ISBN: 978-3-7091-6230-9

  • eBook Packages: Springer Book Archive
