Speeding-Up Pittsburgh Learning Classifier Systems: Modeling Time and Accuracy

Conference paper
Parallel Problem Solving from Nature - PPSN VIII (PPSN 2004)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3242)

Abstract

Windowing methods are useful techniques for reducing the computational cost of Pittsburgh-style genetic-based machine learning systems. Used properly, they can also improve the classification accuracy of the system. In this paper we develop a theoretical framework for ILAS, a windowing scheme previously developed by the authors. The framework lets us estimate the degree of windowing that can be applied to a given dataset, as well as the resulting run-time gain. It also sets the first stage for a larger methodology covering several learning strategies in which ILAS can be applied, such as maximizing the learning performance of the system, or achieving the maximum run-time reduction without significant accuracy loss.
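To make the windowing idea concrete: in the authors' earlier work, ILAS (incremental learning with alternating strata) partitions the training set into several strata and evaluates each GA iteration's fitness on a single stratum, rotating through them, so each iteration costs roughly 1/s of a full evaluation. The sketch below is illustrative only; the function names and the uniform (non-class-aware) split are assumptions, and the published scheme may stratify while preserving class proportions.

```python
import random


def ilas_strata(dataset, num_strata, seed=0):
    """Partition a dataset into roughly equal strata.

    Illustrative sketch: a uniform random split; a class-aware
    stratification would preserve class proportions per stratum.
    """
    rng = random.Random(seed)
    shuffled = dataset[:]
    rng.shuffle(shuffled)
    # Deal shuffled examples round-robin into num_strata groups.
    return [shuffled[i::num_strata] for i in range(num_strata)]


def stratum_for_iteration(strata, iteration):
    """ILAS-style rotation: each GA iteration evaluates fitness
    on one stratum, cycling through the strata in turn."""
    return strata[iteration % len(strata)]


# Tiny demonstration: 12 examples, 3 strata of 4 examples each.
data = list(range(12))
strata = ilas_strata(data, 3)
```

The run-time gain comes from each fitness evaluation touching only one stratum; the trade-off modeled in the paper is how large `num_strata` can grow before the per-iteration sample becomes too small to learn from.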




Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bacardit, J., Goldberg, D.E., Butz, M.V., Llorà, X., Garrell, J.M. (2004). Speeding-Up Pittsburgh Learning Classifier Systems: Modeling Time and Accuracy. In: Yao, X., et al. Parallel Problem Solving from Nature - PPSN VIII. PPSN 2004. Lecture Notes in Computer Science, vol 3242. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30217-9_103

  • DOI: https://doi.org/10.1007/978-3-540-30217-9_103

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-23092-2

  • Online ISBN: 978-3-540-30217-9

  • eBook Packages: Springer Book Archive
