Relational Learning and Boosting

  • Chapter in Relational Data Mining

Abstract

Boosting, a methodology for constructing and combining multiple classifiers, has been found to lead to substantial improvements in predictive accuracy. Although boosting was formulated in a propositional learning context, the same ideas can be applied to first-order learning (also known as inductive logic programming). Boosting is used here with a system that learns relational definitions of functions. Results show that the magnitude of the improvement, the additional computational cost, and the occasional negative impact of boosting all resemble the corresponding observations for propositional learning.
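
The boosting scheme the abstract refers to follows the AdaBoost pattern of Freund and Schapire: train a base learner repeatedly on reweighted copies of the training set, then combine the resulting hypotheses by weighted vote. The Python sketch below shows that loop in its AdaBoost.M1 form, with the base learner treated as a black box; the names adaboost, base_learner and stump_learner are illustrative stand-ins and are not the relational system boosted in the chapter.

    import math

    def adaboost(examples, labels, base_learner, rounds=10):
        """AdaBoost.M1 sketch: boost a weak learner by reweighting examples.

        base_learner(examples, labels, weights) must return a function that
        maps an example to a predicted label.  Nothing here is specific to
        relational learning; the base learner is an arbitrary black box.
        """
        n = len(examples)
        weights = [1.0 / n] * n            # start from uniform example weights
        committee = []                     # (hypothesis, voting weight) pairs

        for _ in range(rounds):
            h = base_learner(examples, labels, weights)
            # Weighted error of this round's hypothesis on the training set.
            error = sum(w for x, y, w in zip(examples, labels, weights) if h(x) != y)
            if error >= 0.5:               # no better than chance: stop boosting
                if not committee:
                    committee.append((h, 1.0))
                break
            if error == 0.0:               # perfect hypothesis: no reweighting possible
                committee.append((h, 1.0))
                break
            beta = error / (1.0 - error)
            # Shrink the weight of correctly classified examples and renormalise,
            # so the next round concentrates on the examples this round got wrong.
            weights = [w * (beta if h(x) == y else 1.0)
                       for x, y, w in zip(examples, labels, weights)]
            total = sum(weights)
            weights = [w / total for w in weights]
            committee.append((h, math.log(1.0 / beta)))

        def classify(x):
            # Final classifier: weighted vote of the committee members.
            votes = {}
            for h, alpha in committee:
                votes[h(x)] = votes.get(h(x), 0.0) + alpha
            return max(votes, key=votes.get)

        return classify

A toy run with one-dimensional threshold stumps as a hypothetical weak learner shows the mechanics; it says nothing about the relational experiments reported in the chapter:

    def stump_learner(xs, ys, ws):
        # Pick the threshold and orientation with the smallest weighted error.
        best = None
        for t in xs:
            for sign in (1, -1):
                err = sum(w for x, y, w in zip(xs, ys, ws)
                          if (sign if x >= t else -sign) != y)
                if best is None or err < best[0]:
                    best = (err, t, sign)
        _, t, sign = best
        return lambda x, t=t, sign=sign: sign if x >= t else -sign

    xs = [1, 2, 3, 4, 5, 6, 7, 8]
    ys = [-1, -1, 1, -1, 1, 1, -1, 1]      # labels no single stump can fit
    classifier = adaboost(xs, ys, stump_learner, rounds=20)
    print([classifier(x) for x in xs])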




Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Quinlan, R. (2001). Relational Learning and Boosting. In: Džeroski, S., Lavrač, N. (eds) Relational Data Mining. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-04599-2_12

  • DOI: https://doi.org/10.1007/978-3-662-04599-2_12

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-07604-6

  • Online ISBN: 978-3-662-04599-2
