Mind Change Complexity of Learning Logic Programs

  • Conference paper
Computational Learning Theory (EuroCOLT 1999)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1572)

Abstract

The present paper motivates the study of mind change complexity for learning minimal models of length-bounded logic programs. It establishes ordinal mind change complexity bounds for learnability of these classes both from positive facts and from positive and negative facts.

Building on Angluin’s notion of finite thickness and Wright’s work on finite elasticity, Shinohara defined the property of bounded finite thickness to give a sufficient condition for learnability of indexed families of computable languages from positive data. This paper shows that an effective version of Shinohara’s notion of bounded finite thickness gives sufficient conditions for learnability with an ordinal mind change bound, both in the context of learnability from positive data and for learnability from complete (both positive and negative) data.
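
For orientation, the following is an informal paraphrase of the two structural notions just mentioned; it is a sketch rather than the paper's exact wording, and the symbols X, m, and Lang(Γ) are notation introduced here. The precise definitions appear in Angluin [5], Shinohara [41], and the body of the paper.

    % Informal paraphrase; precise definitions are in Angluin [5] and Shinohara [41].
    % Finite thickness: every element belongs to only finitely many languages of the class,
    \[
      \forall w :\;\; \bigl|\{\, L \in \mathcal{L} \;:\; w \in L \,\}\bigr| \;<\; \infty .
    \]
    % Bounded finite thickness (roughly): for every finite set $X$ of facts and every $m \ge 1$,
    % the collection of $\subseteq$-minimal languages in
    \[
      \{\, \mathrm{Lang}(\Gamma) \;:\; \Gamma \text{ a formal system of length } \le m,\; X \subseteq \mathrm{Lang}(\Gamma) \,\}
    \]
    % is finite. The effective variant used in the paper additionally requires that a finite
    % set of formal systems generating these minimal languages be computable from $X$ and $m$.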

More precisely, it is shown that if a language defining framework yields a uniformly decidable family of languages and has effective bounded finite thickness, then for each natural number m > 0, the class of languages defined by formal systems of length ≤ m:

  • is identifiable in the limit from positive data with a mind change bound of ω^m;

  • is identifiable in the limit from both positive and negative data with an ordinal mind change bound of ω × m.
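
To give an informal reading of these two bounds (an illustration in the spirit of Freivalds and Smith [20], not the paper's own construction): an ordinal mind change bound equips the learner with a counter holding a notation for an ordinal, and the counter must strictly decrease whenever the learner changes its conjecture. The concrete counter runs below are illustrative assumptions; k and the k_i stand for finite values the learner may choose after seeing data.

    % Illustrative counter runs under different ordinal mind change bounds.
    % A finite bound n means at most n mind changes, fixed in advance.
    % A bound of \omega lets the learner pick a finite budget k once data has been seen:
    \[
      \omega \;\to\; k \;\to\; k-1 \;\to\; \cdots \;\to\; 0 .
    \]
    % A bound of \omega \times m lets this budget be re-chosen m times:
    \[
      \omega\cdot m \;\to\; \omega\cdot(m-1)+k_1 \;\to\; \cdots \;\to\; \omega\cdot(m-1)
      \;\to\; \omega\cdot(m-2)+k_2 \;\to\; \cdots \;\to\; 0 .
    \]
    % A bound of \omega^m lets the revision go m levels deep, e.g.
    \[
      \omega^m \;\to\; \omega^{m-1}\cdot k_1 \;\to\;
      \omega^{m-1}\cdot(k_1-1)+\omega^{m-2}\cdot k_2 \;\to\; \cdots \;\to\; 0 .
    \]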

The above sufficient conditions are employed to give an ordinal mind change bound for learnability of minimal models of various classes of length-bounded Prolog programs, including Shapiro’s linear programs, Arimura and Shinohara’s depth-bounded linearly-covering programs, and Krishna Rao’s depth-bounded linearly-moded programs. It is also noted that the bound for learning from positive data is tight for the example classes considered.
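
As a concrete illustration of the kind of program covered (a standard textbook example, not taken from the paper): under our informal reading of Shapiro's definition [39], the usual append program is a linear program, since in each clause the arguments of every body atom are no larger than those of the head; whether a program also meets the stricter linearly-covering or linearly-moded conditions is determined by the definitions in [7] and [27].

    % The standard append/3 program; a classic example of a length-bounded,
    % linear logic program in the sense of Shapiro [39] (our informal reading;
    % see the cited papers for the precise syntactic conditions).
    append([], Ys, Ys).
    append([X|Xs], Ys, [X|Zs]) :-
        % The body atom's arguments (Xs, Ys, Zs) are no larger than the
        % head's arguments ([X|Xs], Ys, [X|Zs]) under any substitution.
        append(Xs, Ys, Zs).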

The research of Arun Sharma is supported by Australian Research Council Grant A49803051.


References

  1. A. Ambainis, R. Freivalds, and C. Smith. General inductive inference types based on linearly-ordered sets. In Proceedings of Symposium on Theoretical Aspects of Computer Science, volume 1046 of Lecture Notes in Computer Science, pages 243–253. Springer-Verlag, 1996.

  2. A. Ambainis, S. Jain, and A. Sharma. Ordinal mind change complexity of language identification. In S. Ben-David, editor, Third European Conference on Computational Learning Theory, volume 1208 of Lecture Notes in Artificial Intelligence, pages 301–315. Springer-Verlag, 1997.

  3. A. Ambainis. Power of procrastination in inductive inference: How it depends on used ordinal notations. In Paul Vitányi, editor, Second European Conference on Computational Learning Theory, volume 904 of Lecture Notes in Artificial Intelligence, pages 99–111. Springer-Verlag, 1995.

  4. D. Angluin. Finding patterns common to a set of strings. Journal of Computer and System Sciences, 21:46–62, 1980.

  5. D. Angluin. Inductive inference of formal languages from positive data. Information and Control, 45:117–135, 1980.

  6. S. Arikawa. Elementary formal systems and formal languages - simple formal systems. Memoirs of the Faculty of Science, Kyushu University, Series A, 24:47–75, 1970.

  7. H. Arimura and T. Shinohara. Inductive inference of Prolog programs with linear data dependency from positive data. In H. Jaakkola, H. Kangassalo, T. Kitahashi, and A. Markus, editors, Proc. Information Modelling and Knowledge Bases V, pages 365–375. IOS Press, 1994.

  8. S. Arikawa, T. Shinohara, and A. Yamamoto. Learning elementary formal systems. Theoretical Computer Science, 95:97–113, 1992.

  9. F. Bergadano and D. Gunetti. Inductive Logic Programming: From Machine Learning to Software Engineering. MIT Press, 1996.

  10. J. Bārzdiņš and K. Podnieks. The theory of inductive inference. In Second Symposium on Mathematical Foundations of Computer Science, pages 9–15. Math. Inst. of the Slovak Academy of Sciences, 1973.

  11. J. Case, S. Jain, and M. Suraj. Not-so-nearly-minimal-size program inference. In K. Jantke and S. Lange, editors, Algorithmic Learning for Knowledge-Based Systems, volume 961 of Lecture Notes in Artificial Intelligence, pages 77–96. Springer-Verlag, 1995.

  12. W.W. Cohen. PAC-Learning non-recursive Prolog clauses. Artificial Intelligence, 79:1–38, 1995.

  13. W.W. Cohen. PAC-Learning recursive logic programs: Efficient algorithms. Journal of Artificial Intelligence Research, 2:501–539, 1995.

  14. W.W. Cohen. PAC-Learning recursive logic programs: Negative results. Journal of Artificial Intelligence Research, 2:541–573, 1995.

  15. J. Case and C. Smith. Comparison of identification criteria for machine inductive inference. Theoretical Computer Science, 25:193–220, 1983.

  16. S. Dzeroski, S. Muggleton, and S. Russell. PAC-Learnability of constrained nonrecursive logic programs. In Proceedings of the Third International Workshop on Computational Learning Theory and Natural Learning Systems, Madison, Wisconsin, 1992.

  17. S. Dzeroski, S. Muggleton, and S. Russell. PAC-Learnability of determinate logic programs. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pages 128–135. ACM Press, July 1992.

  18. L. De Raedt and S. Dzeroski. First-order jk-clausal theories are PAC-learnable. Artificial Intelligence, 70:375–392, 1994.

  19. A. Frisch and C.D. Page. Learning constrained atoms. In Proceedings of the Eighth International Workshop on Machine Learning. Morgan Kaufmann, 1991.

  20. R. Freivalds and C. Smith. On the role of procrastination in machine learning. Information and Computation, pages 237–271, 1993.

  21. E.M. Gold. Language identification in the limit. Information and Control, 10:447–474, 1967.

  22. D. Haussler. Learning conjunctive concepts in structural domains. Machine Learning, 4(1), 1989.

  23. S. Jain and A. Sharma. Elementary formal systems, intrinsic complexity, and procrastination. Information and Computation, 132:65–84, 1997.

  24. R. Khardon. Learning first order universal Horn expressions. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 154–165. ACM Press, 1998.

  25. J-U. Kietz. Some lower bounds for the computational complexity of inductive logic programming. In Proceedings of the 1993 European Conference on Machine Learning, Vienna, 1993.

  26. S. Kleene. Notations for ordinal numbers. Journal of Symbolic Logic, 3:150–155, 1938.

  27. M. Krishna Rao. A class of Prolog programs inferable from positive data. In S. Arikawa and A. Sharma, editors, Algorithmic Learning Theory: Seventh International Workshop (ALT '96), volume 1160 of Lecture Notes in Artificial Intelligence, pages 272–284. Springer-Verlag, 1996.

  28. M. Krishna Rao and A. Sattar. Learning from entailment of logic programs with local variables. In M. Richter, C. Smith, R. Wiehagen, and T. Zeugmann, editors, Algorithmic Learning Theory: Ninth International Conference (ALT '98), Lecture Notes in Artificial Intelligence. Springer-Verlag, 1998. To appear.

  29. N. Lavrač and S. Džeroski. Inductive Logic Programming. Ellis Horwood, New York, 1994.

  30. J.W. Lloyd. Foundations of Logic Programming (Second Edition). Springer, New York, 1987.

  31. S. Muggleton and L. De Raedt. Inductive Logic Programming: Theory and Methods. Journal of Logic Programming, 19/20:629–679, 1994.

  32. S. Miyano, A. Shinohara, and T. Shinohara. Which classes of elementary formal systems are polynomial-time learnable? In Proceedings of the Second Workshop on Algorithmic Learning Theory, pages 139–150, 1991.

  33. S. Miyano, A. Shinohara, and T. Shinohara. Learning elementary formal systems and an application to discovering motifs in proteins. Technical Report RIFIS-TRCS-37, RIFIS, Kyushu University, 1993.

  34. W. Maass and Gy. Turán. On learnability and predicate logic. NeuroCOLT Technical Report NC-TR-96-023, 1996.

  35. S.H. Nienhuys-Cheng and R. de Wolf. Foundations of Inductive Logic Programming. LNAI Tutorial 1228. Springer-Verlag, 1997.

  36. G. Plotkin. Automatic Methods of Inductive Inference. PhD thesis, University of Edinburgh, 1971.

  37. H. Rogers. Theory of Recursive Functions and Effective Computability. McGraw-Hill, 1967. Reprinted, MIT Press, 1987.

  38. G. Sacks. Higher Recursion Theory. Springer-Verlag, 1990.

  39. E. Shapiro. Inductive inference of theories from facts. Technical Report 192, Computer Science Department, Yale University, 1981.

  40. T. Shinohara. Inductive inference of monotonic formal systems from positive data. New Generation Computing, 8:371–384, 1991.

  41. T. Shinohara. Rich classes inferable from positive data: Length-bounded elementary formal systems. Information and Computation, 108:175–186, 1994.

  42. R. Smullyan. Theory of Formal Systems. Annals of Mathematics Studies, No. 47. Princeton University Press, Princeton, NJ, 1961.

  43. A. Sharma, F. Stephan, and Y. Ventsov. Generalized notions of mind change complexity. In Proceedings of the Tenth Annual Conference on Computational Learning Theory, pages 96–108. ACM Press, 1997.

  44. K. Wright. Identification of unions of languages drawn from an identifiable class. In R. Rivest, D. Haussler, and M. Warmuth, editors, Proceedings of the Second Annual Workshop on Computational Learning Theory, pages 328–333. Morgan Kaufmann, 1989.

  45. A. Yamamoto. Generalized unification as background knowledge in learning logic programs. In K. Jantke, S. Kobayashi, E. Tomita, and T. Yokomori, editors, Algorithmic Learning Theory: Fourth International Workshop (ALT '93), volume 744 of Lecture Notes in Artificial Intelligence, pages 111–122. Springer-Verlag, 1993.


Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Jain, S., Sharma, A. (1999). Mind Change Complexity of Learning Logic Programs. In: Fischer, P., Simon, H.U. (eds) Computational Learning Theory. EuroCOLT 1999. Lecture Notes in Computer Science, vol 1572. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-49097-3_16

  • DOI: https://doi.org/10.1007/3-540-49097-3_16

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65701-9

  • Online ISBN: 978-3-540-49097-5

  • eBook Packages: Springer Book Archive
