Abstract
The present paper deals with the co-learnability of enumerable families L of uniformly recursive languages from positive data. This refers to the following scenario. A family L of target languages as well as a hypothesis space for it are specified. The co-learner is eventually fed all positive examples of an unknown target language L chosen from L. The target language L is successfully co-learned iff the co-learner can definitely delete all but one possible hypothesis, and the remaining one has to correctly describe L.
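The scenario above can be illustrated by a minimal sketch, under simplifying assumptions not taken from the paper: the hypothesis space is a small explicit list of languages represented as Python sets, and the co-learner erases every hypothesis contradicted by the data seen so far (i.e. every hypothesis that fails to contain some observed positive example). The function name `co_learn` and the set-based encoding are illustrative only.

```python
def co_learn(hypothesis_space, text):
    """Toy co-learner ("learning by erasing") sketch.

    hypothesis_space: list of sets, each an explicit language.
    text: an iterable of positive examples of the unknown target.
    Erases hypotheses that do not cover all examples seen so far;
    succeeds when exactly one hypothesis index survives.
    """
    alive = set(range(len(hypothesis_space)))
    seen = set()
    for example in text:
        seen.add(example)
        # erase every hypothesis failing to cover the observed data
        alive = {i for i in alive if seen <= hypothesis_space[i]}
        if len(alive) == 1:
            return alive.pop()  # the single surviving hypothesis
    return None  # not yet converged on this finite prefix
```

Note that this simple erasing criterion never deletes a hypothesis that properly contains the target, so the sketch converges only for suitably structured families; this mirrors the paper's point that the power of co-learning depends crucially on the hypothesis space.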
The capabilities of co-learning are investigated as a function of the choice of the hypothesis space, and compared to language learning in the limit, conservative learning, and finite learning from positive data. Class preserving co-learning (L has to be co-learned with respect to some suitably chosen enumeration of all and only the languages from L), class comprising co-learning (L has to be co-learned with respect to some hypothesis space containing at least all the languages from L), and absolute co-learning (L has to be co-learned with respect to all class preserving hypothesis spaces for L) are distinguished.
Our results are manifold. First, it is shown that co-learning is exactly as powerful as learning in the limit, provided the hypothesis space is appropriately chosen. However, while learning in the limit is insensitive to the particular choice of the hypothesis space, the power of co-learning crucially depends on it. Therefore, the properties a hypothesis space should have in order to be suitable for co-learning are studied. Finally, a sufficient condition for absolute co-learnability is derived, and absolute co-learnability is separated from finite learning.
The first author was supported by the grant No. 93.599 from the Latvian Science Council.
Copyright information
© 1996 Springer-Verlag Berlin Heidelberg
Cite this paper
Freivalds, R., Zeugmann, T. (1996). Co-learning of recursive languages from positive data. In: Bjørner, D., Broy, M., Pottosin, I.V. (eds) Perspectives of System Informatics. PSI 1996. Lecture Notes in Computer Science, vol 1181. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-62064-8_12
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-62064-8
Online ISBN: 978-3-540-49637-3