Abstract
Learning by erasing is the process of eliminating potential hypotheses from further consideration, thereby converging to the least hypothesis never eliminated; this hypothesis must be a solution to the actual learning problem.
The present paper deals with learnability by erasing of indexed families of languages, both from positive data and from positive and negative data. This refers to the following scenario. A family L of target languages and a hypothesis space for it are specified. The learner is eventually fed all positive examples (or all labeled examples, respectively) of an unknown target language L chosen from L. The target language L is learned by erasing if the learner erases some set of possible hypotheses and the least hypothesis never erased correctly describes L.
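The scenario above can be illustrated with a small sketch. The indexed family used here, L_i = {0, 1, ..., i}, and all function names are illustrative assumptions, not the paper's construction; the sketch only shows the protocol: erase every hypothesis that the positive data witnesses to be wrong, and output the least index never erased.

```python
# Toy sketch of learning by erasing (assumed toy family, not from the paper):
# the i-th hypothesis describes the language L_i = {0, 1, ..., i}.

def in_hypothesis(i, x):
    """Decidable membership in the i-th language of the toy family."""
    return x <= i

def erasing_learner(positive_data, num_hypotheses):
    """Process positive examples, erasing every hypothesis inconsistent
    with the data seen; return the least index never erased."""
    erased = set()
    for x in positive_data:
        for i in range(num_hypotheses):
            if i not in erased and not in_hypothesis(i, x):
                erased.add(i)  # x is a witness that L_i is not the target
    for i in range(num_hypotheses):
        if i not in erased:
            return i  # least hypothesis never erased
    return None

# Target language L = L_4 = {0,...,4}; any text (presentation) of L works.
text = [2, 0, 4, 1, 3, 4, 2]
print(erasing_learner(text, 10))
```

Here every index below 4 is eventually erased by some example exceeding it, while indices 4 and above stay consistent, so the learner converges to the least never-erased index, 4, which correctly describes the target.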
The capabilities of learning by erasing are investigated in dependence on which sets of hypotheses have to be or may be erased, and on the choice of the hypothesis space.
Class preserving learning by erasing (L has to be learned w.r.t. some suitably chosen enumeration of all and only the languages from L), class comprising learning by erasing (L has to be learned w.r.t. some hypothesis space containing at least all the languages from L), and absolute learning by erasing (L has to be learned w.r.t. all class preserving hypothesis spaces for L) are distinguished.
For all these models of learning by erasing, necessary and sufficient conditions for learnability are presented. A complete picture of all separations and coincidences of the learning by erasing models is derived. Learning by erasing is compared with standard models of language learning such as learning in the limit, finite learning, and conservative learning. The exact location of these types within the hierarchy of the learning by erasing models is established.
A full version of this paper appeared as Learning by Erasing, RIFIS Technical Report RIFIS-TR-CS-122, RIFIS, Kyushu University 33, February 13, 1996; http://www.rifis.kyushuu.ac.jp/thomas/treport.html
References
Angluin, D. (1980), Inductive inference of formal languages from positive data, Information and Control 45, 117–135.
Baliga, G., Case, J., and Jain, S. (1996), Synthesizing enumeration techniques for language learning, eCOLT, eC-TR-96-003.
Blum, M. (1967), A machine-independent theory of the complexity of recursive functions, Journal of the ACM 14, 322–336.
Freivalds, R., Karpinski, M., and Smith, C.H. (1994), Co-learning of total recursive functions, in “Proc. 7th Annual ACM Conference on Computational Learning Theory,” pp. 190–197, ACM Press, New York.
Freivalds, R., Gobleja, D., Karpinski, M., and Smith, C.H. (1994), Colearnability and FIN-identifiability of enumerable classes of total recursive functions, in “Proc. 4th Int. Workshop on Analogical and Inductive Inference, AII'94,” LNAI Vol. 872, pp. 100–105, Springer-Verlag, Berlin.
Freivalds, R., and Zeugmann, T. (1995), Co-learning of recursive languages from positive data, RIFIS-TR-CS-110, RIFIS, Kyushu University 33.
Gold, E.M. (1967), Language identification in the limit, Information and Control 10, 447–474.
Kapur, S., and Bilardi, G. (1995), Language learning without overgeneralization, Theoretical Computer Science 141, 151–162.
Kummer, M. (1995), A learning-theoretic characterization of classes of recursive functions, Information Processing Letters 54, 205–211.
Lange, S., Wiehagen, R., and Zeugmann, T. (1996), Learning by erasing, RIFIS-TR-CS-122, RIFIS, Kyushu University 33.
Lange, S., and Zeugmann, T. (1994), Characterization of language learning from informant under various monotonicity constraints, Journal of Experimental & Theoretical Artificial Intelligence 6, 73–94.
Osherson, D., Stob, M., and Weinstein, S. (1986), “Systems that Learn, An Introduction to Learning Theory for Cognitive and Computer Scientists,” MIT Press, Cambridge, Massachusetts.
Rogers, H. Jr. (1967), "Theory of Recursive Functions and Effective Computability," McGraw-Hill, New York.
Sato, M., and Umayahara, K. (1992), Inductive inferability for formal languages from positive data, IEICE Transactions on Information and Systems E-75D, 415–419.
Selivanov, V.L. (1976), Enumerations of families of general recursive functions, Algebra and Logic 15, 128–141.
Zeugmann, T., and Lange, S. (1995), A guided tour across the boundaries of learning recursive languages, in “Algorithmic Learning for Knowledge-Based Systems,” LNAI Vol. 961, pp. 193–262, Springer-Verlag, Berlin.
Zeugmann, T., Lange, S., and Kapur, S. (1995), Characterizations of monotonic and dual monotonic language learning, Information and Computation 120, 155–173.
Copyright information
© 1996 Springer-Verlag Berlin Heidelberg
Cite this paper
Lange, S., Wiehagen, R., Zeugmann, T. (1996). Learning by erasing. In: Arikawa, S., Sharma, A.K. (eds) Algorithmic Learning Theory. ALT 1996. Lecture Notes in Computer Science, vol 1160. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-61863-5_49
Print ISBN: 978-3-540-61863-8
Online ISBN: 978-3-540-70719-6