
Finding Cutpoints in Noisy Binary Sequences — A Revised Empirical Evaluation

  • Conference paper
Advanced Topics in Artificial Intelligence (AI 1999)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1747)

Abstract

In an earlier paper, Kearns et al. (1997) presented an empirical evaluation of model selection methods on a specialized version of the segmentation problem: the inference task was to estimate a predefined Boolean function on the real interval [0,1] from a noisy random sample. Three model selection methods, based respectively on Guaranteed Risk Minimization, the Minimum Description Length (MDL) principle and Cross-Validation, were evaluated on samples with varying noise levels. The authors concluded that, in general, none of the methods was superior to the others in terms of predictive accuracy. In this paper we identify an inefficiency in the MDL approach as implemented by Kearns et al. and present an extended empirical evaluation that adds a revised version of the MDL method and a further approach based on the Minimum Message Length (MML) principle.
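The inference task described above can be sketched in code. The following is a minimal illustration only, not the paper's actual experimental setup: the target function, noise model, the penalty of log2(n) bits per cutpoint, and all function names are assumptions made for this sketch. A piecewise-constant Boolean function on [0,1] is sampled with label noise, and a simplified two-part (MDL-style) cost, model bits for the cutpoints plus entropy bits for the labels within each segment, is minimized by dynamic programming to choose the number of cutpoints.

```python
import math
import random

def target(x, cuts=(0.3, 0.6)):
    # True Boolean step function on [0,1]: the label flips at each cutpoint.
    return sum(x >= c for c in cuts) % 2

def make_sample(n, noise, seed=1):
    # Draw n points uniformly on [0,1]; flip each label with probability `noise`.
    rng = random.Random(seed)
    xs = sorted(rng.random() for _ in range(n))
    ys = [target(x) if rng.random() >= noise else 1 - target(x) for x in xs]
    return xs, ys

def seg_data_cost(ones, total):
    # Code length (bits) of a segment's labels under its empirical Bernoulli rate.
    if total == 0 or ones in (0, total):
        return 0.0
    p = ones / total
    return total * -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def best_cost_with_d_cuts(ys, d):
    # DP over prefixes: cost[j][s] = min bits to encode the first j labels
    # using exactly s segments (d cuts => d + 1 segments).
    n = len(ys)
    pref = [0]
    for y in ys:
        pref.append(pref[-1] + y)
    INF = float("inf")
    cost = [[INF] * (d + 2) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for j in range(1, n + 1):
        for s in range(1, d + 2):
            for i in range(j):
                if cost[i][s - 1] < INF:
                    c = cost[i][s - 1] + seg_data_cost(pref[j] - pref[i], j - i)
                    if c < cost[j][s]:
                        cost[j][s] = c
    return cost[n][d + 1]

def mdl_select(ys, max_cuts=5):
    # Two-part cost: an (illustrative) log2(n) bits per cutpoint
    # plus the best achievable data cost with that many cuts.
    n = len(ys)
    scored = [(d * math.log2(n) + best_cost_with_d_cuts(ys, d), d)
              for d in range(max_cuts + 1)]
    return min(scored)[1]
```

With a sample of 150 points and 10% label noise, `mdl_select` trades extra cutpoints against the reduction in label-coding cost, so it tends to recover roughly the true number of cuts rather than fitting every noise flip. The coding schemes actually compared in the paper (GRM, the original and revised MDL codes, and MML) are more refined than this single-penalty sketch.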

References

  1. Kearns, M., Mansour, Y., Ng, A. Y. and Ron, D., "An Experimental and Theoretical Comparison of Model Selection Methods", Machine Learning, 27, 7–50, 1997.

  2. Wallace, C. S. and Boulton, D. M., "An Information Measure for Classification", Computer Journal, 11(2), 185–194, 1968.

  3. Wallace, C. S. and Freeman, P. R., "Estimation and Inference by Compact Coding", Journal of the Royal Statistical Society, Series B, 49, 240–252, 1987.

  4. Rissanen, J., "Stochastic Complexity and Modeling", Annals of Statistics, 14, 1080–1100, 1986.

  5. Vapnik, V., Statistical Learning Theory, Springer, New York, 1995.

  6. Viswanathan, M. and Wallace, C. S., "A Note on the Comparison of Polynomial Selection Methods", in Artificial Intelligence and Statistics 99, Morgan Kaufmann, 169–177, 1999.

  7. Wallace, C. S. and Dowe, D. L., "Minimum Message Length and Kolmogorov Complexity", to appear, Computer Journal.

  8. Stone, M., "Cross-validatory Choice and Assessment of Statistical Predictions", Journal of the Royal Statistical Society, Series B, 36, 111–147, 1974.

Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Viswanathan, M., Wallace, C.S., Dowe, D.L., Korb, K.B. (1999). Finding Cutpoints in Noisy Binary Sequences — A Revised Empirical Evaluation. In: Foo, N. (ed.) Advanced Topics in Artificial Intelligence. AI 1999. Lecture Notes in Computer Science, vol 1747. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46695-9_34

  • DOI: https://doi.org/10.1007/3-540-46695-9_34

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-66822-0

  • Online ISBN: 978-3-540-46695-6
