
Extended Stochastic Complexity and Minimax Relative Loss Analysis

  • Conference paper
Algorithmic Learning Theory (ALT 1999)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1720)


Abstract

We are concerned with the problem of sequential prediction using a given hypothesis class of continuously many prediction strategies. An effective performance measure is the minimax relative cumulative loss (RCL), namely the minimum over prediction algorithms of the worst-case difference between the cumulative loss of the algorithm and that of the best assignment in the given hypothesis class. The purpose of this paper is to evaluate the minimax RCL for general continuous hypothesis classes under general losses. We first derive asymptotic upper and lower bounds on the minimax RCL and show that they match (k/2c) ln m to within o(ln m), where k is the dimension of the parameter space of the hypothesis class, m is the sample size, and c is a constant depending on the loss function. We thereby show that the cumulative loss attaining the minimax RCL asymptotically coincides with the extended stochastic complexity (ESC), an extension of Rissanen’s stochastic complexity (SC) to the decision-theoretic scenario. We further derive non-asymptotic upper bounds on the minimax RCL for both parametric and nonparametric hypothesis classes. We apply the analysis to the regression problem to derive the smallest worst-case cumulative loss bounds obtained to date.
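
For concreteness, the quantity evaluated in the abstract can be sketched in LaTeX as follows; the notation (algorithm A, hypothesis class H, loss function L, sequence x^m) is chosen here for illustration and is not necessarily the paper's own symbols.

% Hedged sketch of the minimax relative cumulative loss (RCL) described in the abstract.
% A ranges over sequential prediction algorithms, H is the hypothesis class,
% x^m = x_1, ..., x_m is the observed sequence, and L is the per-round loss.
\[
  \mathrm{RCL}^{*}(m) \;=\; \min_{A}\,\max_{x^{m}}
  \Bigl( \sum_{t=1}^{m} L\bigl(x_t, A(x^{t-1})\bigr)
         \;-\; \min_{h \in H} \sum_{t=1}^{m} L\bigl(x_t, h(x^{t-1})\bigr) \Bigr).
\]
% The asymptotic evaluation stated in the abstract then reads
\[
  \mathrm{RCL}^{*}(m) \;=\; \frac{k}{2c}\,\ln m \;+\; o(\ln m),
\]
% where k is the parameter dimension of H and c is a constant depending on the loss function.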


References

  1. Cesa-Bianchi, N., and Lugosi, G., Minimax regret under log loss for general classes of experts, in Proc. of COLT’99 (1999).

  2. Freund, Y., Predicting a binary sequence almost as well as the optimal biased coin, in Proc. of COLT’96, 89–98 (1996).

  3. Haussler, D., Kivinen, J., and Warmuth, M., Tight worst-case loss bounds for predicting with expert advice, in Computational Learning Theory: EuroCOLT’95, Springer, 69–83 (1995).

  4. Kivinen, J., and Warmuth, M., Exponentiated gradient versus gradient descent for linear predictors, UCSC-CRL-94-16 (1994).

  5. Opper, M., and Haussler, D., Worst case prediction over sequences under log loss, in Proc. of the IMA Workshop on Information, Coding, and Distribution, Springer (1997).

  6. Rissanen, J., Stochastic complexity, J. R. Statist. Soc. B, vol. 49, no. 3, 223–239 (1987).

  7. Rissanen, J., Stochastic Complexity in Statistical Inquiry, World Scientific, Singapore (1989).

  8. Rissanen, J., Fisher information and stochastic complexity, IEEE Trans. on Inf. Theory, vol. IT-42, no. 1, 40–47 (1996).

  9. Shtarkov, Y.M., Universal sequential coding of single messages, Probl. Inf. Transmission, 23(3), 3–17 (1987).

  10. Vovk, V.G., Aggregating strategies, in Proc. of COLT’90, Morgan Kaufmann, 371–386 (1990).

  11. Vovk, V.G., Competitive on-line linear regression, in Proc. of Advances in NIPS’98, MIT Press, 364–370 (1998).

  12. Yamanishi, K., A decision-theoretic extension of stochastic complexity and its applications to learning, IEEE Trans. on Inf. Theory, IT-44, 1424–1439 (1998).

  13. Yamanishi, K., Minimax relative loss analysis for sequential prediction algorithms using parametric hypotheses, in Proc. of COLT’98, ACM Press, 32–43 (1998).


Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yamanishi, K. (1999). Extended Stochastic Complexity and Minimax Relative Loss Analysis. In: Watanabe, O., Yokomori, T. (eds) Algorithmic Learning Theory. ALT 1999. Lecture Notes in Computer Science (LNAI), vol 1720. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46769-6_3

  • DOI: https://doi.org/10.1007/3-540-46769-6_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-66748-3

  • Online ISBN: 978-3-540-46769-4
