When Is There a Free Matrix Lunch?

  • Conference paper
Learning Theory (COLT 2007)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4539)

Abstract

The “no-free-lunch theorems” essentially say that for any two algorithms A and B, there are “as many” targets (or priors over targets) for which A has lower expected loss than B as vice versa. This can be made precise for certain loss functions [WM97]. This note concerns itself with cases where seemingly harder matrix versions of the algorithms have the same on-line loss bounds as the corresponding vector versions. So it seems that you get a free “matrix lunch.” (Our title is, however, not meant to imply that we have a technical refutation of the no-free-lunch theorems.)
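
For orientation, here is a minimal sketch (ours, not the paper's) of the vector/matrix parallel in question, using the exponentiated gradient (EG) update of [3] and its matrix generalization of [4]; the learning rate \eta, the on-line losses L_t, and the normalizers Z_t are standard notation assumed here, not taken from this note:

    % vector EG [3]: componentwise exponential update of a probability vector
    w_{t+1,i} = \frac{w_{t,i}\, \exp\big(-\eta\, (\nabla L_t(w_t))_i\big)}{Z_t},
    \qquad Z_t = \sum_j w_{t,j}\, \exp\big(-\eta\, (\nabla L_t(w_t))_j\big)

    % matrix EG [4]: matrix exp/log on a density matrix, trace-normalized
    W_{t+1} = \frac{\exp\big(\log W_t - \eta\, \nabla L_t(W_t)\big)}
                   {\operatorname{tr} \exp\big(\log W_t - \eta\, \nabla L_t(W_t)\big)}

The matrix update replaces the componentwise exponential on a probability vector by the matrix exponential and logarithm on a density matrix, with the trace as the normalizer; [4] derives loss bounds for this seemingly harder matrix update that parallel the vector bounds.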


References

  1. Arora, S., Kale, S.: A combinatorial primal-dual approach to semidefinite programs. In: Proc. 39th Annual ACM Symposium on Theory of Computing, ACM, New York (2007)

  2. Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55(1), 119–139 (1997)

  3. Kivinen, J., Warmuth, M.K.: Additive versus exponentiated gradient updates for linear prediction. Information and Computation 132(1), 1–64 (1997)

  4. Tsuda, K., Rätsch, G., Warmuth, M.K.: Matrix exponentiated gradient updates for on-line learning and Bregman projections. Journal of Machine Learning Research 6, 995–1018 (2005)

  5. Warmuth, M.K.: Winnowing subspaces. Unpublished manuscript (February 2007)

  6. Warmuth, M.K., Kuzmin, D.: A Bayesian probability calculus for density matrices. In: Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence (UAI 2006). Springer, Heidelberg (2006)

  7. Warmuth, M.K., Kuzmin, D.: Online variance minimization. In: Lugosi, G., Simon, H.U. (eds.) COLT 2006. LNCS (LNAI), vol. 4005, Springer, Heidelberg (2006)

  8. Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation 1(1), 67–82 (1997)

  9. Warmuth, M.K., Vishwanathan, S.V.N.: Leaving the span. In: Auer, P., Meir, R. (eds.) COLT 2005. LNCS (LNAI), vol. 3559, Springer, Heidelberg (2005) Journal version: http://www.cse.ucsc.edu/~manfred/pubs/span.pdf

Editor information

Nader H. Bshouty, Claudio Gentile

Copyright information

© 2007 Springer Berlin Heidelberg

About this paper

Cite this paper

Warmuth, M.K. (2007). When Is There a Free Matrix Lunch? In: Bshouty, N.H., Gentile, C. (eds) Learning Theory. COLT 2007. Lecture Notes in Computer Science (LNAI), vol. 4539. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72927-3_48

  • DOI: https://doi.org/10.1007/978-3-540-72927-3_48

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-72925-9

  • Online ISBN: 978-3-540-72927-3

  • eBook Packages: Computer Science, Computer Science (R0)
