Accuracy and Specificity Trade-off in \(k\)-nearest Neighbors Classification

  • Conference paper
Computer Vision -- ACCV 2014 (ACCV 2014)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 9004)


Abstract

The \(k\)-NN rule is a simple, flexible and widely used non-parametric decision method, closely connected to many problems in image classification and retrieval, such as annotation and content-based search. As the number of classes grows and finer-grained classification is required (e.g. a specific dog breed), high accuracy is often unattainable under such challenging conditions, and the system will frequently suggest a wrong label. Predicting a broader concept (e.g. dog), however, is much more reliable and still useful in practice, so sacrificing some specificity for a more secure prediction is often desirable. This problem has recently been posed in terms of an accuracy-specificity trade-off. In this paper we study the accuracy-specificity trade-off in \(k\)-NN classification, evaluating the impact of related techniques (posterior probability estimation and metric learning). Experimental results show that a suitable combination of \(k\)-NN and metric learning can be very effective and achieve good performance.
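The hedging idea sketched in the abstract can be illustrated with a minimal example. This is not the authors' implementation: the two-level hierarchy, the vote-fraction posterior estimate, the confidence threshold, and the plain Euclidean distance (no metric learning) are all simplifying assumptions made here for illustration.

```python
from collections import Counter

import numpy as np

# Hypothetical two-level hierarchy: leaf class -> broader parent concept.
PARENT = {"husky": "dog", "beagle": "dog", "tabby": "cat", "siamese": "cat"}

def knn_hedged_predict(X_train, y_train, x, k=5, threshold=0.8):
    """Predict a leaf label, but hedge to the parent concept when the
    k-NN posterior estimate (vote fraction) falls below `threshold`."""
    # Euclidean distances to all training points (plain k-NN, no learned metric).
    d = np.linalg.norm(X_train - x, axis=1)
    neighbors = [y_train[i] for i in np.argsort(d)[:k]]
    votes = Counter(neighbors)

    # Crude posterior estimate: fraction of the k neighbors voting for each leaf.
    leaf, count = votes.most_common(1)[0]
    if count / k >= threshold:
        return leaf  # the specific prediction is reliable enough
    # Otherwise aggregate the leaf votes by parent and predict the broader concept.
    parent_votes = Counter()
    for label, c in votes.items():
        parent_votes[PARENT[label]] += c
    return parent_votes.most_common(1)[0][0]
```

Lowering the threshold makes the classifier more specific but less accurate; raising it trades specificity for more secure, broader predictions, which is the trade-off studied in the paper.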


Notes

  1. The ILSVRC65 dataset, hierarchy and the DARTS source code are available at http://www.image-net.org/projects/hedging/.

  2. http://www.cse.wustl.edu/~kilian/code/lmnn/lmnn.html.

  3. http://code.google.com/p/boosting/.

  4. http://www.cs.utexas.edu/~pjain/itml/.

  5. In [3] SVM achieves higher classification accuracy using spatial pyramids and 100K-dim features, in contrast to the 50-dim features (no spatial pyramid) used in our experiments.

References

  1. Fergus, R., Bernal, H., Weiss, Y., Torralba, A.: Semantic label sharing for learning with many categories. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010, Part I. LNCS, vol. 6311, pp. 762–775. Springer, Heidelberg (2010)


  2. Griffin, G., Perona, P.: Learning and using taxonomies for fast visual categorization. In: CVPR (2008)


  3. Deng, J., Krause, J., Berg, A.C., Li, F.F.: Hedging your bets: optimizing accuracy-specificity trade-offs in large scale visual recognition. In: CVPR, pp. 3450–3457 (2012)


  4. Hwang, S.J., Grauman, K., Sha, F.: Learning a tree of metrics with disjoint visual features. In: NIPS, pp. 621–629 (2011)


  5. Weinberger, K.Q., Saul, L.K.: Distance metric learning for large margin nearest neighbor classification. JMLR 10, 207–244 (2009)


  6. Shen, C., Kim, J., Wang, L., van den Hengel, A.: Positive semidefinite metric learning using boosting-like algorithms. JMLR 13, 1007–1036 (2012)


  7. Kulis, B.: Metric learning: a survey. Found. Trends Mach. Learn. 5, 287–364 (2013)


  8. Fukunaga, K., Hostetler, L.: k-nearest-neighbor Bayes-risk estimation. IEEE Trans. Inform. Theory 21, 285–293 (1975)


  9. Atiya, A.F.: Estimating the posterior probabilities using the k-nearest neighbor rule. Neural Comput. 17, 731–740 (2005)


  10. Platt, J.: Probabilistic outputs for support vector machines and comparison to regularized likelihood methods. In: Smola, A.J., Bartlett, P., Scholkopf, B., Schuurmans, D. (eds.) Advances in Large Margin Classifiers, pp. 61–74. MIT Press, Cambridge (1999)


  11. Davis, J.V., Kulis, B., Jain, P., Sra, S., Dhillon, I.S.: Information-theoretic metric learning. In: ICML, pp. 209–216 (2007)


  12. Wang, J., Yang, J., Yu, K., Lv, F., Huang, T.S., Gong, Y.: Locality-constrained linear coding for image classification. In: CVPR, pp. 3360–3367 (2010)


  13. Budanitsky, A., Hirst, G.: Evaluating WordNet-based measures of lexical semantic relatedness. Comput. Linguist. 32, 13–47 (2006)



Acknowledgement

This work was supported in part by the National Natural Science Foundation of China: 61322212, 61035001 and 61350110237, in part by the Key Technologies R&D Program of China: 2012BAH18B02, in part by National Hi-Tech Development Program (863 Program) of China: 2014AA015202, and in part by the Chinese Academy of Sciences Fellowships for Young International Scientists: 2011Y1GB05.

Author information

Correspondence to Luis Herranz.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Herranz, L., Jiang, S. (2015). Accuracy and Specificity Trade-off in \(k\)-nearest Neighbors Classification. In: Cremers, D., Reid, I., Saito, H., Yang, M.H. (eds) Computer Vision -- ACCV 2014. ACCV 2014. Lecture Notes in Computer Science, vol 9004. Springer, Cham. https://doi.org/10.1007/978-3-319-16808-1_10

  • DOI: https://doi.org/10.1007/978-3-319-16808-1_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-16807-4

  • Online ISBN: 978-3-319-16808-1

