Matrix Completion

Chapter in Neural Networks and Statistical Learning

Abstract

The recovery of a data matrix from a subset of its entries is an extension of compressed sensing and sparse approximation. This chapter introduces matrix completion and matrix recovery. The ideas are also extended to tensor factorization and completion.


Author information

Correspondence to Ke-Lin Du.

Copyright information

© 2019 Springer-Verlag London Ltd., part of Springer Nature

Cite this chapter

Du, KL., Swamy, M.N.S. (2019). Matrix Completion. In: Neural Networks and Statistical Learning. Springer, London. https://doi.org/10.1007/978-1-4471-7452-3_19
