Abstract
Recovering a data matrix from a subset of its entries extends the ideas of compressed sensing and sparse approximation. This chapter introduces matrix completion and low-rank matrix recovery, and extends these ideas to tensor factorization and tensor completion.
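To give a flavor of the algorithms the chapter surveys, the nuclear-norm relaxation of matrix completion can be solved by singular value thresholding (SVT): alternately shrink the singular values of a dual variable and take a gradient step on the observed entries. The sketch below is illustrative, not the chapter's own implementation; the function name, default parameters, and stopping rule are assumptions chosen for a minimal self-contained example.

```python
import numpy as np

def svt_complete(M, mask, tau=None, delta=None, n_iters=500, tol=1e-4):
    """Complete a partially observed matrix by singular value thresholding.

    M    : array with (arbitrary values at) unobserved entries
    mask : boolean array, True where an entry of M is observed
    """
    m, n = M.shape
    if tau is None:
        tau = 5 * (m + n) / 2          # shrinkage threshold (heuristic)
    if delta is None:
        delta = 1.2 * M.size / mask.sum()  # step size ~ 1.2 / sampling rate
    Y = np.zeros_like(M, dtype=float)      # dual variable
    X = Y
    for _ in range(n_iters):
        # Shrink the singular values of Y by tau (proximal step)
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0)) @ Vt
        # Gradient step on the observed entries only
        resid = mask * (M - X)
        if np.linalg.norm(resid) / np.linalg.norm(mask * M) < tol:
            break
        Y += delta * resid
    return X
```

On a synthetic low-rank matrix with a random observation mask, the iterate fits the observed entries closely and fills in the rest with a low-rank estimate; the heuristic choices of `tau` and `delta` follow common SVT practice but may need tuning per problem.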
Copyright information
© 2019 Springer-Verlag London Ltd., part of Springer Nature
Cite this chapter
Du, K.-L., & Swamy, M. N. S. (2019). Matrix completion. In Neural Networks and Statistical Learning. London: Springer. https://doi.org/10.1007/978-1-4471-7452-3_19
Publisher Name: Springer, London
Print ISBN: 978-1-4471-7451-6
Online ISBN: 978-1-4471-7452-3
eBook Packages: Mathematics and Statistics