
Factored Trace Lasso Based Linear Regression Methods: Optimizations and Applications

  • Conference paper
  • First Online:
Cognitive Systems and Signal Processing (ICCSIP 2020)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1397)


Abstract

Matrix trace lasso regularized convex \(\ell _p\)-norm regression methods with \(p=1, 2\) usually incur high computational complexity because they require the singular value decomposition (SVD) of large matrices in big-data and information-processing settings. By factoring the matrix trace lasso into the squared sum of two Frobenius norms, this work studies the solutions of both adaptive sparse representation (ASR) and correlation adaptive subspace segmentation (CASS). The derived models involve multi-variable nonconvex functions with at least two equality constraints. To solve them efficiently, we devise nonconvex alternating direction method of multipliers (NADMM) algorithms with a convergence analysis satisfying the Karush-Kuhn-Tucker (KKT) conditions. Finally, numerical experiments on subspace clustering show that the proposed methods consume less time than CASS while achieving performance close to that of existing segmentation methods such as SSC, LRR, LSR, and CASS.
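The factoring step in the abstract rests on the variational characterization of the nuclear norm (Srebro et al., reference 13): \(\Vert Z\Vert _* = \min _{UV=Z} \frac{1}{2}(\Vert U\Vert _F^2 + \Vert V\Vert _F^2)\), attained by the balanced factorization built from the SVD \(Z = P\Sigma Q^\top \). A minimal NumPy sketch of that identity (variable names are illustrative, not taken from the paper):

```python
import numpy as np

# Variational characterization of the nuclear norm:
#   ||Z||_* = min over {U, V : U V = Z} of (1/2) (||U||_F^2 + ||V||_F^2),
# with the minimum attained by U = P sqrt(S), V = sqrt(S) Q^T from Z = P S Q^T.

rng = np.random.default_rng(0)
Z = rng.standard_normal((6, 4))

P, s, Qt = np.linalg.svd(Z, full_matrices=False)
nuclear_norm = s.sum()

# Balanced SVD-based factorization attains the bound with equality.
U = P * np.sqrt(s)                 # P @ diag(sqrt(s)), scales columns of P
V = np.sqrt(s)[:, None] * Qt       # diag(sqrt(s)) @ Q^T, scales rows of Q^T
bound_opt = 0.5 * (np.linalg.norm(U, "fro") ** 2 + np.linalg.norm(V, "fro") ** 2)

assert np.allclose(U @ V, Z)
assert np.isclose(bound_opt, nuclear_norm)

# Any other factorization Z = U' V' only upper-bounds the nuclear norm.
G = rng.standard_normal((4, 4)) + 3 * np.eye(4)   # well-conditioned mixing matrix
U2, V2 = U @ G, np.linalg.solve(G, V)
bound_other = 0.5 * (np.linalg.norm(U2, "fro") ** 2 + np.linalg.norm(V2, "fro") ** 2)
assert np.allclose(U2 @ V2, Z)
assert bound_other >= nuclear_norm - 1e-9
```

Replacing the nuclear-norm term with the smooth Frobenius surrogate is what lets the paper's NADMM iterations avoid a full SVD of the large trace-lasso matrix at every step.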


Notes

  1. The detailed convergence proofs of the optimization algorithms for the cases \(p=1\) (\(\ell _1\)FTLR) and \(p=2\) (\(\ell _2\)FTLR) will appear in the extended manuscript.

  2. https://khoury.northeastern.edu/home/eelhami/codes.htm.

  3. https://zhouchenlin.github.io/lrr(motion-face).zip.

  4. https://github.com/canyilu/LSR.

  5. https://github.com/canyilu/LibADMM.

References

  1. Grave, E., Obozinski, G., Bach, F.: Trace Lasso: a trace norm regularization for correlated designs. In: Proceedings of Neural Information Processing Systems (NeurIPS), pp. 2187–2195 (2011)

  2. Shang, F., Cheng, J., Liu, Y., Luo, Z., Lin, Z.: Bilinear factor matrix norm minimization for robust PCA: algorithms and applications. IEEE Trans. Pattern Anal. Mach. Intell. 40(9), 2066–2080 (2018)

  3. Zhang, H., Yang, J., Xie, J., Qian, J., Zhang, B.: Weighted sparse coding regularized nonconvex matrix regression for robust face recognition. Inf. Sci. 394–395, 1–17 (2017)

  4. Yang, J., Luo, L., Qian, J., Tai, Y., Zhang, F., Xu, Y.: Nuclear norm based matrix regression with applications to face recognition with occlusion and illumination changes. IEEE Trans. Pattern Anal. Mach. Intell. 39(1), 156–171 (2017)

  5. Bouwmans, T., Sobral, A., Javed, S., Jung, S.K., Zahzah, E.H.: Decomposition into low-rank plus additive matrices for background/foreground separation: a review for a comparative evaluation with a large-scale dataset. Comput. Sci. Rev. 23, 1–71 (2017)

  6. Wright, J., Yang, A., Ganesh, A., Sastry, S., Ma, Y.: Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 31(2), 210–227 (2009)

  7. Zhang, L., Yang, M., Feng, X.: Sparse representation or collaborative representation: which helps face recognition? In: Proceedings of the International Conference on Computer Vision (ICCV), pp. 471–478 (2011)

  8. Elhamifar, E., Vidal, R.: Sparse subspace clustering: algorithm, theory, and applications. IEEE Trans. Pattern Anal. Mach. Intell. 35(11), 2765–2781 (2013)

  9. Lu, C.-Y., Min, H., Zhao, Z.-Q., Zhu, L., Huang, D.-S., Yan, S.: Robust and efficient subspace segmentation via least squares regression. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) ECCV 2012. LNCS, vol. 7578, pp. 347–360. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33786-4_26

  10. Wang, J., Lu, C., Wang, M., Li, P., Yan, S., Hu, X.: Robust face recognition via adaptive sparse representation. IEEE Trans. Cybern. 44(12), 2368–2378 (2014)

  11. Lu, C., Feng, J., Lin, Z., Yan, S.: Correlation adaptive subspace segmentation by trace lasso. In: Proceedings of the International Conference on Computer Vision (ICCV), pp. 1345–1352 (2013)

  12. Liu, G., Lin, Z., Yan, S., Sun, J., Yu, Y., Ma, Y.: Robust recovery of subspace structures by low-rank representation. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 171–184 (2013)

  13. Srebro, N., Rennie, J., Jaakkola, T.S.: Maximum-margin matrix factorization. In: Proceedings of Neural Information Processing Systems (NeurIPS), pp. 1329–1336 (2004)

  14. Zhang, H., Yang, J., Shang, F., Gong, C., Zhang, Z.: LRR for subspace segmentation via tractable Schatten-\(p\) norm minimization and factorization. IEEE Trans. Cybern. 49(5), 1722–1734 (2019)

  15. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)

  16. Lin, Z., Chen, M., Wu, L., Ma, Y.: The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv preprint arXiv:1009.5055 (2010)

  17. Lin, Z., Liu, R., Su, Z.: Linearized alternating direction method with adaptive penalty for low-rank representation. In: Proceedings of Neural Information Processing Systems (NeurIPS), pp. 612–620 (2011)

  18. Liu, R., Lin, Z., Su, Z.: Linearized alternating direction method with parallel splitting and adaptive penalty for separable convex programs in machine learning. In: Proceedings of the Asian Conference on Machine Learning (ACML), pp. 116–132 (2013)

  19. Chen, C., He, B., Ye, Y., Yuan, X.: The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent. Math. Program. 155, 57–79 (2014). https://doi.org/10.1007/s10107-014-0826-5

  20. Lin, T., Ma, S., Zhang, S.: On the global linear convergence of the ADMM with multiblock variables. SIAM J. Optim. 25(3), 1478–1497 (2015)


Acknowledgment

The authors would like to thank the anonymous reviewers for their valuable comments. This work was supported in part by the National Natural Science Fund for Distinguished Young Scholars under Grant 61725301, in part by the National Natural Science Foundation of China (Major Program) under Grant 61590923, in part by the China Postdoctoral Science Foundation under Grants 2019M651415 and 2020T130191, in part by the National Science Fund of China under Grants 61973124 and 61906067, in part by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China under Grant 19KJB510022, and in part by the Research Start-up Funds for the Introduction of High-level Talents at Jiangsu Police Institute under Grant JSPIGKZ.

Author information


Corresponding author

Correspondence to Wenli Du.


Copyright information

© 2021 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Zhang, H., Du, W., Liu, X., Zhang, B., Qian, F. (2021). Factored Trace Lasso Based Linear Regression Methods: Optimizations and Applications. In: Sun, F., Liu, H., Fang, B. (eds) Cognitive Systems and Signal Processing. ICCSIP 2020. Communications in Computer and Information Science, vol 1397. Springer, Singapore. https://doi.org/10.1007/978-981-16-2336-3_11

  • DOI: https://doi.org/10.1007/978-981-16-2336-3_11

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-16-2335-6

  • Online ISBN: 978-981-16-2336-3

  • eBook Packages: Computer Science, Computer Science (R0)
