
Double Weighted Low-Rank Representation and Its Efficient Implementation

  • Conference paper
  • First Online:
Advances in Knowledge Discovery and Data Mining (PAKDD 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11440)

Abstract

Existing low-rank representation (LRR) methods suffer from two limitations: the error distribution must be known a priori, and the leading rank components may be over-penalized. To overcome them, this paper proposes a new low-rank representation model, double weighted LRR (DWLRR), which applies two distinct weightings to the representation matrix. The first captures the varying distributions of the residuals in an adaptively learned weighting matrix, providing more flexible noise resistance. The second employs a parameterized rational penalty together with a weighting vector s to reflect the differing importance of the rank components, yielding a closer approximation to the intrinsic subspace structure. Moreover, we derive a computationally efficient algorithm based on a parallel updating scheme and an automatic thresholding operation. Comprehensive experiments on image clustering demonstrate the robustness and efficiency of DWLRR compared with other state-of-the-art models.
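
The model's equations are not reproduced on this page, so the following is only a minimal NumPy sketch of the two weighting ideas the abstract names, under explicit assumptions: the rational penalty is taken to be φ(σ) = γσ/(γ + σ), and the helper names (update_residual_weights, weighted_shrink) and all constants are hypothetical illustrations rather than the authors' formulation.

```python
# Minimal illustrative sketch (not the paper's algorithm): one weighted
# singular-value shrinkage step plus adaptive residual weights.
# Assumed penalty: phi(sigma) = gamma * sigma / (gamma + sigma).
import numpy as np

def update_residual_weights(E, eps=1e-6):
    # Adaptive residual weights: entries with large errors receive small
    # weights, so the loss can adapt to an unknown noise distribution.
    return 1.0 / (np.abs(E) + eps)

def weighted_shrink(Y, s, gamma, tau):
    # Shrink each singular value by the derivative of the weighted
    # rational penalty s_i * phi(sigma_i); values pushed below zero are
    # discarded, so leading components (small s_i) stay nearly intact.
    U, sig, Vt = np.linalg.svd(Y, full_matrices=False)
    grad = s * gamma**2 / (gamma + sig) ** 2      # weighted phi'(sigma)
    sig_new = np.maximum(sig - tau * grad, 0.0)   # automatic thresholding
    r = np.count_nonzero(sig_new)                 # surviving rank components
    return (U[:, :r] * sig_new[:r]) @ Vt[:r]

# Toy usage: denoise a rank-5 matrix; an ascending s penalizes trailing
# singular values far more than the leading ones.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))
X_noisy = X + 0.5 * rng.standard_normal(X.shape)
s = np.linspace(0.1, 1.0, 50)
Z = weighted_shrink(X_noisy, s, gamma=5.0, tau=20.0)
W = update_residual_weights(X_noisy - Z)  # would reweight the loss next round
print(np.linalg.matrix_rank(Z), W.shape)  # rank drops as the tail is zeroed
```

Because the shrinkage amount vanishes for large singular values, the sketch mirrors the abstract's motivation: the leading rank components are only mildly penalized, while components shrunk exactly to zero are removed outright, which is one plausible reading of the "automatic thresholding operation".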


Acknowledgements

This work is supported by the National Natural Science Foundation of China (61602413) and the Natural Science Foundation of Zhejiang Province (LY19F030016).

Author information

Corresponding author

Correspondence to Ping Yang.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF, 58 KB)

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Zheng, J., Lou, K., Yang, P., Chen, W., Wang, W. (2019). Double Weighted Low-Rank Representation and Its Efficient Implementation. In: Yang, Q., Zhou, Z.-H., Gong, Z., Zhang, M.-L., Huang, S.-J. (eds.) Advances in Knowledge Discovery and Data Mining. PAKDD 2019. Lecture Notes in Computer Science, vol. 11440. Springer, Cham. https://doi.org/10.1007/978-3-030-16145-3_44

  • DOI: https://doi.org/10.1007/978-3-030-16145-3_44

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-16144-6

  • Online ISBN: 978-3-030-16145-3

  • eBook Packages: Computer Science (R0)
