Sparse Principal Component Analysis via Rotation and Truncation

  • Chapter in: Advances in Principal Component Analysis

Abstract

This chapter begins with the motivation for sparse PCA: to improve the physical interpretation of the loadings. Second, we introduce the issues involved in the sparse PCA problem that are distinct from the ordinary PCA problem. Third, we briefly review some sparse PCA algorithms in the literature and comment on their limitations as well as problems left unresolved. Fourth, we introduce one of the state-of-the-art algorithms, SPCArt (Hu et al., IEEE Trans. Neural Networks Learn. Syst. 27(4):875–890, 2016), including its motivating idea, formulation, optimization solution, and performance analysis; along the way, we describe how SPCArt addresses the unresolved problems. Fifth, based on the Eckart-Young theorem, we provide a unified view of a series of sparse PCA algorithms, including SPCArt. Finally, we conclude with remarks.
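The rotation-and-truncation idea behind SPCArt [9] can be sketched as follows: start from the ordinary PCA loadings, then alternate between hard-truncating the rotated loadings and re-fitting the rotation via an orthogonal Procrustes step. The sketch below is an illustrative simplification, not the authors' implementation; the threshold `lam`, the fixed iteration count, and the choice of hard thresholding are assumptions (SPCArt also supports other truncation types).

```python
import numpy as np

def spcart_sketch(X, r, lam=0.3, n_iter=50):
    """Illustrative rotation-and-truncation iteration in the spirit of
    SPCArt [9]. `lam` and the fixed iteration count are assumptions."""
    # PCA loadings: top-r right singular vectors of the centered data.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    A = Vt[:r].T                          # p x r, orthonormal columns
    R = np.eye(r)                         # rotation, initialized to identity
    for _ in range(n_iter):
        # Truncation step: hard-threshold the rotated loadings, then
        # re-normalize each surviving column to unit length.
        Z = A @ R
        Z[np.abs(Z) < lam] = 0.0
        norms = np.linalg.norm(Z, axis=0)
        Z[:, norms > 0] /= norms[norms > 0]
        # Rotation step: orthogonal Procrustes solution maximizing
        # tr(R^T A^T Z), obtained from the SVD of A^T Z.
        U, _, Wt = np.linalg.svd(A.T @ Z)
        R = U @ Wt
    return Z                              # sparse, unit-norm loadings
```

Both subproblems have closed-form solutions (thresholding and an SVD of a small r-by-r matrix), which is what makes the alternating scheme cheap per iteration.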


Notes

  1.

    Theorem 13 is specific to SPCArt; it concerns the important quantity of explained variance. The other results apply to more general situations: Propositions 6–11 apply to any orthonormal Z, and Theorem 12 applies to any matrix X. To obtain results specific to SPCArt, some assumptions on the data distribution are needed.

  2.

    [21] did implement this version for rSVD, but using a heuristic approach.

References

  1. Amini, A., Wainwright, M.: High-dimensional analysis of semidefinite relaxations for sparse principal components. Ann. Stat. 37(5B), 2877–2921 (2009)

  2. Cadima, J., Jolliffe, I.: Loadings and correlations in the interpretation of principal components. J. Appl. Stat. 22(2), 203–214 (1995)

  3. d’Aspremont, A., Bach, F., Ghaoui, L.: Optimal solutions for sparse principal component analysis. J. Mach. Learn. Res. 9, 1269–1294 (2008)

  4. d’Aspremont, A., El Ghaoui, L., Jordan, M., Lanckriet, G.: A direct formulation for sparse PCA using semidefinite programming. SIAM Rev. 49(3), 434–448 (2007)

  5. Donoho, D.L.: For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution. Commun. Pure Appl. Math. 59(6), 797–829 (2006)

  6. Eckart, C., Young, G.: The approximation of one matrix by another of lower rank. Psychometrika 1(3), 211–218 (1936)

  7. Fan, K.: A generalization of Tychonoff’s fixed point theorem. Math. Ann. 142(3), 305–310 (1961)

  8. Golub, G., Van Loan, C.: Matrix Computations, vol. 3. Johns Hopkins University Press, Baltimore (1996)

  9. Hu, Z., Pan, G., Wang, Y., Wu, Z.: Sparse principal component analysis via rotation and truncation. IEEE Trans. Neural Networks Learn. Syst. 27(4), 875–890 (2016)

  10. Jolliffe, I.: Principal Component Analysis. Springer, Berlin (2002)

  11. Jolliffe, I., Trendafilov, N., Uddin, M.: A modified principal component technique based on the lasso. J. Comput. Graphical Stat. 12(3), 531–547 (2003)

  12. Jolliffe, I.T.: Rotation of ill-defined principal components. Appl. Stat., pp. 139–147 (1989)

  13. Journée, M., Nesterov, Y., Richtárik, P., Sepulchre, R.: Generalized power method for sparse principal component analysis. J. Mach. Learn. Res. 11, 517–553 (2010)

  14. Lai, Z., Xu, Y., Chen, Q., Yang, J., Zhang, D.: Multilinear sparse principal component analysis. IEEE Trans. Neural Networks Learn. Syst. 25(10), 1942–1950 (2014)

  15. Lu, Z., Zhang, Y.: An augmented Lagrangian approach for sparse principal component analysis. Math. Program. 135(1–2), 149–193 (2012)

  16. Ma, Z.: Sparse principal component analysis and iterative thresholding. Ann. Stat. 41(2), 772–801 (2013)

  17. Mackey, L.: Deflation methods for sparse PCA. Adv. Neural Inf. Process. Syst. 21, 1017–1024 (2009)

  18. Moghaddam, B., Weiss, Y., Avidan, S.: Generalized spectral bounds for sparse LDA. In: Proceedings of the 23rd International Conference on Machine Learning, pp. 641–648. ACM, New York (2006)

  19. Moghaddam, B., Weiss, Y., Avidan, S.: Spectral bounds for sparse PCA: exact and greedy algorithms. Adv. Neural Inf. Process. Syst. 18, 915 (2006)

  20. Paul, D., Johnstone, I.M.: Augmented sparse principal component analysis for high dimensional data. arXiv preprint arXiv:1202.1242 (2012)

  21. Shen, H., Huang, J.: Sparse principal component analysis via regularized low rank matrix approximation. J. Multivar. Anal. 99(6), 1015–1034 (2008)

  22. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. Royal Stat. Soc. Series B (Methodol.), pp. 267–288 (1996)

  23. Tseng, P.: Convergence of a block coordinate descent method for nondifferentiable minimization. J. Optim. Theory Appl. 109(3), 475–494 (2001)

  24. Witten, D., Tibshirani, R., Hastie, T.: A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics 10(3), 515 (2009)

  25. Yuan, X., Zhang, T.: Truncated power method for sparse eigenvalue problems. J. Mach. Learn. Res. 14, 899–925 (2013)

  26. Zhang, Y., d’Aspremont, A., Ghaoui, L.: Sparse PCA: convex relaxations, algorithms and applications. In: Handbook on Semidefinite, Conic and Polynomial Optimization, pp. 915–940 (2012)

  27. Zhang, Y., Ghaoui, L.E.: Large-scale sparse principal component analysis with application to text data. In: Advances in Neural Information Processing Systems (2011)

  28. Zou, H., Hastie, T.: Regularization and variable selection via the elastic net. J. Royal Stat. Soc.: Series B (Stat. Methodol.) 67(2), 301–320 (2005)

  29. Zou, H., Hastie, T., Tibshirani, R.: Sparse principal component analysis. J. Comput. Graphical Stat. 15(2), 265–286 (2006)


Corresponding author

Correspondence to Gang Pan.


Copyright information

© 2018 Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Hu, Z., Pan, G., Wang, Y., Wu, Z. (2018). Sparse Principal Component Analysis via Rotation and Truncation. In: Naik, G. (eds) Advances in Principal Component Analysis. Springer, Singapore. https://doi.org/10.1007/978-981-10-6704-4_1


  • DOI: https://doi.org/10.1007/978-981-10-6704-4_1


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-6703-7

  • Online ISBN: 978-981-10-6704-4

  • eBook Packages: Engineering, Engineering (R0)
