DC Approximation Approach for ℓ0-minimization in Compressed Sensing

  • Conference paper
Advanced Computational Methods for Knowledge Engineering

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 358)

Abstract

In this paper, we study the effectiveness of some non-convex approximations of the ℓ0-norm in compressed sensing. Using four continuous non-convex approximations of the ℓ0-norm, we reformulate the compressed sensing problem as DC (Difference of Convex functions) programs and apply DCA (DC Algorithm) to solve them. Computational experiments demonstrate the efficiency and scalability of our method in comparison with other non-convex approaches such as iterative reweighted schemes (including reweighted ℓ1 and iteratively reweighted least-squares algorithms).
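To illustrate the kind of scheme such a reformulation leads to, the following is a minimal Python sketch of DCA applied to one common ℓ0 surrogate, the exponential approximation φ(t) = 1 − exp(−θ|t|), in a penalized least-squares setting. The specific DC decomposition, the parameters θ and ρ, the iteration counts, and the use of ISTA for the convex subproblem are illustrative assumptions, not necessarily the exact formulation or solver used in the paper; each DCA step reduces to a weighted ℓ1-regularized problem, which is the structure the abstract alludes to.

```python
# Sketch (assumptions noted above): DCA for compressed sensing with the
# exponential approximation of the l0-norm,
#     min_x  sum_i (1 - exp(-theta*|x_i|)) + (rho/2)*||A x - b||^2.
# DC decomposition: g(x) = theta*||x||_1 + (rho/2)*||A x - b||^2   (convex)
#                   h(x) = sum_i (theta*|x_i| - 1 + exp(-theta*|x_i|))  (convex)
# Each DCA iteration linearizes h at the current point and solves the resulting
# convex l1-regularized subproblem, here approximately by proximal gradient (ISTA).

import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau*||.||_1 (component-wise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def dca_exp_l0(A, b, theta=5.0, rho=10.0, outer_iters=30, inner_iters=200):
    m, n = A.shape
    x = np.zeros(n)
    L = rho * np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth gradient
    step = 1.0 / L
    for _ in range(outer_iters):
        # Subgradient of h at the current iterate (0 is a valid choice at x_i = 0).
        y = theta * np.sign(x) * (1.0 - np.exp(-theta * np.abs(x)))
        # Convex subproblem: min_z theta*||z||_1 + (rho/2)*||Az-b||^2 - <y, z>.
        z = x.copy()
        for _ in range(inner_iters):
            grad = rho * A.T @ (A @ z - b) - y    # gradient of the smooth part
            z = soft_threshold(z - step * grad, step * theta)
        x = z
    return x

if __name__ == "__main__":
    # Toy instance: recover a 10-sparse signal of length 256 from 80 measurements.
    rng = np.random.default_rng(0)
    n, m, k = 256, 80, 10
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    b = A @ x_true
    x_hat = dca_exp_l0(A, b)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Note that with this decomposition the linearization of h cancels part of the ℓ1 weight on large entries (the effective weight on |x_i| is θ·exp(−θ|x_i|)), so the scheme behaves like an adaptively reweighted ℓ1 method, which is why comparison against reweighted ℓ1 and iteratively reweighted least-squares baselines is natural.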

Author information

Correspondence to Thi Bich Thuy Nguyen.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Nguyen, T.B.T., Le Thi, H.A., Le, H.M., Vo, X.T. (2015). DC Approximation Approach for ℓ0-minimization in Compressed Sensing. In: Le Thi, H., Nguyen, N., Do, T. (eds) Advanced Computational Methods for Knowledge Engineering. Advances in Intelligent Systems and Computing, vol 358. Springer, Cham. https://doi.org/10.1007/978-3-319-17996-4_4

  • DOI: https://doi.org/10.1007/978-3-319-17996-4_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-17995-7

  • Online ISBN: 978-3-319-17996-4

  • eBook Packages: Engineering, Engineering (R0)
