Half-Quadratic Image Restoration with a Non-parallelism Constraint


Abstract

The problem of restoring images from blur and noise is studied. By regularization techniques, a solution of the problem is found as the minimum of a primal energy function formed by two terms: the first measures faithfulness to the data, and the second encodes the smoothness constraints. We require the reconstructed images to be piecewise continuous with thin edges. Associated with the primal energy function is a dual energy function, which treats discontinuities implicitly. We present a unified approach to the duality theory that also accounts for the non-parallelism constraint, and we construct a convex dual energy function which imposes this constraint. To reconstruct images with Boolean discontinuities, the proposed energy function can be used as the initial approximation in a graduated non-convexity (GNC) algorithm. The experimental results confirm that this technique inhibits the formation of parallel lines.


References

  1. Ahookhosh, M., Amini, K., Bahrami, S.: A class of nonmonotone Armijo-type line search method for unconstrained optimization. Optimization 61(4), 387–404 (2012)

  2. Allain, M., Idier, J., Goussard, Y.: On global and local convergence of half-quadratic algorithms. IEEE Trans. Image Process. 15, 1130–1142 (2006)

  3. Antoniadis, A., Gijbels, I., Nikolova, M.: Penalized likelihood regression for generalized linear models with non-quadratic penalties. Ann. Inst. Stat. Math. 63, 585–615 (2011)

  4. Armijo, L.: Minimization of functions having Lipschitz continuous first partial derivatives. Pac. J. Math. 16(1), 1–3 (1966)

  5. Åström, F.: Color image regularization via channel mixing and half quadratic minimization. In: 2016 IEEE International Conference on Image Processing (ICIP), pp. 4007–4011 (2016)

  6. Aubert, G., Kornprobst, P.: Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations, 2nd edn. Springer, New York (2006)

  7. Bai, X., Zhou, F., Xue, B.: Infrared image enhancement through contrast enhancement by using multiscale new top-hat transform. Infrared Phys. Technol. 54(2), 61–69 (2011)

  8. Bauschke, H.H., Lucet, Y.: What is a Fenchel conjugate? Not. Am. Math. Soc. 59(1), 44–46 (2012)

  9. Bedini, L., Gerace, I., Salerno, E., Tonazzini, A.: Models and algorithms for edge-preserving image reconstruction. Adv. Imaging Electron Phys. 97, 86–189 (1996)

  10. Bedini, L., Gerace, I., Tonazzini, A.: A deterministic algorithm for reconstructing images with interacting discontinuities. CVGIP Graph. Models Image Process. 56, 109–123 (1994)

  11. Bergmann, R., Chan, R.H., Hielscher, R., Persch, J., Steidl, G.: Restoration of manifold-valued images by half-quadratic minimization. Inverse Probl. Imaging 10, 281–304 (2016)

  12. Bertero, M., Boccacci, P.: Introduction to Inverse Problems in Imaging. Institute of Physics Publishing, Bristol (1998)

  13. Black, M., Rangarajan, A.: On the unification of line processes, outlier rejection, and robust statistics with applications in early vision. Int. J. Comput. Vis. 19(1), 57–91 (1996)

  14. Blake, A.: Comparison of the efficiency of deterministic and stochastic algorithms for visual reconstruction. IEEE Trans. Pattern Anal. Mach. Intell. 11, 2–12 (1989)

  15. Blake, A., Zisserman, A.: Visual Reconstruction. MIT Press, Cambridge (1987)

  16. Boccuto, A., Gerace, I., Pucci, P.: Convex approximation technique for interacting line elements deblurring: a new approach. J. Math. Imaging Vis. 44(2), 168–184 (2012)

  17. Boukis, C., Mandic, D.M., Constantinides, A.G., Polymenakos, L.C.: A modified Armijo rule for the online selection of learning rate of the LMS algorithm. Digit. Signal Process. 20, 630–639 (2010)

  18. Borwein, J.M., Vanderwerff, J.D.: Convex Functions: Constructions, Characterizations and Counterexamples. Cambridge University Press, Cambridge (2010)

  19. Bouman, C., Sauer, K.: A generalized Gaussian image model for edge-preserving MAP estimation. IEEE Trans. Image Process. 2(3), 296–310 (1993)

  20. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)

  21. Brézis, H.: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Springer, New York (2011)

  22. Cavalagli, N., Cluni, F., Gusella, V.: Evaluation of a statistically equivalent periodic unit cell for a quasi-periodic masonry. Int. J. Solids Struct. 50, 4226–4240 (2013)

  23. Charbonnier, P., Blanc-Féraud, L., Aubert, G., Barlaud, M.: Deterministic edge-preserving regularization in computed imaging. IEEE Trans. Image Process. 6, 298–311 (1997)

  24. Chen, P.-Y., Selesnick, I.W.: Group-sparse signal denoising: non-convex regularization, convex optimization. IEEE Trans. Signal Process. 62, 3464–3478 (2014)

  25. Chen, X., Ng, M.K., Zhang, C.: Non-Lipschitz \(l_p\)-regularization and box constrained model for image restoration. IEEE Trans. Image Process. 21(12), 4709–4721 (2012)

  26. Cluni, F., Costarelli, D., Minotti, A.M., Vinti, G.: Applications of sampling Kantorovich operators to thermographic images for seismic engineering. J. Comput. Anal. Appl. 19(4), 602–617 (2015)

  27. Cluni, F., Costarelli, D., Minotti, A.M., Vinti, G.: Enhancement of thermographic images as tool for structural analysis in earthquake engineering. NDT & E Int. 70, 60–72 (2015)

  28. Cluni, F., Costarelli, D., Minotti, A.M., Vinti, G.: Applications of approximation theory to thermographic images in earthquake engineering. Proc. Appl. Math. Mech. 15, 663–664 (2015)

  29. Coll, B., Duran, J., Sbert, C.: An algorithm for nonconvex functional minimization and applications to image restoration. In: 2014 IEEE International Conference on Image Processing (ICIP), pp. 4547–4551 (2014)

  30. Costarelli, D., Seracini, M., Vinti, G.: Digital image processing algorithms for diagnosis in arterial diseases. Proc. Appl. Math. Mech. 15, 669–670 (2015)

  31. Costarelli, D., Vinti, G.: Approximation by nonlinear multivariate sampling Kantorovich-type operators and applications to image processing. Numer. Funct. Anal. Optim. 34(8), 819–844 (2013)

  32. Demoment, G.: Image reconstruction and restoration: overview of common estimation structures and problems. IEEE Trans. Acoust. Speech Signal Process. 37, 2024–2036 (1989)

  33. Ding, Y., Selesnick, I.W.: Artifact-free wavelet denoising: non-convex sparse regularization, convex optimization. IEEE Signal Process. Lett. 22(9), 1364–1368 (2015)

  34. Geman, D., Reynolds, G.: Constrained restoration and the recovery of discontinuities. IEEE Trans. Pattern Anal. Mach. Intell. 14, 367–383 (1992)

  35. Geman, S., Geman, D.: Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. 6, 721–740 (1984)

  36. Geman, D., Yang, C.: Nonlinear image recovery with half-quadratic regularization. IEEE Trans. Image Process. 4(7), 932–946 (1995)

  37. Gerace, I., Martinelli, F.: On regularization parameters estimation in edge-preserving image reconstruction. LNCS 3196, 1170–1183 (2008)

  38. Gerace, I., Pandolfi, R., Pucci, P.: A new GNC algorithm for spatial dithering. In: Proceedings of the International TICSP Workshop on Spectral Methods and Multirate Signal Processing, SMMSP2003, Barcelona, Spain, September 13–14, 2003, pp. 109–114 (2003)

  39. Hadamard, J.: Lectures on Cauchy’s Problem in Linear Partial Differential Equations. Yale University Press, New Haven (1923)

  40. He, R., Zheng, W.-S., Tan, T., Sun, Z.: Half-quadratic-based iterative minimization for robust sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 36(2), 261–275 (2014)

  41. Horn, R.A., Johnson, C.R.: Matrix Analysis, 2nd edn. Cambridge University Press, Cambridge (2013)

  42. Huang, Y.-M., Lu, D.-Y.: A preconditioned conjugate gradient method for multiplicative half-quadratic image restoration. Appl. Math. Comput. 219, 6556–6564 (2013)

  43. Idier, J.: Convex half-quadratic criteria and interacting auxiliary variables for image restoration. IEEE Trans. Image Process. 10(7), 1001–1009 (2001)

  44. Jähne, B.: Digital Image Processing. Springer, Berlin (2002)

  45. Lanza, A., Morigi, S., Selesnick, I.W., Sgallari, F.: Nonconvex nonsmooth optimization via convex–nonconvex majorization-minimization. Numer. Math. 1, 1–39 (2016)

  46. Lanza, A., Morigi, S., Sgallari, F.: Convex image denoising via non-convex regularization. Scale Space Var. Methods Comput. Vis. 9087, 666–677 (2015)

  47. Lanza, A., Morigi, S., Sgallari, F.: Convex image denoising via non-convex regularization with parameter selection. J. Math. Imaging Vis. 56, 195–220 (2016)

  48. Laporte, L., Flamary, R., Canu, S., Déjean, S., Mothe, J.: Nonconvex regularizations for feature selection in ranking with sparse SVM. IEEE Trans. Neural Netw. Learn. Syst. 25(6), 1118–1130 (2014)

  49. Liu, X.-G., Gao, X.-B.: An improvement for the GNC method of nonconvex nonsmooth image restoration. Appl. Mech. Mater. 380–384, 1664–1667 (2013)

  50. Liu, X.-G., Gao, X.-B., Xue, Q.: Image restoration combining Tikhonov with different order nonconvex nonsmooth regularizations. In: 2013 Ninth International Conference on Computational Intelligence and Security, pp. 250–254 (2013)

  51. Liu, X.-G., Gao, X.-B.: A method based on the GNC and augmented Lagrangian duality for nonconvex nonsmooth image restoration. Acta Electron. Sin. 42(2), 264–271 (2014)

  52. Marroquin, J., Mitter, S., Poggio, T.: Probabilistic solution of ill-posed problems in computational vision. J. Am. Stat. Assoc. 82, 76–89 (1987)

  53. Mobahi, H., Fisher, J.W., III: A theoretical analysis of optimization by Gaussian continuation. In: Wong, W.-K., Lowd, D. (eds.) Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, USA, January 25–30, 2015, pp. 1205–1211 (2015)

  54. Ni, C., Li, Q., Xia, L.Z.: A novel method of infrared image denoising and edge enhancement. Signal Process. 88(6), 1606–1614 (2008)

  55. Nikolova, M.: Markovian reconstruction using a GNC approach. IEEE Trans. Image Process. 8, 1204–1220 (1999)

  56. Nikolova, M.: Analysis of the recovery of edges in images and signals by minimizing nonconvex regularized least-squares. Multiscale Model. Simul. 4(3), 960–991 (2005)

  57. Nikolova, M.: Analytical bounds on the minimizers of (nonconvex) regularized least-squares. Inverse Probl. Imaging 1(4), 661–677 (2007)

  58. Nikolova, M., Chan, R.H.: The equivalence of half-quadratic minimization and the gradient linearization iteration. IEEE Trans. Image Process. 16(6), 1623–1627 (2007)

  59. Nikolova, M., Ng, M.K.: Analysis of half-quadratic minimization methods for signal and image recovery. SIAM J. Sci. Comput. 27(3), 937–966 (2005)

  60. Nikolova, M., Ng, M.K., Tam, C.-P.: Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction. IEEE Trans. Image Process. 19(12), 3073–3088 (2010)

  61. Nikolova, M., Ng, M.K., Tam, C.-P.: On \(\ell _1\) data fitting and concave regularization for image recovery. SIAM J. Sci. Comput. 35(1), A397–A430 (2013)

  62. Nikolova, M., Ng, M.K., Zhang, S., Ching, W.-K.: Efficient reconstruction of piecewise constant images using nonsmooth nonconvex minimization. SIAM J. Imaging Sci. 1(1), 2–25 (2008)

  63. Nocedal, J., Wright, S.J.: Numerical Optimization, 2nd edn. Springer, New York (2006)

  64. Parekh, A., Selesnick, I.W.: Convex denoising using non-convex tight frame regularization. IEEE Signal Process. Lett. 22(10), 1786–1790 (2015)

  65. Parekh, A., Selesnick, I.W.: Enhanced low-rank matrix approximation. IEEE Signal Process. Lett. 23(4), 493–497 (2016)

  66. Robini, M.C., Magnin, I.E.: Optimization by stochastic continuation. SIAM J. Imaging Sci. 3(4), 1096–1121 (2010)

  67. Robini, M.C., Zhu, Y.: Generic half-quadratic optimization for image reconstruction. SIAM J. Imaging Sci. 8(3), 1752–1797 (2015)

  68. Robini, M.C., Zhu, Y., Luo, J.: Edge-preserving reconstruction with contour-line smoothing and non-quadratic data-fidelity. Inverse Probl. Imaging 7(4), 1331–1366 (2013)

  69. Robini, M.C., Zhu, Y., Lv, X., Liu, W.: Inexact half-quadratic optimization for image reconstruction. In: 2016 IEEE International Conference on Image Processing (ICIP), pp. 3513–3517 (2016)

  70. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)

  71. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (1998)

  72. Selesnick, I.W., Parekh, A., Bayram, I.: Convex 1-D total variation denoising with non-convex regularization. IEEE Signal Process. Lett. 22(2), 141–144 (2015)

  73. Snyder, W., Han, Y.-S., Bilbro, G., Whitaker, R., Pizer, S.: Image relaxation: restoration and feature extraction. IEEE Trans. Pattern Anal. Mach. Intell. 17(6), 620–624 (1995)

  74. Stoer, J., Bulirsch, R.: Introduction to Numerical Analysis, 3rd edn. Springer, Berlin (2002)

  75. Tikhonov, A.N., Arsenin, V.Y.: Solutions of Ill-Posed Problems. V.H. Winston & Sons, Washington (1977)

  76. Tuia, D., Flamary, R., Barlaud, M.: Non-convex regularization in remote sensing. IEEE Trans. Geosci. Remote Sens. 54(11), 6470–6480 (2016)

  77. Vese, L., Chan, T.F.: Reduced Non-convex Functional Approximations for Image Restoration & Segmentation. Department of Mathematics, University of California, Los Angeles (1997)

  78. Wells, P.N.T.: Medical ultrasound: imaging of soft tissue strain and elasticity. J. R. Soc. Interface 8(64), 1521–1549 (2011)

  79. Xiao, C., He, Y., Yu, J.: A high-efficiency edge-preserving Bayesian method for image interpolation. In: Wang, Q., Pfahl, D., Raffo, D. (eds.) Making Globally Distributed Software Development a Success Story, International Conference on Software Process, Proceedings, Leipzig, Germany, May 10–11, 2008, pp. 1042–1046 (2008)


Author information


Corresponding author

Correspondence to Ivan Gerace.

Additional information

This work was supported by the Dipartimento di Matematica e Informatica, Università degli Studi di Perugia. Antonio Boccuto was also supported by the Italian National Group of Mathematical Analysis, Probability and Applications (G.N.A.M.P.A.).

Appendices

Appendix 1

In this appendix, we show how the assumptions in Theorems 4, 5 and 6 generalize the conditions of the other duality theorems presented in the literature.

In the following examples, we take \(\lambda ^2=1\).

(a) Observe that, with the same hypotheses and notations as in Theorem 6, if f is concave and non-decreasing on \({\mathbb {R}}^+_0\), \(f(0)=0\) and f is differentiable on \((0,+\infty )\), then, by de l’Hôpital’s rule, the limit \(\displaystyle {\ell _0=\lim _{t\rightarrow + \infty }\frac{f(t)}{t}}\) is equal to the limit \(\displaystyle {\ell =\lim _{t\rightarrow + \infty }f^{\prime }(t)}\). Moreover, since \(g(t)=f(t^2)\) for each \(t\in {\mathbb {R}}^+_0\), we get \(\displaystyle {f^{\prime }(t^2)=\frac{g^{\prime }(t)}{2 t}}\) for each \(t\in (0, +\infty )\), and thus \(\displaystyle {\lim _{t\rightarrow +\infty }\frac{g^{\prime }(t)}{2 t}}= \ell =\ell _0\). Many conditions in the duality theorems involve the limit \(\displaystyle {\lim _{t\rightarrow +\infty }\frac{g^{\prime }(t)}{2 t}}\) (see also [23, Theorem 1], [43, 58, 59]). Therefore, when f is non-decreasing and concave on \({\mathbb {R}}^+_0\), \(f(0)=0\) and f is differentiable on \((0,+\infty )\), these conditions are equivalent to the corresponding ones involving the limit \(\ell _0\).
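This chain of equalities is easy to verify in a computer algebra system; the following sketch (ours, with the sample choice \(f(t)=\log (1+t)\), which is concave, non-decreasing, null at 0 and differentiable on \((0,+\infty )\)) checks that the three limits coincide.

```python
# Symbolic check (ours) of the identity in (a), for the sample choice
# f(t) = log(1 + t).  We verify that
#   l0 = lim f(t)/t  =  lim f'(t)  =  lim g'(t)/(2t),  where g(t) = f(t^2).
import sympy as sp

t = sp.symbols('t', positive=True)
f = sp.log(1 + t)          # our test function, not prescribed by the paper
g = f.subs(t, t**2)        # g(t) = f(t^2)

l0 = sp.limit(f / t, t, sp.oo)                    # lim_{t->oo} f(t)/t
l  = sp.limit(sp.diff(f, t), t, sp.oo)            # lim_{t->oo} f'(t)
lg = sp.limit(sp.diff(g, t) / (2 * t), t, sp.oo)  # lim_{t->oo} g'(t)/(2t)

assert l0 == l == lg == 0
```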

(b) Theorem 4 is a strict generalization of Theorem 3. Indeed, observe that every function \(g \in {\mathrm{Lip}}_{\mathrm{loc}}({\mathbb {R}})\) is also continuous on \({\mathbb {R}}\), and a fortiori u.s.c. Moreover, given a fixed real number \(\displaystyle { a\in \Bigl ( 0,\frac{1}{2}\Bigr )}\), the functions \(g:{\mathbb {R}} \rightarrow {\mathbb {R}}\), \(f:{\mathbb {R}} \rightarrow {\mathbb {R}} \cup \{ - \infty \}\), defined by \(g(t)=t^{2a}\), \(t\in {\mathbb {R}}\),

$$\begin{aligned} f(t)=\left\{ \begin{array}{l@{\quad }l} t^a, &{} \mathrm{if }\; t \ge 0,\\ -\infty , &{} \mathrm{if }\; t < 0, \end{array}\right. \end{aligned}$$

satisfy the hypotheses (4.1), (4.2) of Theorem 4, (5.1) of Theorem 5 and (6.1) of Theorem 6, but since \(2a<1\), g is not Lipschitz on [0, 1], and hence \(g \not \in {\mathrm{Lip}}_{\mathrm{loc}}({\mathbb {R}})\). Furthermore, with the same notations as in Theorem 7, we get \(B= (0,+\infty )\),

$$\begin{aligned} \eta (b)&= \Bigl (\frac{b}{a}\Bigr )^{1/(a-1)},\\ \beta (b)&= \left\{ \begin{array}{l@{\quad }l} \displaystyle {b^{a/(a-1)} (a^{a/(1-a)}-a^{1/(1-a)})=(1-a) \Bigl (\frac{b}{a}\Bigr )^{a/(a-1)}}, &{} \mathrm{if }\; b > 0,\\ +\infty , &{} \mathrm{if }\; b \le 0 \end{array}\right. \end{aligned}$$

(see also [43, Table II (d)]).
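The closed forms of \(\eta \) and \(\beta \) above can be checked numerically; the following minimal sketch (ours; the value \(a=1/4\) and the optimization bounds are arbitrary) maximizes \(f(t)-bt\) directly.

```python
# Numerical sanity check (ours) of eta and beta in example (b), with a = 1/4:
#   eta(b)  = (b/a)^{1/(a-1)} maximizes t^a - b*t over t >= 0, and
#   beta(b) = sup_{t>=0} (t^a - b*t) = (1-a) * (b/a)^{a/(a-1)} for b > 0.
import numpy as np
from scipy.optimize import minimize_scalar

a = 0.25
for b in [0.1, 0.5, 1.0, 2.0]:
    res = minimize_scalar(lambda t: -(t**a - b * t), bounds=(0.0, 1e3), method='bounded')
    eta_closed = (b / a)**(1.0 / (a - 1.0))
    beta_closed = (1.0 - a) * (b / a)**(a / (a - 1.0))
    print(b, abs(res.x - eta_closed), abs(-res.fun - beta_closed))  # both ~ 0
```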

Furthermore, another reason for which Theorem 4 strictly extends Theorem 3 is that a concave function f need not be differentiable in the complement of a finite set, as the example above shows.

(c) We now give an example in which the conditions (6.1) and (6.2) of Theorem 6 do not hold (see also [6, Table 3.2, Example 2]). Put

$$\begin{aligned} g(t)=\left\{ \begin{array}{l@{\quad }l} \log (t^2+1), &{} \mathrm{if }\; t \in [-1,1],\\ \displaystyle { \frac{t^2}{2}+\log 2 -\frac{1}{2}}, &{} \mathrm{if }\; t \in (-\infty , -1) \cup (1,+\infty ). \end{array}\right. \end{aligned}$$

It is not difficult to check that g is even, \(g(0)=0\),

$$\begin{aligned} g^{\prime }(t)= \left\{ \begin{array}{l@{\quad }l} \displaystyle {\frac{2t}{t^2+1}}, &{} \mathrm{if }\; t \in [-1,1],\\ t, &{} \mathrm{if }\; t \in (-\infty ,-1) \cup (1,+\infty ), \end{array}\right. \end{aligned}$$

and hence \(g\in C^1({\mathbb {R}})\), and a fortiori \(g\in C^1({\mathbb {R}}^+_0)\), g is strictly increasing in \([0, +\infty )\), \(g \in {\mathrm{Lip}}_{\mathrm{loc}}({\mathbb {R}})\). Set

$$\begin{aligned} f(t)=\left\{ \begin{array}{l@{\quad }l} -\infty , &{} \mathrm{if }\; t \in (-\infty ,0), \\ \log (t+1), &{} \mathrm{if }\; t \in [0,1],\\ \displaystyle { \frac{t}{2}+\log 2 -\frac{1}{2}}, &{} \mathrm{if }\; t \in (1,+\infty ). \end{array}\right. \end{aligned}$$

It is not difficult to see that \(f(t)=g(\sqrt{t})\) for every \(t\ge 0\), \(f(0)=0\), f is concave and strictly increasing on \({\mathbb {R}}_0^+\),

$$\begin{aligned} f^{\prime }(t)=\left\{ \begin{array}{l@{\quad }l} \displaystyle {\frac{1}{t+1}}, &{} \mathrm{if }\; t \in [0,1],\\ \\ \displaystyle {\frac{1}{2}}, &{} \mathrm{if }\; t \in (1,+\infty ), \end{array}\right. \end{aligned}$$

\(f\in C^1({\mathbb {R}}^+_0)\), \(B=[1/2,1]\). Moreover, \(\displaystyle {\eta (b)=\frac{1}{b}-1}\) for every \(b\in [1/2,1]\),

$$\begin{aligned} \beta (b)=\left\{ \begin{array}{l@{\quad }l} +\infty , &{} \mathrm{if }\; b < 1/2, \\ -\log b - 1 + b , &{} \mathrm{if }\; b \in [1/2,1],\\ 0, &{} \mathrm{if }\; b>1, \end{array}\right. \end{aligned}$$

and so the condition (6.2) of Theorem 6 is not satisfied. Furthermore, it is not difficult to check that \(\beta \) is decreasing and convex on B.
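A direct numerical maximization of \(f(t)-bt\) reproduces this closed form; below is a minimal sketch (ours; the grid bounds are arbitrary).

```python
# Numerical check (ours) of beta in example (c):
# with f(t) = log(t+1) on [0,1] and f(t) = t/2 + log 2 - 1/2 on (1, oo),
# we expect beta(b) = -log b - 1 + b on [1/2, 1] and beta(b) = 0 for b > 1.
import numpy as np

t = np.linspace(0.0, 200.0, 2_000_001)
ft = np.where(t <= 1.0, np.log(t + 1.0), 0.5 * t + np.log(2.0) - 0.5)

for b in [0.5, 0.75, 0.9, 1.0, 1.5]:
    beta_grid = np.max(ft - b * t)
    beta_closed = -np.log(b) - 1.0 + b if b <= 1.0 else 0.0
    print(b, beta_grid, beta_closed)  # the last two columns agree to grid accuracy
```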

(d) We now show that, under the same hypotheses and notations as in Theorem 7, in general the set \((f^{\prime })^{-1}(\{b\})\) can have more than one element for some \(b\in B\). For instance, set

$$\begin{aligned} g(t)=\left\{ \begin{array}{l@{\quad }l} 2 \sqrt{2}(\sqrt{t^2+1}-1), &{} \mathrm{if }\; t \in (-1,1),\\ t^2+3-2\sqrt{2}, &{} \mathrm{if }\; t \in [-\sqrt{2},-1] \cup [1,\sqrt{2}],\\ 2\sqrt{2}\,|t|+1-2\sqrt{2}, &{} \mathrm{if }\; t \in (-\infty ,-\sqrt{2}) \cup (\sqrt{2},+\infty ). \end{array}\right. \end{aligned}$$

It is not difficult to see that g is even, \(g\not \equiv 0\), \(g(0)=0\),

$$\begin{aligned} g^{\prime }(t)=\left\{ \begin{array}{l@{\quad }l} \displaystyle { -2 \sqrt{2}}, &{} \mathrm{if }\; t \in (-\infty , -\sqrt{2}), \\ \displaystyle {\frac{2\sqrt{2}\,t}{\sqrt{t^2+1}}}, &{} \mathrm{if }\; t \in (-1,1),\\ 2t, &{} \mathrm{if }\; t \in [-\sqrt{2},-1] \cup [1,\sqrt{2}],\\ \displaystyle { 2 \sqrt{2}}, &{} \mathrm{if }\; t \in (\sqrt{2},+\infty ), \end{array}\right. \end{aligned}$$

and so \(g\in C^1({\mathbb {R}})\), and a fortiori \(g\in C^1({\mathbb {R}}^+_0)\). Put

$$\begin{aligned} f(t)=\left\{ \begin{array}{l@{\quad }l} -\infty , &{} \mathrm{if }\; t \in (-\infty ,0), \\ 2 \sqrt{2}(\sqrt{t+1}-1), &{} \mathrm{if }\; t \in [0,1),\\ t+3-2\sqrt{2}, &{} \mathrm{if }\; t \in [1,2],\\ 2\sqrt{2t}+1-2\sqrt{2}, &{} \mathrm{if }\; t \in (2,+\infty ). \end{array}\right. \end{aligned}$$

It is not difficult to check that \(f(t)=g(\sqrt{t})\) for every \(t \ge 0\), \(f(0)=0\), f is concave and strictly increasing on \({\mathbb {R}}_0^+\),

$$\begin{aligned} f^{\prime }(t)=\left\{ \begin{array}{l@{\quad }l} \displaystyle {\frac{\sqrt{2}}{\sqrt{t+1}}}, &{} \mathrm{if }\; t \in [0,1),\\ 1, &{} \mathrm{if }\; t \in [1,2],\\ \displaystyle { \frac{\sqrt{2}}{\sqrt{t}}}, &{} \mathrm{if }\; t \in (2,+\infty ), \end{array}\right. \end{aligned}$$

\(f\in C^1({\mathbb {R}}^+_0)\), \(\overline{a}=\sqrt{2}\), \(B=(0,\sqrt{2}]\). Moreover, note that \(f^{\prime }(t)>1\) if \(t\in [0,1)\) and \(f^{\prime }(t)<1\) if \(t\in (2,+\infty )\). Hence, the function \(h_1(t)=f(t)-t\) is increasing on [0, 1], decreasing on \([2, +\infty )\), and attains its maximum value on [1, 2], where \(h_1(t)=3-2\sqrt{2}\). Furthermore, we get

$$\begin{aligned} \eta (b)=\left\{ \begin{array}{l@{\quad }l} \displaystyle {\frac{2}{b^2} }, &{} \mathrm{if }\; b\in (0,1), \\ \\ \displaystyle {\frac{2}{b^2} - 1 }, &{} \mathrm{if }\; b \in (1, \sqrt{2}], \end{array}\right. \end{aligned}$$

\((f^{\prime })^{-1}(\{1\})=[1,2]\),

$$\begin{aligned} \beta (b)=\left\{ \begin{array}{l@{\quad }l} + \infty , &{} \mathrm{if }\; b \le 0, \\ \\ \displaystyle {\frac{2}{b}+1-2\sqrt{2} }, &{} \mathrm{if }\; b\in (0,1), \\ \\ \displaystyle {\sup _{t\in {\mathbb {R}}} (f(t)-t)=3 - 2 \sqrt{2}}, &{} \mathrm{if }\; b=1,\\ \\ \displaystyle {\frac{2}{b} + b - 2 \sqrt{2} }, &{} \mathrm{if }\; b \in (1, \sqrt{2}), \\ \\ 0, &{} \mathrm{if }\; b \ge \sqrt{2}. \end{array}\right. \end{aligned}$$

It is not difficult to check that \(\beta \) is decreasing and convex on B. Finally, we get that \(\beta (1)\) is well-defined, since \(f(t)-t=3 - 2\sqrt{2}\) for every \(t\in (f^{\prime })^{-1}(\{1\})\).
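Both claims of this example, namely that \(f(t)-t\) is constant on \((f^{\prime })^{-1}(\{1\})=[1,2]\) and that \(\beta \) has the closed form above, can be checked numerically; a minimal sketch (ours) follows.

```python
# Numerical check (ours) of example (d): f(t) - t is constant (= 3 - 2*sqrt(2))
# on [1,2] = (f')^{-1}({1}), and beta matches its closed form.
import numpy as np

s2 = np.sqrt(2.0)

def f(t):
    return np.where(t < 1.0, 2.0 * s2 * (np.sqrt(t + 1.0) - 1.0),
           np.where(t <= 2.0, t + 3.0 - 2.0 * s2,
                    2.0 * np.sqrt(2.0 * t) + 1.0 - 2.0 * s2))

def beta_closed(b):
    if b <= 0.0:   return np.inf
    if b < 1.0:    return 2.0 / b + 1.0 - 2.0 * s2
    if b == 1.0:   return 3.0 - 2.0 * s2
    if b < s2:     return 2.0 / b + b - 2.0 * s2
    return 0.0

u = np.linspace(1.0, 2.0, 101)
assert np.allclose(f(u) - u, 3.0 - 2.0 * s2)      # flat piece of f(t) - t

t = np.linspace(0.0, 200.0, 2_000_001)
ft = f(t)
for b in [0.4, 0.8, 1.0, 1.2, 2.0]:
    print(b, np.max(ft - b * t), beta_closed(b))  # columns agree to grid accuracy
```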

(e) In this example, the hypotheses of Theorem 4 are satisfied, but g does not fulfil conditions (5.1) and (6.1), and g is not continuous at 0. Here, the corresponding function \(\beta \) does not fulfil (5.2), but satisfies (6.2); indeed, the implication (6.2) \(\Longrightarrow \) (6.1) does not hold when, for every \(\delta >0\), the function f assumes some negative real values on \([\delta , + \infty )\). Put

$$\begin{aligned} g(t)= & {} \left\{ \begin{array}{l@{\quad }l} 0, &{} \mathrm{if }\; t =0,\\ -t^4 -1, &{} \mathrm{if }\; t \in {\mathbb {R}} \setminus \{ 0 \}, \end{array}\right. \\ \\ f(t)= & {} \left\{ \begin{array}{l@{\quad }l} -\infty , &{} \mathrm{if }\; t \in (-\infty ,0), \\ 0, &{} \mathrm{if }\; t =0,\\ -t^2 -1, &{} \mathrm{if }\; t \in (0,+\infty ). \end{array}\right. \end{aligned}$$

It is not difficult to check that \(f(t)=g(\sqrt{t})\) for any \(t \ge 0\). Note that g is strictly decreasing on \({\mathbb {R}}^+_0\) and \(\displaystyle {\lim _{t\rightarrow +\infty } \frac{f(t)}{t}=-\infty }\). For \(b\in {\mathbb {R}}\), we get

$$\begin{aligned}&\beta (b)= \Bigl (\sup _{t>0}(-bt-t^2-1)\Bigr ) \vee 0\\&\quad = \left\{ \begin{array}{l@{\quad }l} \displaystyle {\frac{b^2}{4}-1}, &{} \mathrm{if }\; b \in (-\infty , -2), \\ \\ 0, &{} \mathrm{if }\; b \in [-2, +\infty ), \end{array}\right. \end{aligned}$$

and thus \(\beta (b)<+\infty \) for every \(b\in {\mathbb {R}}\).
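A quick numerical check (ours) of this computation is sketched below; the term \(\vee \, 0\) comes from the value \(f(0)=0\).

```python
# Numerical check (ours) of example (e):
# beta(b) = max( sup_{t>0} (-b*t - t^2 - 1), 0 ) = b^2/4 - 1 if b < -2, else 0.
import numpy as np

t = np.linspace(1e-9, 100.0, 1_000_001)
for b in [-4.0, -3.0, -2.0, 0.0, 2.0]:
    beta_grid = max(np.max(-b * t - t**2 - 1.0), 0.0)
    beta_closed = b * b / 4.0 - 1.0 if b < -2.0 else 0.0
    print(b, beta_grid, beta_closed)  # the two columns agree
```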

(f) Note that it may happen that g and f take the value \(-\infty \) at some positive real numbers and assume real values at other points of the positive half-line, while \(\beta \) is not constant on \({\mathbb {R}}\). For example, set

$$\begin{aligned} g(t)= & {} \left\{ \begin{array}{l@{\quad }l} t^2, &{} \mathrm{if }\; t \in [0,1],\\ -\infty , &{} \mathrm{otherwise}, \end{array}\right. \\ \\ f(t)= & {} \left\{ \begin{array}{l@{\quad }l} t, &{} \mathrm{if }\; t \in [0,1],\\ -\infty , &{} \mathrm{otherwise}. \end{array}\right. \end{aligned}$$

It is not difficult to see that \(f(t)=g(\sqrt{t})\) for each \(t \ge 0\). For \(b\in {\mathbb {R}}\), we get

$$\begin{aligned} \beta (b)= & {} \sup _{t\in [0,1]}(-bt+t) = \left\{ \begin{array}{l@{\quad }l} 1-b, &{} \mathrm{if }\; b \in (-\infty , 1), \\ \\ 0, &{} \mathrm{if }\; b \in [1, +\infty ). \end{array}\right. \end{aligned}$$

(g) Observe that, if f is non-negative in a suitable interval \((0,t_0]\) and \(\displaystyle { \limsup _{t\rightarrow + \infty }\frac{f(t)}{t}>0}\), then, thanks to the last part of Theorem 6, there are some positive real numbers b with \(\beta (b)=+\infty \). This amounts to imposing a positive weight on the finite differences which appear in the expression of the primal energy. Such a requirement is too restrictive, since in the cliques having strong discontinuities it is not advisable to impose regularity constraints. Hence, condition (6.1) is not restrictive for our purposes.

(h) Observe that it may happen that f, g and \(\beta \) satisfy the properties in (a) and (b) of Theorem 4, and that \(g(0) < 0\) and \(\beta (\overline{b})<0\) for some \(\overline{b} \in {\mathbb {R}}\). Indeed, it is enough to take \(g(t)=-1\) for every \(t \in {\mathbb {R}}\), and

$$\begin{aligned} \beta (b)= \left\{ \begin{array}{l@{\quad }l} + \infty , &{} \mathrm{if}\; b < 0, \\ -1, &{} \mathrm{if}\; b \ge 0. \end{array} \right. \end{aligned}$$

Note that, in this case, the conditions of Theorems 5 and 6 are satisfied.

(i) If \(g:{\mathbb {R}} \rightarrow \widetilde{{\mathbb {R}}}\) is convex and even on \({\mathbb {R}}\), \(g(0)\in {\mathbb {R}}\) and g satisfies the condition (4.2), then g is real-valued on \({\mathbb {R}}\) and \(g\in C^1({\mathbb {R}} \setminus \{0\})\) (see also [43, Lemma 3]).

(j) If \(g\in C^1({\mathbb {R}}^+_0)\), then the function \(\beta \) in Theorem 7 coincides with the corresponding function investigated by D. Geman and G. Reynolds in [34]. The latter is defined on a closed interval \(B^*\), while, in our setting, \(\beta \) is defined on the whole real line.

(k) The convexity condition on \(\beta \) is not restrictive. Indeed, since \(\beta \) is l.s.c., by [70, Corollary 12.1.1] we get \(\beta ^*= (\mathrm{conv}\, \beta )^*\). This implies that the functions f, g obtained starting from \(\beta \) coincide with the corresponding ones constructed starting from the convex hull of \(\beta \).
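A small numerical illustration of this remark (ours; the non-convex test function and the grids are arbitrary choices) uses the discrete Legendre–Fenchel transform: since \(\beta ^{**}\) is the closed convex hull of \(\beta \), comparing \(\beta ^*\) with \((\beta ^{**})^*\) shows that conjugation does not distinguish \(\beta \) from its convex hull.

```python
# Discrete Legendre-Fenchel illustration (ours) of remark (k): beta* = (conv beta)*.
import numpy as np

def conjugate(x, fx, s):
    # discrete Fenchel conjugate: f*(s) = max_x ( s*x - f(x) )
    return np.max(s[:, None] * x[None, :] - fx[None, :], axis=1)

b = np.linspace(-3.0, 3.0, 1201)
beta = np.minimum((b - 1.0)**2, (b + 1.0)**2 + 0.5)  # arbitrary non-convex test function

s = np.linspace(-4.0, 4.0, 1601)
beta_star = conjugate(b, beta, s)            # beta*
beta_bidual = conjugate(s, beta_star, b)     # beta** ~ conv(beta) on the grid
beta_star2 = conjugate(b, beta_bidual, s)    # (conv beta)*

print(np.max(np.abs(beta_star - beta_star2)))  # 0.0 up to rounding
```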

Appendix 2

We now sketch the proof of the convexity of the function \(\varphi \) defined in (28) on

$$\begin{aligned} \displaystyle {{\mathbb {R}} \times \left[ -\sqrt{\frac{1-\delta }{\varepsilon (3-\delta )}}, \sqrt{\frac{1-\delta }{\varepsilon (3-\delta )}}\,\right] }, \end{aligned}$$

where \(\varepsilon \in (0, + \infty )\) and \(\delta \in (0,1)\). Note that \(\varphi \in C^2((-\infty ,0) \times {\mathbb {R}})\) and \(\varphi \in C^2((0,+\infty ) \times {\mathbb {R}})\), since

$$\begin{aligned}&\dfrac{\partial ^2 \varphi }{\partial t_1^2} (t_1,t_2)=\lambda ^2 (\varepsilon t_2^2 +1)(2-\delta )(1-\delta )|t_1|^{-\delta }, \quad t_1 \ne 0,\\&\dfrac{\partial ^2 \varphi }{\partial t_2^2}(t_1,t_2)=2 \lambda ^2 \varepsilon |t_1|^{2-\delta },\\&\dfrac{\partial ^2 \varphi }{\partial t_1 \partial t_2}(t_1,t_2)=\dfrac{\partial ^2 \varphi }{\partial t_2 \partial t_1} (t_1,t_2)=2\lambda ^2 \varepsilon t_2(2-\delta )|t_1|^{1-\delta }\,\text {sgn}(t_1), \end{aligned}$$

where the function sgn is defined as in (29). For \(t_1\ne 0\) let

$$\begin{aligned} H(t_1, t_2)= \left( \begin{array}{cc} \lambda ^2 (\varepsilon t_2^2+1)(2-\delta )(1-\delta )|t_1|^{-\delta } &{} 2 \lambda ^2 \text {sgn}(t_1)\varepsilon (2-\delta )t_2|t_1|^{1-\delta } \\ 2\lambda ^2\text {sgn}(t_1)\varepsilon (2-\delta )t_2|t_1|^{1-\delta } &{} 2\lambda ^2\varepsilon |t_1|^{2-\delta } \end{array} \right) \end{aligned}$$

be the Hessian matrix associated with \(\varphi \).

Note that, for every \( (t_1, t_2) \in ({\mathbb {R}} \setminus \{ 0 \}) \times \left[ -\sqrt{\frac{1-\delta }{\varepsilon (3-\delta )}}, \sqrt{\frac{1-\delta }{\varepsilon (3-\delta )}}\,\right] \), H is positive-semidefinite: indeed, its diagonal entries are non-negative and \(\det H(t_1,t_2)= 2\lambda ^4 \varepsilon (2-\delta )|t_1|^{2-2\delta }\bigl ((1-\delta )-\varepsilon (3-\delta ) t_2^2\bigr )\), which is non-negative precisely when \(t_2^2 \le \frac{1-\delta }{\varepsilon (3-\delta )}\). Furthermore, for every \(\overline{t_2}\in {\mathbb {R}}\), the equation of the tangent hyperplane at the point \((0, \overline{t_2})\) is

$$\begin{aligned} z=\varphi (0, \overline{t_2})+\dfrac{\partial \varphi }{\partial t_1}(0,\overline{t_2})\, t_1+\dfrac{\partial \varphi }{\partial t_2}(0,\overline{t_2})\, (t_2-\overline{t_2})=0, \end{aligned}$$

and \(\varphi \ge 0\) on \({\mathbb {R}}^2\). Thus, we deduce that \(\varphi \) is convex on

$$\begin{aligned} \displaystyle {{\mathbb {R}} \times \left[ -\sqrt{\frac{1-\delta }{\varepsilon (3-\delta )}}, \sqrt{\frac{1-\delta }{\varepsilon (3-\delta )}}\,\right] } \end{aligned}$$

(see also [70, Theorem 25.1], [71, Theorem 2.14, (b) and (c)]).
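The positive-semidefiniteness threshold can also be checked numerically; the following sketch (ours; \(\lambda ^2\), \(\varepsilon \) and \(\delta \) are arbitrary sample values) evaluates the eigenvalues of H on and off the strip, away from \(t_1=0\).

```python
# Numerical check (ours) that the Hessian of
#   phi(t1, t2) = lam2 * (eps*t2^2 + 1) * |t1|^(2 - delta)
# is positive-semidefinite exactly for t2^2 <= (1-delta)/(eps*(3-delta)).
# lam2, eps, delta are arbitrary sample values; t1 = 0 is excluded.
import numpy as np

lam2, eps, delta = 1.0, 0.5, 0.3
t2_max = np.sqrt((1.0 - delta) / (eps * (3.0 - delta)))

def hessian(t1, t2):
    a = abs(t1)
    k = lam2 * (eps * t2**2 + 1.0) * (2.0 - delta) * (1.0 - delta) * a**(-delta)
    u = 2.0 * lam2 * eps * t2 * (2.0 - delta) * a**(1.0 - delta) * np.sign(t1)
    w = 2.0 * lam2 * eps * a**(2.0 - delta)
    return np.array([[k, u], [u, w]])

for t1 in [-2.0, 0.7, 3.0]:
    for c in [0.0, 0.5, 0.999, 1.2]:             # t2 = c * t2_max
        H = hessian(t1, c * t2_max)
        psd = bool(np.all(np.linalg.eigvalsh(H) >= -1e-10))
        print(f"t1={t1:+.1f}  t2/t2_max={c:.3f}  PSD={psd}")  # PSD iff c <= 1
```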

Now we sketch the proof of the convexity of the function

$$\begin{aligned} \psi (x_1,x_2,x_3,x_4,x_5)= \varphi (x_1-2x_2+x_3,x_4-2x_5+x_1) \end{aligned}$$

on \({\mathbb {R}}^5\). We calculate the Hessian matrix \(H(\psi )\) of \(\psi \) when \(x_1-2x_2+x_3 \ne 0\). Setting \(\kappa =\dfrac{\partial ^2 \varphi }{\partial t_1^2}\), \(\upsilon =\dfrac{\partial ^2 \varphi }{\partial t_1 \partial t_2}\) and \(\omega =\dfrac{\partial ^2 \varphi }{\partial t_2^2}\), all evaluated at \((t_1,t_2)=(x_1-2x_2+x_3,\, x_4-2x_5+x_1)\), we get

$$\begin{aligned} H(\psi )=\begin{pmatrix} \phantom {a} \kappa +2\upsilon +\omega \phantom {a} &{} \phantom {a} -2(\kappa +\upsilon ) \phantom {a} &{} \phantom {a} \kappa + \upsilon \phantom {a} &{} \phantom {a} \upsilon +\omega \phantom {a} &{} \phantom {a} -2(\upsilon +\omega ) \phantom {a} \\ -2(\kappa +\upsilon ) &{} 4\kappa &{} -2\kappa &{} -2\upsilon &{} 4\upsilon \\ \kappa +\upsilon &{} -2\kappa &{} \kappa &{} \upsilon &{} -2\upsilon \\ \upsilon +\omega &{} -2\upsilon &{} \upsilon &{} \omega &{} -2\omega \\ -2(\upsilon +\omega ) &{} 4\upsilon &{} -2\upsilon &{} -2\omega &{} 4\omega \end{pmatrix}. \end{aligned}$$

We now show that \(H(\psi )\) is positive-semidefinite at the points \((x_1, \ldots , x_5)\) such that \(x_1 - 2 x_2 +x_3 \ne 0\). To this aim, we use the following result.

Proposition 2

(see also [41, Corollary 7.1.5]) An \(n \times n\) matrix A is positive-semidefinite if and only if every principal minor of A is non-negative.

First of all, we claim that \(H(\psi )\) has rank at most two. Indeed, we get

$$\begin{aligned}&H(\psi )= \begin{pmatrix} 1 \\ -2 \\ 1 \\ 0 \\ 0 \end{pmatrix} \cdot \begin{pmatrix} \kappa +\upsilon&-2\kappa&\kappa&\upsilon&-2\upsilon \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \\ -2 \end{pmatrix}\\&\quad \cdot \begin{pmatrix} \upsilon +\omega&-2\upsilon&\upsilon&\omega&-2\omega \end{pmatrix}. \end{aligned}$$

Since \(H(\psi )\) is the sum of two matrices of rank one, the claim follows. Thus, every principal minor of \(H(\psi )\) of order greater than two vanishes, and it is enough for our purposes to prove that every principal minor of \(H(\psi )\) of order at most two is non-negative. The principal minors of order one are the diagonal entries \(\kappa \), \(\omega \), \(4\kappa \), \(4\omega \) and \(\kappa +2\upsilon +\omega \); the first four are non-negative because \(H(\varphi )\) is positive-semidefinite, and \(\kappa +2\upsilon +\omega \ge \kappa +\omega -2\sqrt{\kappa \, \omega } \ge 0\), since \(\upsilon ^2 \le \kappa \, \omega \). In the computations below, we repeatedly use the fact that \(H(\varphi )\) is positive-semidefinite, that is \(\kappa \, \omega - \upsilon ^2 \ge 0\).

The determinants of the principal minors of order two are

$$\begin{aligned}&\begin{vmatrix} \kappa +2\upsilon +\omega&-2(\kappa +\upsilon ) \\ -2(\kappa +\upsilon )&4\kappa \end{vmatrix} = \begin{vmatrix} \kappa +2\upsilon +\omega&-2(\upsilon +\omega ) \\ -2(\upsilon +\omega )&4\omega \end{vmatrix}= \\&\quad =\begin{vmatrix} 4\kappa&-2\upsilon \\ -2\upsilon&\omega \end{vmatrix} =\begin{vmatrix} \kappa&-2\upsilon \\ -2\upsilon&4\omega \end{vmatrix} =4(\kappa \, \omega - \upsilon ^2 )\ge 0 ;\\&\begin{vmatrix} \kappa +2\upsilon +\omega&\kappa +\upsilon \\ \kappa +\upsilon&\kappa \end{vmatrix}= \begin{vmatrix} \kappa +2\upsilon +\omega&\upsilon +\omega \\ \upsilon +\omega&\omega \end{vmatrix}=\begin{vmatrix} \kappa&\upsilon \\ \upsilon&\omega \end{vmatrix} \\&\quad =\kappa \, \omega - \upsilon ^2 \ge 0 ;\\&\quad \begin{vmatrix} 4\kappa&-2\kappa \\ -2\kappa&\kappa \end{vmatrix} = \begin{vmatrix} \omega&-2\omega \\ -2\omega&4\omega \end{vmatrix} =0; \quad \begin{vmatrix} 4\kappa&4\upsilon \\ 4\upsilon&4\omega \end{vmatrix} =16(\kappa \, \omega - \upsilon ^2 )\ge 0 . \end{aligned}$$
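These identities, including the vanishing minors, can be verified mechanically; the sketch below (ours) enumerates all ten order-two principal minors of \(H(\psi )\) symbolically and checks that each one equals \(c\,(\kappa \, \omega - \upsilon ^2)\) for some \(c\in \{0,1,4,16\}\).

```python
# Symbolic verification (ours) of the order-two principal minors of H(psi).
import sympy as sp

k, u, w = sp.symbols('kappa upsilon omega')
H = sp.Matrix([
    [k + 2*u + w, -2*(k + u),  k + u,  u + w, -2*(u + w)],
    [-2*(k + u),   4*k,       -2*k,   -2*u,    4*u      ],
    [ k + u,      -2*k,        k,      u,     -2*u      ],
    [ u + w,      -2*u,        u,      w,     -2*w      ],
    [-2*(u + w),   4*u,       -2*u,   -2*w,    4*w      ],
])
d = k*w - u**2
for i in range(5):
    for j in range(i + 1, 5):
        minor = H.extract([i, j], [i, j]).det()
        c = [c for c in (0, 1, 4, 16) if sp.expand(minor - c*d) == 0]
        print((i + 1, j + 1), c)   # every minor is c*(kappa*omega - upsilon^2)
```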

Now, set \(\varPi =\{(x_1,x_2,x_3,x_4,x_5):x_1=2x_2-x_3\}\) and let \(P\in \varPi \). Then \(\psi (P)=\varphi (0, x_4-2x_5+2x_2-x_3)=0\). It is possible to see that the equation of the hyperplane tangent to the graph of \(\psi \) at P is \(x_6=0\), and \(\psi \ge 0\) on \({\mathbb {R}}^5\). Thus, proceeding as above, it is possible to show that \(\psi \) is convex on \({\mathbb {R}}^5\) (see also [70, Theorem 25.1], [71, Theorem 2.14, (b) and (c)]).
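As a final sanity check (ours; random trials are, of course, no substitute for the proof above), one can verify numerically that \(H(\psi )\), built from \(v=(1,-2,1,0,0)\) and \(w=(1,0,0,1,-2)\) as above, has rank at most two and is positive-semidefinite whenever the \(2\times 2\) block \(H(\varphi )\) is.

```python
# Numerical check (ours) of the rank-two structure of H(psi):
# with v = grad(x1 - 2*x2 + x3) and w = grad(x4 - 2*x5 + x1),
# H(psi) = kappa*v v' + upsilon*(v w' + w v') + omega*w w' is PSD
# whenever the 2x2 matrix [[kappa, upsilon], [upsilon, omega]] is PSD.
import numpy as np

rng = np.random.default_rng(0)
v = np.array([1.0, -2.0, 1.0, 0.0, 0.0])
w = np.array([1.0, 0.0, 0.0, 1.0, -2.0])

for _ in range(1000):
    A = rng.normal(size=(2, 2))
    M = A.T @ A                                # random PSD 2x2 block
    kappa, upsilon, omega = M[0, 0], M[0, 1], M[1, 1]
    H = (kappa * np.outer(v, v) + upsilon * (np.outer(v, w) + np.outer(w, v))
         + omega * np.outer(w, w))
    assert np.linalg.matrix_rank(H, tol=1e-8) <= 2
    assert np.all(np.linalg.eigvalsh(H) >= -1e-9)
print("H(psi) is PSD in all trials")
```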


Cite this article

Boccuto, A., Gerace, I. & Martinelli, F. Half-Quadratic Image Restoration with a Non-parallelism Constraint. J Math Imaging Vis 59, 270–295 (2017). https://doi.org/10.1007/s10851-017-0731-7
