
Exploiting Sparsity in Solving PDE-Constrained Inverse Problems: Application to Subsurface Flow Model Calibration

Chapter in Frontiers in PDE-Constrained Optimization

Abstract

Inverse problems are frequently encountered in many areas of science and engineering where observations are used to estimate the parameters of a system. In many practical applications, the dynamic processes that take place in a physical system are described by a set of partial differential equations (PDEs), which are typically nonlinear and coupled. The inverse problems that arise in such systems must be constrained to honour the governing PDEs. In this chapter, we consider high-dimensional PDE-constrained inverse problems in which, because of spatial patterns and correlations in the distribution of physical properties of a system, the underlying parameters tend to reside in (usually unknown) low-dimensional manifolds and thus have sparse (low-rank) representations. The sparsity of the parameters lends itself to an effective and flexible form of regularization that can be exploited to improve the solution of such inverse problems. In applications where prior training data are available, sparse manifold learning methods can be adopted to tailor parameter representations to the specific requirements of the prior data. However, a major risk in employing prior training data is the significant uncertainty about the underlying conceptual models and assumptions used to develop the prior. A group-sparsity formulation is discussed for addressing this uncertainty when multiple distinct, but plausible, prior scenarios are encountered. Examples from geoscience applications are presented in which images of rock material properties are reconstructed from limited nonlinear fluid flow measurements.


References

  1. Aanonsen SI, Nævdal G, Oliver DS, Reynolds AC, Vallès B, et al (2009) The ensemble Kalman filter in reservoir engineering - a review. SPE Journal 14(03):393–412

  2. Aharon M, Elad M, Bruckstein A (2006) K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing 54(11):4311–4322

  3. Ahmed N, Natarajan T, Rao KR (1974) Discrete cosine transform. IEEE Transactions on Computers C-23(1):90–93

  4. Baraniuk RG (2007) Compressive sensing [lecture notes]. IEEE Signal Processing Magazine 24(4):118–121

  5. Berinde R, Gilbert AC, Indyk P, Karloff H, Strauss MJ (2008) Combining geometry and combinatorics: A unified approach to sparse signal recovery. In: 46th Annual Allerton Conference on Communication, Control, and Computing, IEEE, pp 798–805

  6. Bhark EW, Jafarpour B, Datta-Gupta A (2011) A generalized grid connectivity-based parameterization for subsurface flow model calibration. Water Resources Research 47(6)

  7. Blumensath T, Davies ME (2009) Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis 27(3):265–274

  8. Boyd S, Vandenberghe L (2004) Convex Optimization. Cambridge University Press

  9. Bracewell RN (1986) The Fourier Transform and Its Applications. McGraw-Hill, New York

  10. Candès EJ (2008) The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique 346(9–10):589–592

  11. Candès EJ, Wakin MB (2008) An introduction to compressive sampling. IEEE Signal Processing Magazine 25(2):21–30

  12. Carrera J, Neuman SP (1986) Estimation of aquifer parameters under transient and steady state conditions: 1. Maximum likelihood method incorporating prior information. Water Resources Research 22(2):199–210

  13. Chandrasekaran V, Recht B, Parrilo PA, Willsky AS (2012) The convex geometry of linear inverse problems. Foundations of Computational Mathematics 12(6):805–849

  14. Chartrand R, Yin W (2008) Iteratively reweighted algorithms for compressive sensing. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp 3869–3872

  15. Chen GH, Tang J, Leng S (2008) Prior image constrained compressed sensing (PICCS): A method to accurately reconstruct dynamic CT images from highly undersampled projection data sets. Medical Physics 35(2):660–663

  16. Chen S, Doolen GD (1998) Lattice Boltzmann method for fluid flows. Annual Review of Fluid Mechanics 30(1):329–364

  17. Chen SS, Donoho DL, Saunders MA (2001) Atomic decomposition by basis pursuit. SIAM Review 43(1):129–159

  18. Chen Y, Oliver DS (2012) Multiscale parameterization with adaptive regularization for improved assimilation of nonlocal observation. Water Resources Research 48(4)

  19. Chorin AJ (1968) Numerical solution of the Navier-Stokes equations. Mathematics of Computation 22(104):745–762

  20. Constantin P, Foias C (1988) Navier-Stokes Equations. University of Chicago Press

  21. Donoho DL (2006) Compressed sensing. IEEE Transactions on Information Theory 52(4):1289–1306

  22. Efendiev Y, Durlofsky L, Lee S (2000) Modeling of subgrid effects in coarse-scale simulations of transport in heterogeneous porous media. Water Resources Research 36(8):2031–2041

  23. Eldar YC, Kuppinger P, Bölcskei H (2010) Block-sparse signals: Uncertainty relations and efficient recovery. IEEE Transactions on Signal Processing 58(6):3042–3054

  24. Engl HW, Hanke M, Neubauer A (1996) Regularization of Inverse Problems, vol 375. Springer Science & Business Media

  25. Feyen L, Caers J (2006) Quantifying geological uncertainty for flow and transport modeling in multi-modal heterogeneous formations. Advances in Water Resources 29(6):912–929

  26. Gavalas G, Shah P, Seinfeld JH, et al (1976) Reservoir history matching by Bayesian estimation. Society of Petroleum Engineers Journal 16(06):337–350

  27. Gholami A (2015) Nonlinear multichannel impedance inversion by total-variation regularization. Geophysics 80(5):R217–R224

  28. Golmohammadi A, Jafarpour B (2016) Simultaneous geologic scenario identification and flow model calibration with group-sparsity formulations. Advances in Water Resources 92:208–227

  29. Golmohammadi A, Khaninezhad MRM, Jafarpour B (2015) Group-sparsity regularization for ill-posed subsurface flow inverse problems. Water Resources Research 51(10):8607–8626

  30. Golub G, Kahan W (1965) Calculating the singular values and pseudo-inverse of a matrix. Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis 2(2):205–224

  31. Golub GH, Heath M, Wahba G (1979) Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics 21(2):215–223

  32. Gómez-Hernández JJ, Sahuquillo A, Capilla J (1997) Stochastic simulation of transmissivity fields conditional to both transmissivity and piezometric data - I. Theory. Journal of Hydrology 203(1–4):162–174

  33. Grimstad AA, Mannseth T, Nævdal G, Urkedal H (2003) Adaptive multiscale permeability estimation. Computational Geosciences 7(1):1–25

  34. Hansen PC (1992) Analysis of discrete ill-posed problems by means of the L-curve. SIAM Review 34(4):561–580

  35. Hansen PC (1998) Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion. SIAM

  36. Hill MC, Tiedeman CR (2006) Effective Groundwater Model Calibration: With Analysis of Data, Sensitivities, Predictions, and Uncertainty. John Wiley & Sons

  37. Jacquard P, et al (1965) Permeability distribution from field pressure data. Society of Petroleum Engineers Journal 5(04):281–294

  38. Jafarpour B, Tarrahi M (2011) Assessing the performance of the ensemble Kalman filter for subsurface flow data integration under variogram uncertainty. Water Resources Research 47(5)

  39. Jafarpour B, McLaughlin DB, et al (2009) Reservoir characterization with the discrete cosine transform. SPE Journal 14(01):182–201

  40. Jenatton R, Obozinski G, Bach F (2010) Structured sparse principal component analysis. In: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp 366–373

  41. Jolliffe IT (1986) Principal component analysis and factor analysis. In: Principal Component Analysis, Springer, pp 115–128

  42. Kandel ER, Schwartz JH, Jessell TM, Siegelbaum SA, Hudspeth AJ, et al (2000) Principles of Neural Science, vol 4. McGraw-Hill, New York

  43. Khaninezhad MM, Jafarpour B (2014) Prior model identification during subsurface flow data integration with adaptive sparse representation techniques. Computational Geosciences 18(1):3–16

  44. Khaninezhad MM, Jafarpour B, Li L (2012) Sparse geologic dictionaries for subsurface flow model calibration: Part I. Inversion formulation. Advances in Water Resources 39:106–121

  45. Khaninezhad MM, Jafarpour B, Li L (2012) Sparse geologic dictionaries for subsurface flow model calibration: Part II. Robustness to uncertainty. Advances in Water Resources 39:122–136

  46. Khodabakhshi M, Jafarpour B (2013) A Bayesian mixture-modeling approach for flow-conditioned multiple-point statistical facies simulation from uncertain training images. Water Resources Research 49(1):328–342

  47. Kitanidis PK (1997) Introduction to Geostatistics: Applications in Hydrogeology. Cambridge University Press

  48. Klema V, Laub A (1980) The singular value decomposition: Its computation and some applications. IEEE Transactions on Automatic Control 25(2):164–176

  49. Landis EM (1934) Capillary pressure and capillary permeability. Physiological Reviews 14(3):404–481

  50. Lee J, Kitanidis P (2013) Bayesian inversion with total variation prior for discrete geologic structure identification. Water Resources Research 49(11):7658–7669

  51. Li L, Jafarpour B (2010) A sparse Bayesian framework for conditioning uncertain geologic models to nonlinear flow measurements. Advances in Water Resources 33(9):1024–1042

  52. Liu X, Kitanidis P (2011) Large-scale inverse modeling with an application in hydraulic tomography. Water Resources Research 47(2)

  53. Lochbühler T, Vrugt JA, Sadegh M, Linde N (2015) Summary statistics from training images as prior information in probabilistic inversion. Geophysical Journal International 201(1):157–171

  54. Luo J, Wang W, Qi H (2013) Group sparsity and geometry constrained dictionary learning for action recognition from depth maps. In: Proceedings of the IEEE International Conference on Computer Vision, pp 1809–1816

  55. Mallat SG (1989) A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 11(7):674–693

  56. Marvasti F, Azghani M, Imani P, Pakrouh P, Heydari SJ, Golmohammadi A, Kazerouni A, Khalili M (2012) Sparse signal processing using iterative method with adaptive thresholding (IMAT). In: 19th International Conference on Telecommunications (ICT), IEEE, pp 1–6

  57. Miller K (1970) Least squares methods for ill-posed problems with a prescribed bound. SIAM Journal on Mathematical Analysis 1(1):52–74

  58. Mohimani H, Babaie-Zadeh M, Jutten C (2009) A fast approach for overcomplete sparse decomposition based on smoothed l0 norm. IEEE Transactions on Signal Processing 57(1):289–301

  59. Mueller JL, Siltanen S (2012) Linear and Nonlinear Inverse Problems with Practical Applications. SIAM

  60. Murray CD, Dermott SF (1999) Solar System Dynamics. Cambridge University Press

  61. Needell D, Tropp JA (2009) CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis 26(3):301–321

  62. Oliver DS, Chen Y (2011) Recent progress on reservoir history matching: A review. Computational Geosciences 15(1):185–221

  63. Oliver DS, Reynolds AC, Liu N (2008) Inverse Theory for Petroleum Reservoir Characterization and History Matching. Cambridge University Press

  64. Patankar S (1980) Numerical Heat Transfer and Fluid Flow. CRC Press

  65. Peterson AF, Ray SL, Mittra R (1998) Computational Methods for Electromagnetics. IEEE Press, New York

  66. Resmerita E (2005) Regularization of ill-posed problems in Banach spaces: Convergence rates. Inverse Problems 21(4):1303

  67. Riva M, Panzeri M, Guadagnini A, Neuman SP (2011) Role of model selection criteria in geostatistical inverse estimation of statistical data- and model-parameters. Water Resources Research 47(7)

  68. Rousset M, Durlofsky L (2014) Optimization-based framework for geological scenario determination using parameterized training images. In: ECMOR XIV - 14th European Conference on the Mathematics of Oil Recovery

  69. Rudin LI, Osher S, Fatemi E (1992) Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena 60(1–4):259–268

  70. Sarma P, Durlofsky LJ, Aziz K (2008) Kernel principal component analysis for efficient, differentiable parameterization of multipoint geostatistics. Mathematical Geosciences 40(1):3–32

  71. Shawe-Taylor J, Cristianini N (2004) Kernel Methods for Pattern Analysis. Cambridge University Press

  72. Shirangi MG (2014) History matching production data and uncertainty assessment with an efficient TSVD parameterization algorithm. Journal of Petroleum Science and Engineering 113:54–71

  73. Shirangi MG, Durlofsky LJ (2016) A general method to select representative models for decision making and optimization under uncertainty. Computers & Geosciences 96:109–123

  74. Snieder R (1998) The role of nonlinearity in inverse problems. Inverse Problems 14(3):387

  75. Strebelle S (2002) Conditional simulation of complex geological structures using multiple-point statistics. Mathematical Geology 34(1):1–21

  76. Suzuki S, Caers JK, et al (2006) History matching with an uncertain geological scenario. In: SPE Annual Technical Conference and Exhibition, Society of Petroleum Engineers

  77. Talukder KH, Harada K (2010) Haar wavelet based approach for image compression and quality assessment of compressed image. arXiv preprint arXiv:1010.4084

  78. Tarantola A (2005) Inverse Problem Theory and Methods for Model Parameter Estimation. SIAM

  79. Tarantola A, Valette B (1982) Generalized nonlinear inverse problems solved using the least squares criterion. Reviews of Geophysics 20(2):219–232

  80. Taubman D, Marcellin M (2012) JPEG2000 Image Compression Fundamentals, Standards and Practice, vol 642. Springer Science & Business Media

  81. Tikhonov A, Arsenin VY (1979) Methods of Solving Incorrect Problems

  82. Tošić I, Frossard P (2011) Dictionary learning. IEEE Signal Processing Magazine 28(2):27–38

  83. Tropp JA, Gilbert AC (2007) Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on Information Theory 53(12):4655–4666

  84. Vo HX, Durlofsky LJ (2014) A new differentiable parameterization based on principal component analysis for the low-dimensional representation of complex geological models. Mathematical Geosciences 46(7):775–813

  85. Vogel CR (2002) Computational Methods for Inverse Problems. SIAM

  86. Vrugt JA, Stauffer PH, Wöhling T, Robinson BA, Vesselinov VV (2008) Inverse modeling of subsurface flow and transport properties: A review with new developments. Vadose Zone Journal 7(2):843–864

  87. Yeh WWG (1986) Review of parameter identification procedures in groundwater hydrology: The inverse problem. Water Resources Research 22(2):95–108

  88. Zhou H, Gómez-Hernández JJ, Li L (2012) A pattern-search-based inverse method. Water Resources Research 48(3)

  89. Zhou H, Gómez-Hernández JJ, Li L (2014) Inverse methods in hydrogeology: Evolution and recent trends. Advances in Water Resources 63:22–37

  90. Zimmerman D, de Marsily G, Gotway CA, Marietta MG, Axness CL, Beauheim RL, Bras RL, Carrera J, Dagan G, Davies PB, et al (1998) A comparison of seven geostatistically based inverse approaches to estimate transmissivities for modeling advective transport by groundwater flow. Water Resources Research 34(6):1373–1413


Acknowledgements

The content of this chapter is based on research partially funded by the US Department of Energy, Foundation CMG, and American Chemical Society.

Appendix 1: k-SVD Dictionary Learning

The k-SVD algorithm is used to construct learned sparse dictionaries from a training dataset. The algorithm is similar to the k-means clustering method and is designed to find a dictionary \(\boldsymbol{\Phi} \in \mathbb{R}^{n\times k}\) containing k elements that sparsely represent each of the training samples in \(\mathbf{U} \in \mathbb{R}^{n\times L} = [\mathbf{u}_1 \cdots \mathbf{u}_i \cdots \mathbf{u}_L]\). To achieve this goal, the algorithm attempts to solve the following minimization problem:

$$\displaystyle \begin{aligned} \hat{\mathbf{V}},\hat{\boldsymbol{\Phi}}={\text{argmin}}_{\mathbf{V},\boldsymbol{\Phi}}\quad {\sum_{i=1}^{L}{\lVert {\mathbf{u}}_i- \boldsymbol{\Phi}{\mathbf{v}}_i \rVert}_2^2}\quad \quad \text{s.t.,}\quad \quad {\lVert {\mathbf{v}}_i \rVert}_0\leq S \quad \text{for} \quad i\in1:L\end{aligned} $$
(31)

where \(\mathbf{V} \in \mathbb{R}^{k\times L} = [\mathbf{v}_1 \cdots \mathbf{v}_i \cdots \mathbf{v}_L]\) are the expansion coefficients corresponding to the training data. Given the NP-hard nature of this problem, the k-SVD algorithm uses a greedy heuristic that divides the above optimization into two subproblems: (i) sparse coding and (ii) dictionary update. In the sparse coding step, for the current dictionary, a basis pursuit algorithm is used to find the sparse representation of each member of the training dataset. In the dictionary update step, the sparse representation obtained in the first step is held fixed and the dictionary elements are updated to reduce the sparse approximation error. These two steps are repeated until convergence. Table 2 summarizes the k-SVD algorithm; further details may be found in [2]. We note that for high-dimensional training data, k-SVD dictionary learning can be computationally expensive: the complexity of each iteration is O(L(2nk + S²k + 7Sk + S³ + 4Sn) + 5nk²), where S is the sparsity level. One strategy for improving the computational efficiency of the algorithm is to use segmentation or approximate low-rank representations of the training data (to reduce n).

Table 2 k-SVD algorithm
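Since Table 2 is not reproduced here, the two alternating steps can be illustrated with a small self-contained NumPy sketch. This is not the chapter's implementation: orthogonal matching pursuit is used for the sparse-coding step (a common greedy stand-in for the basis pursuit solver mentioned above), and all function names, dimensions, and the random initialization are illustrative assumptions.

```python
import numpy as np

def omp(Phi, u, S):
    """Greedy S-sparse coding of u in dictionary Phi (orthogonal matching pursuit)."""
    residual = u.copy()
    support = []
    v = np.zeros(Phi.shape[1])
    for _ in range(S):
        # select the atom most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit of u on the atoms selected so far
        coef, *_ = np.linalg.lstsq(Phi[:, support], u, rcond=None)
        residual = u - Phi[:, support] @ coef
    v[support] = coef
    return v

def ksvd(U, k, S, n_iter=15, seed=0):
    """Alternate sparse coding and per-atom SVD dictionary updates (Eq. 31)."""
    rng = np.random.default_rng(seed)
    n, L = U.shape
    Phi = rng.standard_normal((n, k))
    Phi /= np.linalg.norm(Phi, axis=0)            # unit-norm atoms
    V = np.zeros((k, L))
    for _ in range(n_iter):
        # (i) sparse coding with the current dictionary
        V = np.column_stack([omp(Phi, U[:, i], S) for i in range(L)])
        # (ii) dictionary update, one atom at a time
        for j in range(k):
            users = np.nonzero(V[j, :])[0]        # samples that use atom j
            if users.size == 0:
                continue
            # approximation error with atom j's contribution removed
            E = U[:, users] - Phi @ V[:, users] + np.outer(Phi[:, j], V[j, users])
            Uj, s, Vt = np.linalg.svd(E, full_matrices=False)
            Phi[:, j] = Uj[:, 0]                  # best rank-1 fit gives the new atom
            V[j, users] = s[0] * Vt[0, :]         # and its updated coefficients
    return Phi, V
```

On synthetic data generated from a random dictionary with S-sparse codes, a few iterations of this loop typically reduce the reconstruction error substantially while keeping each coefficient column at most S-sparse.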

Appendix 2: IRLS Algorithm

We use the IRLS algorithm [14] to solve the \(\ell_1\)-norm regularized least-squares minimization problem, that is:

$$\displaystyle \begin{aligned} \underset{\mathbf{v}}{\text{min}} \quad J(\mathbf{v})={\lVert \mathbf{v} \rVert}_1 + \lambda^2{\lVert \mathbf{d}- \mathbf{g}(\boldsymbol{\Phi}\mathbf{v}) \rVert}_2^2\end{aligned} $$
(32)

At iteration n of the IRLS algorithm, the \(\ell_1\)-norm is approximated by a weighted \(\ell_2\)-norm as follows:

$$\displaystyle \begin{aligned} \underset{{\mathbf{v}}^{(n)}}{\text{min}} \quad J({\mathbf{v}}^{(n)})=\sum_{i}^{}w_i^{(n)}{v_i^{(n)}}^2+ \lambda^2{\lVert \mathbf{d}- \mathbf{g}(\boldsymbol{\Phi}{\mathbf{v}}^{(n)}) \rVert}_2^2\end{aligned} $$
(33)

where \(w_i^{(n)}=\frac{1}{({v_i^{(n-1)}}^2+\epsilon^{(n)})^{0.5}}\), the superscript (n) denotes iteration n, and \(\epsilon^{(n)}\) is a sequence of small positive numbers that converges to zero as n increases. Using this approximation of the objective function, together with a first-order Taylor expansion of \(\mathbf{g}(\boldsymbol{\Phi}\mathbf{v}^{(n)})\), the objective function in (33) takes the form:

$$\displaystyle \begin{aligned} \underset{{\mathbf{v}}^{(n)}}{\text{min}} \quad J({\mathbf{v}}^{(n)})=\sum_{i}^{}w_i^{(n)}{v_i^{(n)}}^2+ \lambda^2{\lVert \mathbf{d}- \mathbf{g}(\boldsymbol{\Phi}{\mathbf{v}}^{(n-1)})- {{\mathbf{G}}_{\mathbf{v}}}^{(n)}({\mathbf{v}}^{(n)}-{\mathbf{v}}^{(n-1)}) \rVert}_2^2 \end{aligned} $$
(34)

Here, \({{\mathbf{G}}_{\mathbf{v}}}^{(n)}\) is the Jacobian matrix of \(\mathbf{g}(\cdot)\) with respect to \(\mathbf{v}\), evaluated at \(\mathbf{v}=\mathbf{v}^{(n-1)}\). The updated solution at iteration n is found by taking the derivative of the above convex function with respect to \(\mathbf{v}^{(n)}\) and setting it to zero.
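For a linear forward model, each IRLS iteration reduces to solving a reweighted normal system. The sketch below is a minimal illustration under that simplifying assumption, i.e. g(Φv) = Av for a fixed matrix A, so the Taylor linearization is exact and the Jacobian is simply A; the geometric ε schedule and all names are our own choices, not the chapter's.

```python
import numpy as np

def irls_l1(A, d, lam2, n_iter=50, eps0=1.0):
    """IRLS for min ||v||_1 + lam2 * ||d - A v||_2^2 with a linear model (Eqs. 32-33).

    Setting the gradient of the weighted objective to zero gives, at each
    iteration, the linear system (W + lam2 * A^T A) v = lam2 * A^T d, where W is
    the diagonal weight matrix built from the previous iterate.
    """
    k = A.shape[1]
    v = np.zeros(k)
    eps = eps0
    AtA = A.T @ A
    Atd = A.T @ d
    for _ in range(n_iter):
        # weights w_i = 1 / sqrt(v_i^2 + eps), from the previous iterate
        W = np.diag(1.0 / np.sqrt(v**2 + eps))
        v = np.linalg.solve(W + lam2 * AtA, lam2 * Atd)
        eps = max(0.7 * eps, 1e-10)   # drive eps toward zero, as required
    return v
```

With noise-free data, enough random measurements, and a large data-fit weight λ², the iterates typically converge to a sparse vector close to the true coefficients.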

Appendix 3: Group-Sparsity Inversion

The objective function for group-sparsity regularization can be expressed as:

$$\displaystyle \begin{aligned} \underset{\mathbf{v}}{\text{min}} \quad J(\mathbf{v})=\sum_{i=1}^{p}{\lVert {\mathbf{v}}_i \rVert}_2 + \lambda^2{\lVert \mathbf{d}- \mathbf{g}(\boldsymbol{\Phi}\mathbf{v}) \rVert}_2^2\end{aligned} $$
(35)

where the notations are discussed in the text. At iteration n, using the Gauss-Newton method and the first-order Taylor series for g( Φv), the linearized version of the above function takes the form:

$$\displaystyle \begin{aligned} \underset{{\mathbf{v}}^{(n)}}{\text{min}} \quad J({\mathbf{v}}^{(n)})=\sum_{i=1}^{p}(\sum_{j=1}^{s_i}({v_i^{j}}^{(n)})^{2})^{\frac{1}{2}} + \lambda^2{\lVert \mathbf{d}- \mathbf{g}(\boldsymbol{\Phi}{\mathbf{v}}^{(n-1)})- {{\mathbf{G}}_{\mathbf{v}}}^{(n)}({\mathbf{v}}^{(n)}-{\mathbf{v}}^{(n-1)}) \rVert}_2^2\end{aligned} $$
(36)

where \({{\mathbf{G}}_{\mathbf{v}}}^{(n)}\) is the Jacobian matrix of \(\mathbf{g}(\cdot)\), and \(v_i^{j}\) is the jth basis coefficient in the ith group. Denoting \(\boldsymbol{\Delta}{\mathbf{d}}^{(n)} = \mathbf{d}-\mathbf{g}(\boldsymbol{\Phi}{\mathbf{v}}^{(n-1)})+{{\mathbf{G}}_{\mathbf{v}}}^{(n)}{\mathbf{v}}^{(n-1)}\), (36) can be simplified to:

$$\displaystyle \begin{aligned} \underset{{\mathbf{v}}^{(n)}}{\text{min}} \quad J({\mathbf{v}}^{(n)})=\sum_{i=1}^{p}(\sum_{j=1}^{s_i}({v_i^{j}}^{(n)})^{2})^{\frac{1}{2}} + \lambda^2{\lVert \boldsymbol{\Delta}{\mathbf{d}}^{(n)}-{{\mathbf{G}}_{\mathbf{v}}}^{(n)}{\mathbf{v}}^{(n)} \rVert}_2^2\end{aligned} $$
(37)

The derivative of the regularization term with respect to \({v_i^{j}}^{(n)}\) can be approximated as:

$$\displaystyle \begin{aligned} \frac{{v_i^{j}}^{(n)}}{(\sum_{k=1}^{s_i}({v_i^{k}}^{(n)})^{2})^{\frac{1}{2}}}\approx \frac{{v_i^{j}}^{(n)}}{(\sum_{k=1}^{s_i}({v_i^{k}}^{(n-1)})^{2}+{\epsilon_i}^{(n)})^{\frac{1}{2}}}\end{aligned} $$
(38)

where \({\epsilon_i}^{(n)}\) is a small positive number used to avoid zero denominators. Note that \({v_i^{k}}^{(n)}\) in the denominator is approximated by \({v_i^{k}}^{(n-1)}\). Choosing the sequence such that \(0 < {\epsilon_i}^{(n)} < {\epsilon_i}^{(n-1)}\) and \( \underset{n\rightarrow\infty}{\text{lim}}{\epsilon_i}^{(n)}=0\), it can be shown that this approximation does not change the solution of the original minimization problem. The iterative solution of (37) can now be derived as:

$$\displaystyle \begin{aligned} ( \boldsymbol{\Lambda}^{(n)}+\alpha { {{\mathbf{G}}_{\mathbf{v}}}^{(n)}}^{T} {{\mathbf{G}}_{\mathbf{v}}}^{(n)}) {\mathbf{v}}^{(n)} = \alpha { {{\mathbf{G}}_{\mathbf{v}}}^{(n)}}^{T}\boldsymbol{\Delta}{\mathbf{d}}^{(n)}\end{aligned} $$
(39)

where \(\alpha = 2\lambda^2\), and \(\boldsymbol{\Lambda}^{(n)}\) is a diagonal matrix whose diagonal entries are \(\frac{1}{(\sum_{k=1}^{s_i}({v_i^{k}}^{(n-1)})^{2}+{\epsilon_i}^{(n)})^{\frac{1}{2}}}\), with the same weight repeated for every coefficient belonging to group i.
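The iteration in (39) can be sketched compactly for a linear forward model. In the toy example below (our construction, not the chapter's experiments), a fixed random matrix G stands in for the Jacobian, Δd reduces to d, and the groups are contiguous index blocks; the diagonal weights in Λ are recomputed from the previous iterate at every step.

```python
import numpy as np

def group_sparse_irls(G, d, groups, alpha, n_iter=50, eps0=1.0):
    """Iteratively solve (Lambda + alpha G^T G) v = alpha G^T d  (Eq. 39).

    `groups` is a list of index arrays; every coefficient in group i shares the
    weight 1 / sqrt(sum of squared group entries + eps), which drives whole
    groups toward zero together (group sparsity).
    """
    k = G.shape[1]
    v = np.zeros(k)
    eps = eps0
    GtG = G.T @ G
    Gtd = G.T @ d
    for _ in range(n_iter):
        lam = np.empty(k)
        for idx in groups:
            # one shared weight per group, built from the previous iterate
            lam[idx] = 1.0 / np.sqrt(np.sum(v[idx] ** 2) + eps)
        v = np.linalg.solve(np.diag(lam) + alpha * GtG, alpha * Gtd)
        eps = max(0.7 * eps, 1e-10)   # epsilon_i -> 0, as required above
    return v
```

Groups whose coefficients are not needed to fit the data acquire large weights and are shrunk toward zero together, which is the intended group-selection behaviour.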

Author information

Correspondence to Behnam Jafarpour.


Copyright information

© 2018 Springer Science+Business Media, LLC, part of Springer Nature

About this chapter

Cite this chapter

Golmohammadi, A., Khaninezhad, MR.M., Jafarpour, B. (2018). Exploiting Sparsity in Solving PDE-Constrained Inverse Problems: Application to Subsurface Flow Model Calibration. In: Antil, H., Kouri, D.P., Lacasse, MD., Ridzal, D. (eds) Frontiers in PDE-Constrained Optimization. The IMA Volumes in Mathematics and its Applications, vol 163. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-8636-1_12
