Abstract
This article studies a dual regularization method applied to a parametric convex optimal control problem for a controlled third boundary-value problem for a parabolic equation with boundary control and with pointwise equality and inequality state constraints. These constraints are understood as constraints in the Hilbert space \(L_2\). A major advantage of treating the constraints of the original problem in \(L_2\) is that the resulting dual regularization algorithm is stable with respect to errors in the input data and leads to the construction of a minimizing approximate solution in the sense of J. Warga. Simultaneously, this dual algorithm yields the corresponding necessary and sufficient conditions for minimizing sequences, namely the sequential (in other words, regularized) Lagrange principle in nondifferential form and the Pontryagin maximum principle for the original problem, both stable with respect to perturbations of the input data. Regardless of whether the original optimal control problem is stable or unstable, they stably generate minimizing approximate solutions for it. For this reason, the regularized Lagrange principle and Pontryagin maximum principle can be interpreted as tools for directly solving unstable optimal control problems and the unstable inverse problems that reduce to them.
Keywords
- Optimal boundary control
- Parabolic equation
- Minimizing sequence
- Dual regularization
- Stability
- Pontryagin maximum principle
1 Introduction
The Pontryagin maximum principle is the central result of optimal control theory, including optimal control for partial differential equations. Its statement and proof assume, first of all, that the optimal control problem is considered in an ideal situation, when its input data are known exactly. However, in a vast number of practically important optimal control problems, as well as in numerous problems that reduce to optimal control problems, the requirement that the input data be known exactly is very unnatural, and in many cases of undoubted interest it is simply impracticable. In such problems we cannot, strictly speaking, take as an approximation to the solution of the original (unperturbed) problem with exact input data a control that formally satisfies the maximum principle in the perturbed problem. The reason lies in the natural instability of optimization problems with respect to perturbations of their input data. Being a typical property of optimization problems in general, including constrained ones, instability fully manifests itself in optimal control problems (see, e.g., [1]). As a consequence, the instability mentioned above implies "instability" of the classical optimality conditions, including the conditions in the form of the Pontryagin maximum principle: for arbitrarily small perturbations of the input data, these conditions may select "perturbed" optimal elements that are arbitrarily distant from their unperturbed counterparts. All of the above applies in full measure both to the optimal control problem with pointwise state constraints for a linear parabolic equation in divergence form discussed below, and to the classical optimality conditions for this problem in the form of the Lagrange principle and the Pontryagin maximum principle.
In this paper we discuss how to overcome the instability of the classical optimality conditions in optimal control problems by applying the dual regularization method (see, e.g., [2,3,4]) and simultaneously passing to the concept of a minimizing sequence of admissible elements as the main concept of optimization theory. The latter role is played by the concept of a minimizing approximate solution in the sense of Warga [5]. The main attention in the paper is given to the so-called regularized, or, in other words, stable with respect to perturbations of the input data, sequential Lagrange principle in nondifferential form and Pontryagin maximum principle. Regardless of the stability or instability of the original optimal control problem, they stably generate minimizing approximate solutions for it. For this reason, the regularized Lagrange principle and Pontryagin maximum principle obtained in this article can be interpreted as tools for directly solving unstable optimal control problems and the unstable inverse problems that reduce to them [1, 6, 7]. Thus, they contribute to a significant expansion of the range of applicability of optimal control theory, in which a central role belongs to the classical constructions of the Lagrange and Hamilton-Pontryagin functions. Finally, we note that the regularized Lagrange principle in nondifferential form and the Pontryagin maximum principle discussed in this article may take another form, more convenient for applications [7]. The justification of these alternative forms is based on the so-called method of iterative dual regularization [2, 3]. In that case they take the form of iterative processes with corresponding stopping rules when the error in the input data is fixed and finite. These alternative forms are not considered here.
2 Statement of Optimal Control Problem
We consider the fixed-time parametric optimal control problem
with equality and inequality pointwise state constraints understood as constraints in the Hilbert space \(\mathcal{H}\equiv L_2(Q)\); \(\mathcal{D}\equiv \{u\in L_2(Q_T): \,u(x,t)\in U\,\,\mathrm { for\,a.e.}\,\,(x,t)\in Q_T\}\times \{w\in L_2(S_T): \,w(x,t)\in W\,\,\mathrm {for\,\,a.e.}\,\,(x,t)\in S_T\}\); \(U,\,W\subset \mathbb {R}^1\) are convex compact sets. In this problem, \(p\in \mathcal{H}\) and \(r\in \mathcal{H}\) are parameters; \(g_0^\delta :\,L_2(Q_T)\times L_2(S_T)\rightarrow \mathbb {R}^1\) is a continuous convex functional; \(Q\subset \overline{Q}_{\iota ,T}\) is a compact set without isolated points with a nonempty interior, \(\iota \in (0,T)\), \(Q = \mathrm {cl\,int}\, Q\); and \(z^\delta [\pi ] \in V_2^{1,0}(Q_T)\,\cap \,C(\overline{Q}_T)\) is a weak solution [8, 9] of the third boundary-value problemFootnote 1
corresponding to the pair \(\pi \equiv (u,w)\). The superscript \(\delta \) in the input data of Problem (\(P_{p,r}^\delta \)) indicates that these data are exact (\(\delta =0\)) or perturbed (\(\delta >0\)), i.e., they are specified with an error, \(\delta \in [0,\delta _0]\), where \(\delta _0>0\) is a fixed number.
For definiteness, as the target functional we take a terminal one
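The display for this terminal functional is not reproduced in this version; a natural form consistent with condition (b) below (a hedged reconstruction, not necessarily the authors' exact expression) is

```latex
g_0^\delta(\pi) \equiv \int_\Omega G^\delta\bigl(x,\, z^\delta[\pi](x,T)\bigr)\,dx,
\qquad \pi \equiv (u,w) \in \mathcal{D}.
```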
The input data for Problem (\(P_{p,r}^0\)) are assumed to meet the following conditions:
- (a) It is true that \(a_{i,j}\in L_\infty (Q_T),\,\,i,j=1,\dots ,n\), \(a^\delta \in L_\infty (Q_T)\), \(\sigma ^\delta \in L_\infty (S_T)\), \(v_0^\delta \in C(\overline{\varOmega })\),
$$ \nu |\xi |^2\le a_{i,j}(x,t)\xi _i\xi _j\le \mu |\xi |^2\quad \forall (x,t)\in Q_T,\quad \nu ,\mu >0, $$
$$ a^\delta (x,t)\ge C_0\,\,\text {for a.e. } (x,t)\in Q_T,\quad \sigma ^\delta (x,t)\ge C_0\,\, \text {for a.e. } (x,t)\in S_T ; $$
- (b) It is true that \(\varphi _1^\delta ,\,h^\delta \in L_\infty (Q)\); \(\varphi _2^\delta :\,Q\times \mathbb {R}^1\rightarrow \mathbb {R}^1\) is a Lebesgue measurable function that is continuous and convex with respect to z for a.e. \((x,t)\in Q\), \(\varphi _2^\delta (\cdot ,\cdot ,z(\cdot ,\cdot ))\in L_\infty (Q)\) \(\forall z\in C(Q)\); \(G^\delta :\,\varOmega \times \mathbb {R}^1\rightarrow \mathbb {R}^1\) is a Lebesgue measurable function that is continuous and convex with respect to z for a.e. \(x\in \varOmega \), \(G^\delta (\cdot ,z(\cdot ,T))\in L_\infty (\varOmega )\) \(\forall z(\cdot ,T)\in C(Q)\);
- (c) \(\varOmega \subset \mathbb {R}^n\) is a bounded domain with Lipschitz boundary S.
Assume that the following estimates hold:
where \(C,\,C_M>0\) are independent of \(\delta \); \(S_M^n\equiv \{x\in \mathbb {R}^n:\,|x|<M\}\). Note that the conditions on the input data of Problem (\(P_{p,r}^\delta \)), as well as the estimates of the deviations of the perturbed input data from the exact ones, can be weakened.
To discuss the main results related to the stable sequential Lagrange principle and Pontryagin maximum principle in Problem (\(P_{p,r}^0\)), we use the scheme developed in the papers [10, 11] for similar optimization problems governed by systems of controlled ordinary differential equations. In those works, both the spaces of admissible controls and the spaces containing the images of the operators that define the pointwise state constraints are Hilbert spaces of square-integrable functions. For this reason, we also embed the set \(\mathcal{D}\) of admissible controls \(\pi \) into a Hilbert space, i.e., we assume that \(\mathcal{D}\subset Z\equiv L_2(Q_T)\times L_2(S_T)\), \(\Vert \pi \Vert \equiv (\Vert u\Vert _{2,Q_T}^2+\Vert w\Vert _{2,S_T}^2)^{1/2}\). At the same time, we note that the conditions on the input data of Problem (\(P_{p,r}^\delta \)) formally allow the operators \(g_1^\delta ,\,g_2^\delta \) specifying the state constraints to act into the space \( L_p(Q) \) with any index \(p\in [1,+\infty ]\). However, in this paper, taking the above remark into account, we place the images of these operators in the Hilbert space \(L_2(Q)\equiv \mathcal{H}\).
Suppose that Problem (\(P_{p,r}^0\)) has a solution (which is unique if \(g_0^0\) is strictly (strongly) convex). Its solutions are denoted by \(\pi _{p,r}^0\equiv (u_{p,r}^0,w_{p,r}^0)\), and the set of all such solutions is denoted by \(U_{p,r}^0\). Define the Lagrange functional, the set of its minimizers, and the concave dual problem
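The corresponding displays are omitted in this version; in the standard notation of the dual regularization literature (see [2,3,4]) they presumably take the following form (a hedged reconstruction):

```latex
L_{p,r}^\delta(\pi,\lambda,\mu) \equiv g_0^\delta(\pi)
  + \langle \lambda,\, g_1^\delta(\pi)-p \rangle
  + \langle \mu,\, g_2^\delta(\pi)-r \rangle ,
\qquad
U^\delta[\lambda,\mu] \equiv \mathrm{Argmin}\,\{L_{p,r}^\delta(\pi,\lambda,\mu):\ \pi\in\mathcal{D}\},
```
```latex
V_{p,r}^\delta(\lambda,\mu) \equiv \inf_{\pi\in\mathcal{D}} L_{p,r}^\delta(\pi,\lambda,\mu)
  \ \rightarrow\ \max, \qquad (\lambda,\mu)\in \mathcal{H}\times\mathcal{H}_+ .
```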
Since the Lagrange functional is continuous and convex for any pair \((\lambda ,\mu )\in \mathcal{H}\times \mathcal{H}_+\) and the set \(\mathcal{D}\) is bounded, the dual functional \(V_{p,r}^\delta \) is obviously defined and finite for any \((\lambda ,\mu )\in \mathcal{H}\times \mathcal{H}_+\).
The concept of a minimizing approximate solution in the sense of Warga [5] is of great importance for the design of a dual regularizing algorithm for problem (\(P_{p,r}^0\)). Recall that a minimizing approximate solution is a sequence \(\pi ^i\equiv (u^i,w^i),\, i=1,2,\dots \) such that \(g_0^0(\pi ^i)\le \beta (p,r)+\delta ^i\), \(\pi ^i\in \mathcal{D}_{p, r}^{0,\epsilon ^i}\) for some nonnegative number sequences \(\delta ^i\) and \(\epsilon ^i\), \(i=1,2,\dots \), that converge to zero. Here, \(\beta (p,r)\) is the generalized infimum, i.e., an S-function:
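The display defining \(\beta \) is omitted here; in the notation of the constraint sets \(\mathcal{D}_{p,r}^{0,\epsilon }\) it presumably reads (a hedged reconstruction consistent with the surrounding text):

```latex
\mathcal{D}_{p,r}^{0,\epsilon}\equiv\bigl\{\pi\in\mathcal{D}:\ \Vert g_1^0(\pi)-p\Vert_{2,Q}\le\epsilon,
\ \ g_2^0(\pi)-r\le\epsilon\bigr\},\qquad
\beta(p,r)\equiv\lim_{\epsilon\to 0+}\,\inf\bigl\{g_0^0(\pi):\ \pi\in\mathcal{D}_{p,r}^{0,\epsilon}\bigr\},
```

where the inequality is understood in the sense of the ordering on the cone of nonpositive functions in \(L_2(Q)\).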
Obviously, in the general situation \(\beta (p,r)\le \beta _0(p,r)\), where \(\beta _0(p,r)\) is the classical value of the problem. However, in the case of Problem \((P_{p,r}^0)\) we have \(\beta (p,r)=\beta _0(p,r)\). Simultaneously, we may assert that \(\beta :\,L_2(Q)\times L_2(Q)\rightarrow \mathbb {R}^1\cup \{+\infty \}\) is a convex and lower semicontinuous function. Note here that the existence of a minimizing approximate solution in Problem (\(P_{p,r}^0\)) obviously implies its solvability.
From the conditions (a)–(c) and the theorem on the existence of a weak solution of the third boundary-value problem for a linear parabolic equation of the divergent type (see [8, chap. III, Sect. 5] and also [12]), it follows that the direct boundary-value problem (1) and the corresponding adjoint problem are uniquely solvable in \(V^{1,0}_2(Q_T)\).
Proposition 1
For any pair \((u,w)\in L_2(Q_T)\times L_2(S_T)\) and any \(T>0\) the direct boundary-value problem (1) is uniquely solvable in \(V^{1,0}_2(Q_T)\) and the estimate
takes place, where the constant \(C_T\) is independent of \(\delta \ge 0\) and of the pair \(\pi \equiv (u,w)\in L_2(Q_T)\times L_2(S_T)\). Also, the adjoint problem
is uniquely solvable in \(V^{1,0}_2(Q_T)\) for any \(\chi \in L_2(Q_T)\), \(\psi \in L_2(\varOmega )\), \(\omega \in L_2(S_T)\) and any \(T>0\). Its solution is denoted as \(\eta [\chi ,\psi ,\omega ]\). Simultaneously, the estimate
is true, where the constant \(C_T^1\) is independent of \(\delta \ge 0\) and of the triple \((\chi ,\psi ,\omega )\).
Simultaneously, from conditions (a)–(c) and the theorems on the existence of a weak (generalized) solution of the third boundary-value problem for a linear parabolic equation of the divergent type (see, e.g., [9]), it follows that the direct boundary-value problem is uniquely solvable in \(V^{1,0}_2(Q_T)\,\cap \,C(\overline{Q}_T)\).
Proposition 2
Let \(l>n+1\). For any pair \((u,w)\in L_l(Q_T)\times L_l(S_T)\), any \(T>0\), and any \(\delta \in [0,\delta _0]\), the direct boundary-value problem (1) is uniquely solvable in \(V^{1,0}_2(Q_T)\cap C(\overline{Q}_T)\) and the estimate
takes place, where the constant \(C_T\) is independent of pair \(\pi \equiv (u,w)\) and \(\delta \).
Further, the minimization problem for Lagrange functional
plays a central role in all subsequent constructions. It is a usual problem without equality and inequality constraints. It is solvable as a minimization problem for a weakly lower semicontinuous functional on the weakly compact set \(\mathcal{D}\subset L_2(Q_T)\times L_2(S_T)\). Here, the weak lower semicontinuity is a consequence of the convexity and continuity of the Lagrange functional with respect to \(\pi \). Minimizers \(\pi ^\delta [\lambda ,\mu ]\in U^\delta [\lambda ,\mu ]\) of this optimal control problem satisfy the Pontryagin maximum principle under the supplementary assumption that the gradients \(\nabla _z\varphi _2^\delta (x,t,z)\), \(\nabla _zG^\delta (x,z)\) exist, are Lebesgue measurable with respect to \((x,t)\in Q\) for all \(z\in \mathbb {R}^1\), are continuous with respect to z for a.e. x, t, and satisfy the estimates \(|\nabla _z\varphi _2^\delta (x,t,z)|\le C_M\), \(|\nabla _zG^\delta (x,z)|\le C_M\) \(\forall z\in S_M^1\), where \(C_M>0\) is independent of \(\delta \). Due to the estimates of Propositions 1 and 2 and to the so-called two-parameter variation [13] of the pair \(\pi ^\delta [\lambda ,\mu ]\), which is needle-shaped with respect to the control u and classical with respect to the control w, the following lemma holds.
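Once discretized, problem (3) is ordinary box-constrained convex minimization, so a minimizer \(\pi ^\delta [\lambda ,\mu ]\) can be approximated by projected gradient descent. The following Python sketch uses purely illustrative stand-ins (a quadratic target, a linear constraint operator `A`, a box playing the role of \(\mathcal{D}\)); it is not the paper's construction:

```python
import numpy as np

def minimize_lagrangian(A, p, lam, u_star, lo, hi, step=0.1, iters=500):
    """Projected gradient descent on the discretized Lagrange functional
    L(u) = 0.5*||u - u_star||^2 + <lam, A u - p> over the box [lo, hi]^n.
    The term <lam, p> is constant in u, so p does not affect the minimizer."""
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = (u - u_star) + A.T @ lam       # gradient of g0 plus the linear dual term
        u = np.clip(u - step * grad, lo, hi)  # projection onto the box D
    return u
```

Because the discretized Lagrange functional is strongly convex, the iteration converges geometrically for any sufficiently small step, mirroring the solvability of problem (3) noted above.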
Lemma 1
Let \(H(y,\eta )\equiv -\eta y\) and let the additional condition specified above be fulfilled. Any pair \(\pi ^\delta [\lambda ,\mu ]=(u^\delta [\lambda ,\mu ],w^\delta [\lambda ,\mu ])\in U^\delta [\lambda ,\mu ],\,\,(\lambda ,\mu )\in L_2(Q)\times L_2^+(Q)\) satisfies the (usual) Pontryagin maximum principle in problem (3): for \(\pi =\pi ^\delta [\lambda ,\mu ]\) the following maximum relations
hold, where \(\eta ^\delta (x,t),\,(x,t)\in Q_T\), is the solution, for \(\pi =\pi ^\delta [\lambda ,\mu ]\), of the adjoint problem
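The displays for the maximum relations and the adjoint problem are omitted in this version. For the linear-in-control Hamiltonian \(H(y,\eta )\equiv -\eta y\), the maximum relations presumably take the standard pointwise form (a hedged reconstruction; the adjoint problem itself is not reconstructed here):

```latex
H\bigl(u^\delta[\lambda,\mu](x,t),\,\eta^\delta(x,t)\bigr)
  = \max_{y\in U} H\bigl(y,\,\eta^\delta(x,t)\bigr) \quad \text{for a.e. } (x,t)\in Q_T,
```
```latex
H\bigl(w^\delta[\lambda,\mu](x,t),\,\eta^\delta(x,t)\bigr)
  = \max_{y\in W} H\bigl(y,\,\eta^\delta(x,t)\bigr) \quad \text{for a.e. } (x,t)\in S_T.
```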
Remark 1
Note that here and below, if the functions \(\varphi _1^\delta ,\,\nabla _z\varphi _2^\delta (\cdot ,\cdot ,z(\cdot ,\cdot ))\), \(\lambda ,\mu \in L_2(Q)\) are considered on the entire cylinder \(Q_T\), we set \(\varphi _1^\delta (x,t)=\nabla _z\varphi _2^\delta (x,t,z(x,t))=\lambda (x,t)=\mu (x,t)=0\) for \((x,t)\in Q_T\setminus Q\); the same notation is preserved for these extended functions.
In the next section we construct minimizing approximate solutions for Problem (\(P_{p,r}^0\)) from the elements \(\pi ^\delta [\lambda ,\mu ],\,(\lambda ,\mu )\in L_2(Q)\times L_2^+(Q)\). As a consequence, this construction leads us to various versions of the stable sequential Lagrange principle and Pontryagin maximum principle. In the case of strong convexity and subdifferentiability of the target functional \(g_0^0\), these versions are statements about stable approximation of the solutions of Problem (\(P_{p,r}^0\)) in the metric of \(Z\equiv L_2(Q_T)\times L_2(S_T)\) by the points \(\pi ^\delta [\lambda ,\mu ]\). Due to the estimates (2) and Propositions 1 and 2, we may assert that the estimates
hold, in which the constants \(C_1,\,C_2,\,C_3>0\) are independent of \(\delta \in (0,\delta _0]\), \(\pi \).
3 Stable Sequential Pontryagin Maximum Principle
In this section we discuss the so-called regularized, or, in other words, stable with respect to errors in the input data, sequential Pontryagin maximum principle for Problem \((P_{p,r}^0)\) as a necessary and sufficient condition for elements of minimizing approximate solutions. Simultaneously, we may treat this condition as a condition for the existence of a minimizing approximate solution in Problem \((P_{p,r}^0)\) with perturbed input data, or as a condition for the stable construction of a minimizing sequence in this problem. The proof of the necessity of this condition is based on the dual regularization method [2,3,4], which is a stable algorithm for constructing minimizing approximate solutions in Problem \((P_{p,r}^0)\). Sketches of the proofs of the theorems in this section (Theorems 1, 2 and 3) and some comments may be found in [14, 15].
3.1 Dual Regularization for Optimal Control Problem with Pointwise State Constraints
The estimates (5) make it possible to organize the dual regularization procedure for constructing a minimizing approximate solution in Problem \((P_{p,r}^0)\) in accordance with the scheme of the paper [11]. In accordance with this scheme, dual regularization consists in directly solving the dual problem of Problem \((P_{p,r}^0)\) through its Tikhonov stabilization
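The stabilized dual functional is not displayed in this version; presumably it is the dual functional penalized by a Tikhonov term (a hedged reconstruction):

```latex
R_{p,r}^{\delta,\alpha}(\lambda,\mu) \equiv V_{p,r}^\delta(\lambda,\mu) - \alpha\Vert(\lambda,\mu)\Vert^2
\ \rightarrow\ \max, \qquad (\lambda,\mu)\in L_2(Q)\times L_2^+(Q).
```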
under the consistency condition \(\delta /{\alpha (\delta )}\rightarrow 0,\,\,\alpha (\delta )\rightarrow 0\) as \(\delta \rightarrow 0\). This dual regularization leads to the construction of a minimizing approximate solution in Problem (\(P_{p,r}^0\)) from the elements \(\pi ^\delta [\lambda _{p,r}^{\delta ,\alpha (\delta )},\mu _{p,r}^{\delta ,\alpha (\delta )}]\in \mathrm {Argmin}\,\{L_{p,r}^\delta (\pi ,\lambda ,\mu ): \pi \in \mathcal{D}\}\), where \((\lambda _{p,r}^{\delta ,\alpha },\mu _{p,r}^{\delta ,\alpha })\equiv \mathrm {argmax}\{R_{p,r}^{\delta ,\alpha }(\lambda ,\mu ):\,(\lambda ,\mu )\in L_2(Q)\times L_2^+(Q)\}\) and \(\delta \rightarrow 0\).
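On a finite-dimensional toy problem this scheme can be sketched in a few lines of Python. Everything below is an illustrative assumption (a quadratic \(g_0\), a linear equality constraint \(Au=p\), a box standing in for \(\mathcal{D}\), and the choice \(\alpha (\delta )=\sqrt{\delta }\), which satisfies the consistency condition \(\delta /\alpha (\delta )\rightarrow 0\)); it is a sketch of the idea, not the paper's algorithm:

```python
import numpy as np

def inner_argmin(A, lam, u_star, lo, hi, step=0.1, iters=400):
    """Projected gradient descent for the inner problem: minimize
    0.5*||u - u_star||^2 + <lam, A u> over the box (the p-term is constant in u)."""
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        u = np.clip(u - step * ((u - u_star) + A.T @ lam), lo, hi)
    return u

def dual_regularization(A, p_noisy, u_star, lo, hi, delta, ascent_iters=300):
    """Gradient ascent on the Tikhonov-stabilized dual
    R(lam) = V(lam) - alpha*||lam||^2 with alpha = sqrt(delta)."""
    alpha = np.sqrt(delta)
    lam = np.zeros(A.shape[0])
    for _ in range(ascent_iters):
        u = inner_argmin(A, lam, u_star, lo, hi)
        # super-gradient of the stabilized dual: constraint residual minus 2*alpha*lam
        lam = lam + 0.05 * ((A @ u - p_noisy) - 2.0 * alpha * lam)
    return inner_argmin(A, lam, u_star, lo, hi), lam
```

For small \(\delta \) the returned control approaches the exact constrained minimizer, while the dual variable approximates the minimum-norm dual solution, mirroring the limit relations of the "convergence" theorem below.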
We may assert that the following “convergence” theorem for the dual regularization method in Problem \((P_{p,r}^0)\) is valid.
Theorem 1
Regardless of the solvability properties of the dual problem to Problem \((P_{p,r}^0)\) or, in other words, regardless of the properties of the subdifferential \(\partial \beta (p,r)\) (whether it is empty or not), there exist elements \(\pi ^\delta \in U^\delta [\lambda _{p,r}^{\delta ,\alpha (\delta )},\mu _{p,r}^{\delta ,\alpha (\delta )}]\) such that the relations
hold, in which the inequality \(g_2^0(\pi ^\delta )-r\le \kappa (\delta )\) is understood in the sense of ordering on a cone of nonpositive functions in \(L_2(Q)\). Simultaneously, the equality
is valid. If the dual of Problem \((P_{p,r}^0)\) is solvable, then the limit relation \((\lambda _{p,r}^{\delta ,\alpha (\delta )},\mu _{p,r}^{\delta ,\alpha (\delta )}) \rightarrow (\lambda _{p,r}^0,\mu _{p,r}^0)\), \(\delta \rightarrow 0\), is also valid, where \((\lambda _{p,r}^0,\mu _{p,r}^0)\) denotes the minimum-norm solution of the dual problem.
This theorem may be proved in exact accordance with the scheme of the proof of the similar theorem in [11]. We note only that, as in [11], the proof uses the weak continuity of the operators \(g_1^\delta ,\,g_2^\delta \), which is a consequence of the conditions on the input data of Problem (\(P_{p,r}^0\)), and the regularity of bounded solutions of the boundary-value problem (1) inside the cylinder \(Q_T\) [8, chap. III, Theorem 10.1].
3.2 Stable Sequential Lagrange Principle for Optimal Control Problem with Pointwise State Constraints
In this subsection we formulate the necessary and sufficient condition for the existence of a minimizing approximate solution in Problem \((P_{p,r}^0)\). It can also be called the stable sequential Lagrange principle in nondifferential form for this problem. Simultaneously, since we deal only with the regular Lagrange function, the formulated theorem may be called the Kuhn-Tucker theorem in nondifferential form. Note that the necessity of the conditions of the theorem below follows from Theorem 1. At the same time, their sufficiency is a simple consequence of the convexity of Problem \((P_{p,r}^0)\) and the conditions on its input data. A verification of these propositions in the similar situation of a convex programming problem in a Hilbert space may be found in [1, 7].
Theorem 2
Regardless of the properties of the subdifferential \(\partial \beta (p,r)\) (whether it is empty or not) or, in other words, regardless of the solvability properties of the dual problem to Problem \((P_{p,r}^0)\), a necessary and sufficient condition for Problem \((P_{p,r}^0)\) to have a minimizing approximate solution is that there exists a sequence of dual variables \((\lambda ^k,\mu ^k)\in \mathcal{H}\times \mathcal{H}_+\), \(k=1,2,\dots \), such that \(\delta ^k\Vert (\lambda ^k,\mu ^k)\Vert \rightarrow 0\), \(k\rightarrow \infty \), and the relations
hold for some elements \(\pi ^{\delta ^k}[\lambda ^k,\mu ^k]\in U^{\delta ^k}[\lambda ^k,\mu ^k]\). The sequence \(\pi ^{\delta ^k}[\lambda ^k,\mu ^k]\), \(k=1,2,\dots \), is the desired minimizing approximate solution, and each of its weak limit points is a solution of Problem \((P_{p,r}^0)\). As \((\lambda ^k,\mu ^k)\in \mathcal{H}\times \mathcal{H}_+\), \(k=1,2,\dots \), we can use the sequence of points \((\lambda _{p,r}^{\delta ^k,\alpha (\delta ^k)},\mu _{p,r}^{\delta ^k,\alpha (\delta ^k)})\), \(k=1,2,\dots \), generated by the dual regularization method of Theorem 1. If the dual of Problem \((P_{p,r}^0)\) is solvable, the sequence \((\lambda ^k,\mu ^k)\in \mathcal{H}\times \mathcal{H}_+\), \(k=1,2,\dots \), may be assumed to be bounded. The limit relation
holds as a consequence of relations (6) and (7). Furthermore, each weak limit point (if any exist) of the sequence \((\lambda ^k,\mu ^k)\in \mathcal{H}\times \mathcal{H}_+,\,\,k=1,2,\dots \), is a solution of the dual problem \(V_{p,r}^0(\lambda ,\mu )\rightarrow \max ,\,\,(\lambda ,\mu )\in \mathcal{H}\times \mathcal{H}_+\).
Remark 2
If the functional \(g_0^0\) is strongly convex and subdifferentiable on \(\mathcal{D}\), then the strong convergence \(\pi ^{\delta ^k}[\lambda ^k,\mu ^k]\rightarrow \pi _{p,r}^0\), \(k\rightarrow \infty \), follows from the weak convergence of the (in this case unique) elements \(\pi ^{\delta ^k}[\lambda ^k,\mu ^k]\) to the unique element \(\pi _{p,r}^0\) as \(k\rightarrow \infty \), together with the numerical convergence \(g_0^0(\pi ^{\delta ^k}[\lambda ^k,\mu ^k])\rightarrow g_0^0(\pi _{p,r}^0)\), \(k\rightarrow \infty \). Problem \((P_{p,r}^0)\) with strongly convex \(g_0^0\) for a linear system of ordinary differential equations, but with exact input data, is studied in [10].
3.3 Stable Sequential Pontryagin Maximum Principle for Optimal Control Problem with Pointwise State Constraints
Denote by \(U_{max}^\delta [\lambda ,\mu ]\) the set of elements \(\pi \in \mathcal{D}\) that satisfy all relations of the maximum principle (4) of Lemma 1. Under the supplementary condition of the existence of gradients \(\nabla _z\varphi _2^\delta (x,t,z)\), \(\nabla _zG^\delta (x,z)\) continuous with respect to z and satisfying the corresponding estimates, the proposition of Theorem 2 may be rewritten in the form of the stable sequential Pontryagin maximum principle. It is obvious that the equality \(U_{max}^\delta [\lambda ,\mu ]=U^\delta [\lambda ,\mu ]\) holds under the mentioned supplementary condition.
Theorem 3
Regardless of the properties of the subdifferential \(\partial \beta (p,r)\) (whether it is empty or not) or, in other words, regardless of the solvability properties of the dual problem to Problem \((P_{p,r}^0)\), a necessary and sufficient condition for Problem \((P_{p,r}^0)\) to have a minimizing approximate solution is that there exists a sequence of dual variables \((\lambda ^k,\mu ^k)\in \mathcal{H}\times \mathcal{H}_+\), \(k=1,2,\dots \), such that \(\delta ^k\Vert (\lambda ^k,\mu ^k)\Vert \rightarrow 0\), \(k\rightarrow \infty \), and relations (6) and (7) hold for some elements \(\pi ^{\delta ^k}[\lambda ^k,\mu ^k]\in U_{max}^{\delta ^k}[\lambda ^k,\mu ^k]\). Moreover, the sequence \(\pi ^{\delta ^k}[\lambda ^k,\mu ^k]\), \(k=1,2,\dots \), is the desired minimizing approximate solution, and each of its weak limit points is a solution of Problem \((P_{p,r}^0)\). As \((\lambda ^k,\mu ^k)\in \mathcal{H}\times \mathcal{H}_+\), \(k=1,2,\dots \), we can use the sequence of points \((\lambda _{p,r}^{\delta ^k,\alpha (\delta ^k)},\mu _{p,r}^{\delta ^k,\alpha (\delta ^k)})\), \(k=1,2,\dots \), generated by the dual regularization method of Theorem 1. If the dual of Problem \((P_{p,r}^0)\) is solvable, the sequence \((\lambda ^k,\mu ^k)\in \mathcal{H}\times \mathcal{H}_+\), \(k=1,2,\dots \), may be assumed to be bounded. The limit relation (8) holds as a consequence of relations (6) and (7).
Remark 3
When the inequality constraint in Problem (\(P_{p,r}^0\)) is absent, i.e., \((P_{p,r}^0)=(P_p^0)\), and \(\varphi _2(x,t)=r\equiv 0\), \(\varphi _1(x,t)\equiv 1\), while the target functional \(g_0^0\) is taken, for example, in the form \(g_0^0(\pi )\equiv \Vert \pi \Vert ^2\equiv \Vert u\Vert ^2+\Vert w\Vert ^2\), Problem \((P_p^0)\) acquires the typical form of an unstable inverse problem. In this case the stable sequential Pontryagin maximum principle of Theorem 3 becomes a tool for directly solving this unstable inverse problem.
Remark 4
In the important particular case of Problem \((P_{p,r}^0)=(P_r^0)\), when it has only the inequality constraint \((\varphi _1^\delta (x,t)=h^\delta (x,t)=p(x,t)=0,\,\,(x,t)\in Q)\), a "weak" passage to the limit in the relations of Theorem 3 leads to the Pontryagin maximum principle usual for similar optimal control problems (see, e.g., [9, 16]), with nonnegative Radon measures in the input data of the adjoint equation.
Notes
- 1.
Here and below, we use the notations for the sets \(Q_T\), \(S_T\), \(Q_{i,T}\) and also for functional spaces and norms of their elements adopted in monograph [8].
References
1. Sumin, M.I.: Stable sequential convex programming in a Hilbert space and its application for solving unstable problems. Comput. Math. Math. Phys. 54, 22–44 (2014)
2. Sumin, M.I.: A regularized gradient dual method for the inverse problem of a final observation for a parabolic equation. Comput. Math. Math. Phys. 44, 1903–1921 (2004)
3. Sumin, M.I.: Duality-based regularization in a linear convex mathematical programming problem. Comput. Math. Math. Phys. 46, 579–600 (2007)
4. Sumin, M.I.: Regularized parametric Kuhn-Tucker theorem in a Hilbert space. Comput. Math. Math. Phys. 51, 1489–1509 (2011)
5. Warga, J.: Optimal Control of Differential and Functional Equations. Academic Press, New York (1972)
6. Sumin, M.I.: Dual regularization and Pontryagin's maximum principle in a problem of optimal boundary control for a parabolic equation with nondifferentiable functionals. Proc. Steklov Inst. Math. 275(Suppl.), S161–S177 (2011)
7. Sumin, M.I.: On the stable sequential Kuhn-Tucker theorem and its applications. Appl. Math. 3, 1334–1350 (2012)
8. Ladyzhenskaya, O.A., Solonnikov, V.A., Ural'tseva, N.N.: Linear and Quasilinear Equations of Parabolic Type. American Mathematical Society, Providence (1968)
9. Casas, E., Raymond, J.-P., Zidani, H.: Pontryagin's principle for local solutions of control problems with mixed control-state constraints. SIAM J. Control Optim. 39, 1182–1203 (2000)
10. Sumin, M.I.: Parametric dual regularization for an optimal control problem with pointwise state constraints. Comput. Math. Math. Phys. 49, 1987–2005 (2009)
11. Sumin, M.I.: Stable sequential Pontryagin maximum principle in optimal control problem with state constraints. In: Proceedings of XIIth All-Russia Conference on Control Problems, pp. 796–808. Institute of Control Sciences of RAS, Moscow (2014)
12. Plotnikov, V.I.: Existence and uniqueness theorems and a priori properties of weak solutions. Dokl. Akad. Nauk SSSR 165, 33–35 (1965)
13. Sumin, M.I.: The first variation and Pontryagin's maximum principle in optimal control for partial differential equations. Comput. Math. Math. Phys. 49, 958–978 (2009)
14. Sumin, M.I.: Stable sequential Pontryagin maximum principle in optimal control for distributed systems. In: International Conference "Systems Dynamics and Control Processes" Dedicated to the 90th Anniversary of Academician N.N. Krasovskii (Ekaterinburg, Russia, 15–20 September 2014), pp. 301–308. Ural Federal University, Ekaterinburg (2015)
15. Sumin, M.I.: Subdifferentiability of value functions and regularization of Pontryagin maximum principle in optimal control for distributed systems. Tambov State University Reports. Series: Natural and Technical Sciences, vol. 20, pp. 1461–1477 (2015)
16. Raymond, J.-P., Zidani, H.: Pontryagin's principle for state-constrained control problems governed by parabolic equations with unbounded controls. SIAM J. Control Optim. 36, 1853–1879 (1998)
Acknowledgments
This work was supported by the Russian Foundation for Basic Research (project no. 15-47-02294-r_povolzh’e_a) and by the Ministry of Education and Science of the Russian Federation within the framework of project part of state tasks in 2014–2016 (code no. 1727).
© 2016 IFIP International Federation for Information Processing
Sumin, M. (2016). Stable Sequential Pontryagin Maximum Principle as a Tool for Solving Unstable Optimal Control and Inverse Problems for Distributed Systems. In: Bociu, L., Désidéri, JA., Habbal, A. (eds) System Modeling and Optimization. CSMO 2015. IFIP Advances in Information and Communication Technology, vol 494. Springer, Cham. https://doi.org/10.1007/978-3-319-55795-3_46