1 Introduction and Background

1.1 Convex Relaxations of Partitioning Problems

In this work, we will be concerned with a class of variational problems used in image processing and analysis for formulating multiclass image partitioning problems, which are of the form

$$ \min_{u \in \mathcal{C}_{\mathcal{E}}} f (u), \qquad f (u) := \int_{\varOmega} \bigl\langle u (x), s (x) \bigr\rangle \,\mathrm{d}x + J (u), $$
(1)
$$ J (u) := \int_{\varOmega} \varPsi (D u), $$
(2)
$$ \mathcal{C}_{\mathcal{E}} := \bigl\{ u \in \operatorname{BV} \bigl( \varOmega, \mathbb{R}^{l} \bigr) \mid u (x) \in \mathcal{E} \ \text{for } \mathcal{L}^{d} \text{-a.e. } x \in \varOmega \bigr\}, $$
(3)
$$ \mathcal{E} := \bigl\{ e^{1}, \ldots, e^{l} \bigr\}. $$
(4)

The labeling function \(u : \varOmega \rightarrow \mathbb{R}^{l}\) assigns to each point in the image domain \(\varOmega \subset \mathbb{R}^{d}\) a label \(i \in \mathcal{I} :=\{1, \ldots, l\}\), which is represented by one of the l-dimensional unit vectors \(e^{1}, \ldots, e^{l}\). Since the labeling function is piecewise constant and therefore cannot be assumed to be differentiable, the problem is formulated as a free discontinuity problem in the space \(\operatorname{BV} (\varOmega, \mathcal{E})\) of functions of bounded variation; see [2] for an overview. We generally assume Ω to be a bounded Lipschitz domain.

The objective function f consists of a data term and a regularizer. The data term is given in terms of the nonnegative \(L^{1}\) function \(s (x) = (s_{1} (x), \ldots, s_{l} (x)) \in \mathbb{R}^{l}\), and assigns to the choice \(u (x) = e^{i}\) the “penalty” \(s_{i} (x)\), in the sense that

$$ \int_{\varOmega} \bigl\langle u (x), s (x) \bigr\rangle \,\mathrm{d}x = \sum_{i = 1}^{l} \int_{\varOmega_{i}} s_{i} (x) \,\mathrm{d}x, $$
(5)

where \(\varOmega_{i} := u^{-1} (\{ e^{i} \}) = \{ x \in \varOmega \mid u (x) = e^{i} \}\) is the class region for label i, i.e., the set of points that are assigned the i-th label. The data term generally depends on the input data—such as color values of a recorded image, depth measurements, or other features—and promotes a good fit of the minimizer to the input data. While it is purely local, it is not subject to further restrictions such as continuity or convexity; therefore it covers many interesting applications such as segmentation, stitching, inpainting, multi-view 3D reconstruction and optical flow [23].
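As a concrete illustration (our own, not part of the original text), a simple color-based data term sets \(s_i(x)\) to the distance between the color observed at x and a prototype color for label i. A minimal numpy sketch, with hypothetical prototype colors:

```python
import numpy as np

def color_data_term(image, prototypes):
    """Data term s_i(x) = ||I(x) - c^i||_2: distance of each pixel's
    color to the (hypothetical) prototype color c^i of label i.

    image:      (H, W, 3) array of colors
    prototypes: (l, 3) array of per-label prototype colors
    returns:    (H, W, l) nonnegative penalty array
    """
    diff = image[:, :, None, :] - prototypes[None, None, :, :]
    return np.linalg.norm(diff, axis=-1)

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))        # stand-in for a recorded image
protos = rng.random((12, 3))         # l = 12 classes
s = color_data_term(img, protos)     # purely local, no convexity required
```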

1.2 Convex Regularizers

The regularizer is defined by the positively homogeneous, continuous and convex function \(\varPsi : \mathbb{R}^{d \times l} \rightarrow \mathbb{R}_{\geqslant 0}\) acting on the distributional derivative Du of u, and incorporates additional prior knowledge about the “typical” appearance of the desired output. For piecewise constant u, it can be seen that the definition in (1) amounts to a weighted penalization of the discontinuities of u:

$$ J (u) = \int_{J_{u}} \varPsi \bigl( \nu_{u} (x) \bigl( u^{+} (x) - u^{-} (x) \bigr)^{\top} \bigr) \,\mathrm{d}\mathcal{H}^{d - 1} (x), $$
(6)

where \(J_{u}\) is the jump set of u, i.e., the set of points where u has well-defined right-hand and left-hand limits \(u^{+}\) and \(u^{-}\) and (in an infinitesimal sense) jumps between the values \(u^{+} (x), u^{-} (x) \in \mathbb{R}^{l}\) across a hyperplane with normal \(\nu_{u} (x) \in \mathbb{R}^{d}\), \(\| \nu_{u} (x) \|_{2} = 1\). We refer to [2] for the precise definitions.

A particular case is to set \(\varPsi= (1 / \sqrt{2}) \| \cdot\|_{2}\), i.e., the scaled Frobenius norm. In this case J(u) is just the scaled total variation of u, and, since \(u^{+} (x)\) and \(u^{-} (x)\) assume values in \(\mathcal{E}\) and cannot be equal on the jump set \(J_{u}\), it holds that

$$ J (u) = \frac{1}{\sqrt{2}} \int_{J_{u}} \bigl\| u^{+} (x) - u^{-} (x) \bigr\|_{2} \,\mathrm{d}\mathcal{H}^{d - 1} (x) $$
(7)
$$ \hphantom{J (u)} = \mathcal{H}^{d - 1} (J_{u}). $$
(8)

Therefore, for \(\varPsi= (1 / \sqrt{2}) \| \cdot\|_{2}\) the regularizer just amounts to penalizing the total length of the interfaces between class regions as measured by the (d−1)-dimensional Hausdorff measure \(\mathcal {H}^{d - 1}\), which is known as uniform metric or Potts regularization.

A general regularizer was proposed in [19], based on [5]: given a metric distance \(d : \{1, \ldots, l\}^{2} \rightarrow \mathbb{R}_{\geqslant 0}\) (not to be confused with the ambient space dimension d), define

$$ \varPsi_{d} (z) := \sup_{v \in \mathcal{D}_{\operatorname{loc}}^{d}} \langle v, z \rangle, $$
(9)
$$ \mathcal{D}_{\operatorname{loc}}^{d} := \bigl\{ v = \bigl( v^{1}, \ldots, v^{l} \bigr) \in \mathbb{R}^{d \times l} \mid \bigl\| v^{i} - v^{j} \bigr\|_{2} \leqslant d (i, j) \ \text{for all } i, j \in \mathcal{I} \bigr\}. $$
(10)

It was then shown that

$$ \varPsi_{d} \bigl( \nu \bigl( e^{i} - e^{j} \bigr)^{\top} \bigr) = d (i, j) \quad \text{for all } i, j \in \mathcal{I} \ \text{and } \nu \in \mathbb{R}^{d}, \ \| \nu \|_{2} = 1, $$
(11)

therefore in view of (6) the corresponding regularizer is non-uniform: the boundary between the class regions Ω i and Ω j is penalized by its length, multiplied by the weight d(i,j) depending on the labels of both regions.
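As an illustration (the metric values here are our own choice, not from the original text), consider l = 3 classes with d(1,2) = 1 and d(1,3) = d(2,3) = 2, and write \(S_{ij}\) for the interface between \(\varOmega_i\) and \(\varOmega_j\). Then (6) together with (11) evaluates, for piecewise constant u, to

$$ J (u) = \sum_{1 \leqslant i < j \leqslant 3} d (i, j) \, \mathcal{H}^{d - 1} (S_{ij}) = \mathcal{H}^{d - 1} (S_{12}) + 2 \, \mathcal{H}^{d - 1} (S_{13}) + 2 \, \mathcal{H}^{d - 1} (S_{23}), $$

so interfaces involving label 3 are twice as expensive per unit length as the interface between labels 1 and 2.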

However, even for the comparably simple regularizer (7), the model (1) is a (spatially continuous) combinatorial problem due to the integral nature of the constraint set \(\mathcal{C}_{\mathcal{E}}\); therefore optimization is nontrivial. In the context of multiclass image partitioning, a first approach can be found in [20], where the problem was posed in a level-set formulation in terms of a labeling function ϕ: Ω → {1,…,l}, which is subsequently relaxed to take values in ℝ. Then ϕ is replaced by polynomials in ϕ, which coincide with the indicator functions \(u_{i}\) in the case where ϕ assumes integral values. However, the numerical approach involves several nonlinearities and requires solving a sequence of nontrivial subproblems.

The representation (1) suggests a more straightforward convex approach: replace \(\mathcal{E}\) by its convex hull, which is the unit simplex in l dimensions,

$$ \Delta_{l} := \Biggl\{ x \in \mathbb{R}^{l} \ \Big|\ x \geqslant 0, \ \sum_{i = 1}^{l} x_{i} = 1 \Biggr\} = \operatorname{conv} \mathcal{E}, $$
(12)

and solve the relaxed problem

$$ \min_{u \in \mathcal{C}} f (u), $$
(13)
$$ f (u) := \int_{\varOmega} \bigl\langle u (x), s (x) \bigr\rangle \,\mathrm{d}x + J (u), $$
(14)
$$ \mathcal{C} := \bigl\{ u \in \operatorname{BV} \bigl( \varOmega, \mathbb{R}^{l} \bigr) \mid u (x) \in \Delta_{l} \ \text{for } \mathcal{L}^{d} \text{-a.e. } x \in \varOmega \bigr\} = \operatorname{BV} (\varOmega, \Delta_{l}). $$
(15)

Sparked by a series of papers [5, 17, 30], there has recently been much interest in problems of this form: although generally nonsmooth, they are convex and can therefore be solved to global optimality, e.g., using primal-dual techniques. The approach has proven useful in a wide range of applications [10, 11, 14, 29].
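While the text does not prescribe a particular solver, primal-dual implementations of (13) typically require, in each iteration, the Euclidean projection of the iterate onto \(\Delta_l\) at every point of the (discretized) domain. A minimal numpy sketch of the standard sort-based simplex projection (our illustration, not part of the original formulation):

```python
import numpy as np

def project_simplex(v):
    """Row-wise Euclidean projection onto the unit simplex
    {x in R^l : x >= 0, sum(x) = 1} via the standard sort-based scheme."""
    n, l = v.shape
    u = np.sort(v, axis=1)[:, ::-1]                  # sorted descending
    css = np.cumsum(u, axis=1) - 1.0                 # cumulative sums minus 1
    idx = np.arange(1, l + 1)
    cond = u - css / idx > 0                         # feasibility condition
    rho = l - 1 - np.argmax(cond[:, ::-1], axis=1)   # last index where it holds
    theta = css[np.arange(n), rho] / (rho + 1)
    return np.maximum(v - theta[:, None], 0.0)

# Each row of the result lies in the simplex:
x = project_simplex(np.array([[2.0, 0.0, -1.0], [0.3, 0.3, 0.3]]))
```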

1.3 Finite-Dimensional vs. Continuous Approaches

Many of these applications have been tackled before in a finite-dimensional setting, where they can be formulated as combinatorial problems on a grid graph, and solved using combinatorial optimization methods such as α-expansion and related integer linear programming (ILP) methods [4, 15]. These methods have been shown to yield an integral labeling \(u' \in\mathcal{C}_{\mathcal{E}}\) with the a priori bound

$$ f \bigl( u' \bigr) \leqslant 2 \, \frac{\max_{i \neq j} d (i, j)}{\min_{i \neq j} d (i, j)} \, f \bigl( u^{\ast}_{\mathcal{E}} \bigr), $$
(16)

where \(u^{\ast}_{\mathcal{E}}\) is the (unknown) solution of the integral problem (1). They therefore make it possible to compute a suboptimal solution of the—originally NP-hard [4]—combinatorial problem with an upper bound on the objective. No such bound is yet available for methods based on the spatially continuous problem (13).

Despite these strong theoretical and practical results available for the finite-dimensional combinatorial energies, the function-based, infinite-dimensional formulation (1) has several unique advantages:

  • The energy (1) is truly isotropic, in the sense that for a proper choice of Ψ it is invariant under rotation of the coordinate system. Pursuing finite-dimensional “discretize-first” approaches generally introduces artifacts due to the inherent anisotropy, which can only be avoided by increasing the neighborhood size, thereby reducing sparsity and severely slowing down the graph cut-based methods.

    In contrast, properly discretizing the relaxed problem (13) and solving it as a convex problem with subsequent thresholding yields much better results without compromising the sparse structure (Figs. 1 and 2, [13]). This can be attributed to the fact that solving the discretized problem as a combinatorial problem in effect discards much of the information about the problem structure that is contained in the nonlinear terms of the discretized objective.

    Fig. 1 Segmentation of an image into 12 classes using a combinatorial method. Top left: Input image. Top right: Result obtained by solving a combinatorial discretized problem with 4-neighborhood. The bottom row shows detailed views of the marked parts of the image. The minimizer of the combinatorial problem exhibits blocky artifacts caused by the choice of discretization

    Fig. 2 Segmentation obtained by solving a finite-differences discretization of the relaxed spatially continuous problem. Left: Non-integral solution obtained as a minimizer of the discretized relaxed problem. Right: Integral labeling obtained by rounding the fractional labels in the solution of the relaxed problem to the nearest integral label. The rounded result is almost free of geometric artifacts

  • Present combinatorial optimization methods [4, 15] are inherently sequential and difficult to parallelize. On the other hand, parallelizing primal-dual methods for solving the relaxed problem (13) is straightforward, and GPU implementations have been shown to outperform state-of-the-art graph cut methods [30].

  • Analyzing the problem in a fully functional-analytic setting gives valuable insight into the problem structure, and is of theoretical interest in itself.

1.4 Optimality Bounds

However, one possible drawback of the spatially continuous approach is that the solution of the relaxed problem (13) may assume fractional values, i.e., values in \(\Delta_{l} \setminus \mathcal{E}\). Therefore, in applications that require a true partition of Ω, some rounding process is needed in order to generate an integral labeling \(\bar{u}^{\ast}\). This may increase the objective, and lead to a suboptimal solution of the original problem (1).

The regularizer Ψ d as defined in (9) is tight in the sense that it majorizes all other regularizers that can be written in integral form and satisfy (11). Therefore it is in a sense “optimal”, since it introduces as few fractional solutions as possible. In practice, this forces solutions of the relaxed problem to assume integral values in most points, and rounding is only required in a few small regions.

However, the rounding step may still increase the objective and generate suboptimal integral solutions. Therefore the question arises whether the approach allows to recover “good” integral solutions of the original problem (1).

In the following, we are concerned with the question of whether it is possible to obtain, using the convex relaxation (13), integral solutions with an upper bound on the objective. We focus on inequalities of the form

$$ f \bigl( \bar{u}^{\ast} \bigr) \leqslant C f \bigl( u_{\mathcal{E}}^{\ast} \bigr) $$
(17)

for some constant C⩾1, which provide an upper bound on the objective of the rounded integral solution \(\bar {u}^{\ast}\) with respect to the objective of the (unknown) optimal integral solution \(u_{\mathcal{E}}^{\ast}\) of (1). Note that if the relaxation is not exact, it is only possible to show (17) for some C strictly larger than one. The reverse inequality

$$ f \bigl(u_{\mathcal{E}}^{\ast}\bigr) \leqslant f \bigl( \bar{u}^{\ast}\bigr) $$
(18)

always holds since \(\bar{u}^{\ast} \in\mathcal{C}_{\mathcal{E}}\) and \(u_{\mathcal{E}}^{\ast}\) is an optimal integral solution. An alternative interpretation of (17) is

$$ \frac{f ( \bar{u}^{\ast}) - f (u_{\mathcal{E}}^{\ast})}{f (u_{\mathcal{E}}^{\ast})} \leqslant C-1, $$
(19)

which provides a bound on the relative gap to the optimal objective of the combinatorial problem.

For many convex problems one can find a dual representation of the problem in terms of a dual objective f D and a dual feasible set \(\mathcal{D}\) such that

$$ \min_{u \in\mathcal{C}} f(u) = \max_{v \in\mathcal{D}} f_D(v), $$
(20)

see [25] for the general case and [18, 19] for results on the specific problem (13).

If such a representation exists, C can be obtained a posteriori by actually computing (or approximating) \(\bar{u}^{\ast}\) and a dual feasible point: Assume that a feasible primal-dual pair \(( u, v ) \in\mathcal{C} \times\mathcal{D}\) is known, where u approximates \(u^{\ast}\), and assume that some integral feasible \(\bar{u} \in \mathcal{C}_{\mathcal{E}}\) has been obtained from u by a rounding process. Then the pair \(( \bar{u}, v )\) is feasible as well since \(\mathcal{C}_{\mathcal{E}} \subset\mathcal{C}\), and we obtain an a posteriori optimality bound with respect to the optimal integral solution \(u^{\ast}_{\mathcal{E}}\):

$$ \frac{f ( \bar{u} ) - f ( u_{\mathcal{E}}^{\ast} )}{f ( u_{\mathcal{E}}^{\ast} )} \leqslant \frac{f ( \bar{u} ) - f_{D} (v)}{f_{D} (v)} =: \delta, $$
(21)

which amounts to setting C′ := δ + 1 in (19). However, this requires that the primal and dual objectives f and \(f_D\) can be evaluated accurately, and it requires computing a minimizer of the problem for the specific input data, which is generally difficult, especially in the infinite-dimensional formulation.
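Once the two objective values are available, evaluating the a posteriori bound is immediate. A trivial sketch (hypothetical numbers, assuming \(f_D(v) > 0\)):

```python
def a_posteriori_gap(f_rounded, f_dual):
    """Relative a posteriori bound (21): given f(u_bar) for a rounded
    integral labeling and a dual objective value f_D(v), return delta
    with (f(u_bar) - f(u*_E)) / f(u*_E) <= delta, i.e. C' = 1 + delta."""
    assert f_dual > 0
    return (f_rounded - f_dual) / f_dual

delta = a_posteriori_gap(105.0, 100.0)   # hypothetical values -> delta = 0.05
```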

In contrast, true a priori bounds do not require knowledge of a solution and apply uniformly to all problems of a class, irrespective of the particular input. When considering rounding methods, one generally has to discriminate between

  • deterministic vs. probabilistic methods, and

  • spatially discrete (finite-dimensional) vs. spatially continuous (infinite-dimensional) methods.

To our knowledge, most a priori approximation results hold only in the finite-dimensional setting, and are usually proven using graph-based pairwise formulations; see [28] for an overview. In contrast, we assume an “optimize first” perspective for the reasons outlined in the introduction. Unfortunately, the proofs for the finite-dimensional results often rely on pointwise arguments that cannot directly be transferred to the continuous setting. Deriving similar results for continuous problems therefore requires considerable additional work.

1.5 Contribution and Main Results

In this work we prove that using the regularizer (9), the a priori bound (16) can be carried over to the spatially continuous setting. Preliminary versions of these results with excerpts of the proofs have been announced in conference proceedings [18]. We extend these results to provide the exact bound (16), and supply the full proofs.

As the main result, we show that it is possible to construct a rounding method parametrized by γ ∈ Γ, where Γ is an appropriate parameter space:

$$ R_{\gamma} : \mathcal{C} \rightarrow \mathcal{C}_{\mathcal{E}}, $$
(22)
$$ u \mapsto \bar{u}_{\gamma} := R_{\gamma} (u), $$
(23)

such that for a suitable probability distribution on Γ, the following theorem holds for the expectation \(\mathbb{E}f ( \bar{u}) := \mathbb{E}_{\gamma} f ( \bar{u}_{\gamma})\):

Theorem 1

Let \(u \in\mathcal{C}\), \(s \in L^{1} (\varOmega)^{l}\), s ⩾ 0, and let \(\varPsi : \mathbb{R}^{d \times l} \rightarrow \mathbb{R}_{\geqslant 0}\) be positively homogeneous, convex and continuous. Assume there exists a lower bound \(\lambda_{l} > 0\) such that, for \(z = (z^{1}, \ldots, z^{l})\),

$$ \varPsi (z) \geqslant \frac{\lambda_{l}}{2} \sum_{i = 1}^{l} \bigl\| z^{i} \bigr\|_{2} \quad \text{whenever } z e = 0. $$
(24)

Moreover, assume there exists an upper bound \(\lambda_{u} < \infty\) such that, for every \(\nu \in \mathbb{R}^{d}\) satisfying \(\| \nu \|_{2} = 1\),

$$ \varPsi \bigl( \nu \bigl( e^{i} - e^{j} \bigr)^{\top} \bigr) \leqslant \lambda_{u} \quad \text{for all } i, j \in \mathcal{I}. $$
(25)

Then Algorithm  1 generates an integral labeling \(\bar{u} \in\mathcal{C}_{\mathcal{E}}\) almost surely, and

$$ \mathbb{E} f ( \bar{u} ) \leqslant 2 \frac{\lambda_{u}}{\lambda_{l}} f (u). $$
(26)

Algorithm 1 Continuous Probabilistic Rounding

We refer to Sect. 3.1 for a description of the individual steps of the algorithm. Note that always \(\lambda_{u} \geqslant \lambda_{l}\), since (25) and (24) imply

$$ \lambda_{u} \geqslant \varPsi \bigl( \nu \bigl( e^{i} - e^{j} \bigr)^{\top} \bigr) \geqslant \frac{\lambda_{l}}{2} \bigl( \| \nu \|_{2} + \| {-\nu} \|_{2} \bigr) = \lambda_{l} $$
(27)

for every ν with \(\| \nu \|_{2} = 1\).

The proof of Theorem 1 (Sect. 4) is based on the work of Kleinberg and Tardos [12], which is set in an LP relaxation framework. However, their results are restricted in that they assume a graph-based representation and rely extensively on the finite dimensionality. In contrast, our results hold in the continuous setting without assuming a particular problem discretization.

Theorem 1 guarantees that—in a probabilistic sense—the rounding process may only increase the energy in a controlled way, with an upper bound depending on Ψ. An immediate consequence is

Corollary 1

Under the conditions of Theorem 1, if \(u^{\ast}\) minimizes f over \(\mathcal{C}\), \(u_{\mathcal{E}}^{\ast}\) minimizes f over \(\mathcal{C}_{\mathcal{E}}\), and \(\bar{u}^{\ast}\) denotes the output of Algorithm 1 applied to \(u^{\ast}\), then

$$ \mathbb{E} f \bigl( \bar{u}^{\ast} \bigr) \leqslant 2 \frac{\lambda_{u}}{\lambda_{l}} f \bigl( u_{\mathcal{E}}^{\ast} \bigr). $$
(28)

Therefore the proposed approach allows us to recover, from the solution \(u^{\ast}\) of the convex relaxed problem (13), an approximate integral solution \(\bar{u}^{\ast}\) of the nonconvex original problem (1) with an upper bound on the objective.

In particular, for the tight relaxation of the regularizer as in (9), we obtain

$$ \mathbb{E} f \bigl( \bar{u}^{\ast} \bigr) \leqslant 2 \, \frac{\max_{i \neq j} d (i, j)}{\min_{i \neq j} d (i, j)} \, f \bigl( u_{\mathcal{E}}^{\ast} \bigr) $$
(29)

(cf. Proposition 13), which is exactly the same bound as has been achieved for the combinatorial α-expansion method (16).
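The factor in (29) is easily evaluated for a given metric. A small sketch (the example metrics are our own choice):

```python
import numpy as np

def approximation_factor(d):
    """Factor 2 * max_{i!=j} d(i,j) / min_{i!=j} d(i,j) from (29),
    for a metric given as an l x l matrix with zero diagonal."""
    off = d[~np.eye(d.shape[0], dtype=bool)]
    return 2.0 * off.max() / off.min()

d_potts = 1.0 - np.eye(4)                       # uniform (Potts) metric
print(approximation_factor(d_potts))            # -> 2.0

labels = np.arange(4)
d_tl = np.minimum(np.abs(labels[:, None] - labels[None, :]), 2.0)
print(approximation_factor(d_tl))               # truncated linear -> 4.0
```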

To our knowledge, this is the first bound available for the fully spatially convex relaxed problem (13). Related is the work of Olsson et al. [21, 22], where the authors consider an infinite-dimensional analogue to the α-expansion method known as continuous binary fusion [27], and claim that a bound similar to (16) holds for the corresponding fixed points when using the separable regularizer

(30)

for some \(A \in \mathbb{R}^{d \times d}\), which implements an anisotropic variant of the uniform metric. However, a rigorous proof in the BV framework was not given.

In [3], the authors propose to solve the problem (1) by considering the dual problem to (13), consisting of l coupled maximum-flow problems, which are solved using a log-sum-exp smoothing technique and gradient descent. If the dual solution allows one to unambiguously recover an integral primal solution, the latter is necessarily the unique minimizer of f, and therefore a global integral minimizer of the combinatorial problem (1). This provides an a posteriori bound, which applies if a dual solution can be computed. While useful in practice as a certificate for global optimality, in the spatially continuous setting it requires explicit knowledge of a dual solution, which is rarely available since it depends on the regularizer Ψ as well as the input data s.

In comparison, the a priori bound (28) holds uniformly over all problem instances, does not require knowledge of any primal or dual solutions, and also covers non-uniform regularizers.

2 A Probabilistic View of the Coarea Formula

2.1 The Two-Class Case

As a motivation for the following sections, we first provide a probabilistic interpretation of a tool often used in geometric measure theory, the coarea formula (cf. [2]). Given a scalar function \(u' \in \operatorname{BV} (\varOmega,[0,1])\), the coarea formula states that its total variation can be computed by summing the boundary lengths of its superlevel sets:

$$ \operatorname{TV} \bigl( u' \bigr) = \int_{0}^{1} \operatorname{TV} \bigl( 1_{ \{ u' > \alpha \} } \bigr) \,\mathrm{d}\alpha. $$
(31)

Here \(1_{A}\) denotes the characteristic function of a set A, i.e., \(1_{A} (x) = 1\) iff \(x \in A\) and \(1_{A} (x) = 0\) otherwise. The coarea formula provides a connection between problem (1) and the relaxation (13) in the two-class case, where \(\mathcal{E} = \{e^{1}, e^{2} \}\), and \(u \in\mathcal{C}_{\mathcal{E}}\) implies \(u_{1} = 1 - u_{2}\): as noted in [16],

$$ \operatorname{TV} (u) = \sqrt{2} \operatorname{TV} (u_{1}), $$
(32)

therefore the coarea formula (31) can be rewritten as

$$ \frac{1}{\sqrt{2}} \operatorname{TV} (u) = \operatorname{TV} (u_{1}) $$
(33)
$$ \hphantom{\frac{1}{\sqrt{2}} \operatorname{TV} (u)} = \int_{0}^{1} \operatorname{TV} \bigl( 1_{ \{ u_{1} > \alpha \} } \bigr) \,\mathrm{d}\alpha $$
(34)
$$ \hphantom{\frac{1}{\sqrt{2}} \operatorname{TV} (u)} = \int_{0}^{1} \frac{1}{\sqrt{2}} \operatorname{TV} ( \bar{u}_{\alpha} ) \,\mathrm{d}\alpha, \qquad \bar{u}_{\alpha} := \bigl( 1_{ \{ u_{1} > \alpha \} }, 1 - 1_{ \{ u_{1} > \alpha \} } \bigr), $$
(35)
$$ \text{i.e.} \quad \operatorname{TV} (u) = \int_{0}^{1} \operatorname{TV} ( \bar{u}_{\alpha} ) \,\mathrm{d}\alpha. $$
(36)

Consequently, the total variation of u can be expressed as the mean over the total variations of a set of integral labelings \(\{ \bar{u}_{\alpha} \in\mathcal{C}_{\mathcal{E}} \mid \alpha\in[0, 1]\}\), obtained by rounding u at different thresholds α. We now adopt a probabilistic view of (36). We regard the mapping

$$ R : \mathcal{C} \times [0, 1] \rightarrow \mathcal{C}_{\mathcal{E}}, \qquad R (u, \alpha) := \bar{u}_{\alpha} $$
(37)

as a parametrized deterministic rounding algorithm that depends on u and on an additional parameter α. From this we obtain a probabilistic (randomized) rounding algorithm by assuming α to be a uniformly distributed random variable. With these definitions the coarea formula (36) can be written as

$$ \operatorname{TV} (u) = \mathbb{E}_{\alpha} \operatorname{TV} ( \bar{u}_{\alpha} ). $$
(38)

This states that applying the probabilistic rounding to (arbitrary, but fixed) u does—in a probabilistic sense, i.e., in the mean—not change the objective. It can be shown that this property extends to the full functional f in (13): in the two-class case, the “coarea-like” property

$$ f (u) = \mathbb{E}_{\alpha} f ( \bar{u}_{\alpha} ) $$
(39)

holds. Functions with property (39) are also known as levelable functions [8, 9] or discrete total variations [6] and have been studied in [26]. A well-known implication is that if \(u = u^{\ast}\), i.e., u minimizes the relaxed problem (13), then in the two-class case almost every \(\bar{u}^{\ast} = \bar{u}^{\ast}_{\alpha}\) is an integral minimizer of the original problem (1), i.e., the optimality bound (17) holds with C = 1 [7].
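The coarea property (38) can be checked numerically on a discretized domain. The sketch below (our illustration) uses the anisotropic \(\ell^1\) discretization of the total variation, for which the discrete coarea formula holds exactly, and estimates the expectation over α by Monte Carlo sampling:

```python
import numpy as np

def tv_aniso(u):
    """Anisotropic (l1) discrete total variation of a 2D scalar field;
    this discretization satisfies the discrete coarea formula exactly."""
    return np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()

rng = np.random.default_rng(1)
u = rng.random((32, 32))                  # scalar field with values in [0, 1]
alphas = rng.random(5000)                 # uniformly distributed thresholds
mc = np.mean([tv_aniso((u > a).astype(float)) for a in alphas])
print(tv_aniso(u), mc)                    # agree up to Monte Carlo error
```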

2.2 The Multi-Class Case and Generalized Coarea Formulas

Generalizing these observations to more than two labels hinges on a property similar to (39) that holds for vector-valued u. In a general setting, the question is whether there exist

  • a probability space (Γ,μ), and

  • a parametrized rounding method, i.e., for μ-almost every γ ∈ Γ:

    $$ R_{\gamma} : \mathcal{C} \rightarrow \mathcal{C}_{\mathcal{E}}, $$
    (40)
    $$ u \mapsto \bar{u}_{\gamma} := R_{\gamma} (u), $$
    (41)

    satisfying \(R_{\gamma} (u') = u'\) for all \(u' \in \mathcal{C}_{\mathcal{E}}\),

such that a “multiclass coarea-like property” (or generalized coarea formula)

$$ f (u) = \int_{\varGamma} f \bigl( R_{\gamma} (u) \bigr) \,\mathrm{d}\mu (\gamma) $$
(42)

holds. The equivalent probabilistic interpretation is

$$ f (u) = \mathbb{E}_{\gamma} f ( \bar{u}_{\gamma} ). $$
(43)

For l = 2 and \(\varPsi = (1/\sqrt{2}) \| \cdot \|_{2}\), (38) shows that (43) holds with γ = α, Γ = [0,1], \(\mu=\mathcal{L}^{1}\), and \(R : \mathcal{C} \times\varGamma \rightarrow \mathcal{C}_{\mathcal{E}}\) as defined in (37). However, property (38) is intrinsically restricted to the two-class case and the \(\operatorname{TV}\) regularizer.

In the multiclass case, the difficulty lies in providing a suitable combination of a probability space (Γ,μ) and a parametrized rounding step \((u, \gamma) \mapsto\bar{u}_{\gamma}\). Unfortunately, obtaining a relation such as (38) for the full functional (1) is unlikely, as it would mean that solutions to the (after discretization) NP-hard problem (1) could be obtained by solving the convex relaxation (13) and subsequent rounding, which can be achieved in polynomial time.

Therefore we restrict ourselves to an approximate variant of the generalized coarea formula:

$$ \mathbb{E}_{\gamma} f ( \bar{u}_{\gamma} ) \leqslant C f (u) \quad \text{for some } C \geqslant 1. $$
(44)

While (44) is not sufficient to provide a bound on \(f ( \bar{u}_{\gamma})\) for particular γ, it permits a probabilistic bound: for any minimizer u of the relaxed problem (13), Eq. (44) implies

$$ \mathbb{E}_{\gamma} f \bigl( \bar{u}^{\ast}_{\gamma} \bigr) \leqslant C f \bigl( u^{\ast} \bigr) \leqslant C f \bigl( u_{\mathcal{E}}^{\ast} \bigr), $$
(45)

and thus the ratio between the objective of the rounded relaxed solution and the optimal integral solution is bounded—in a probabilistic sense—by the constant C.

In the following sections we construct a suitable parametrized rounding method and probability space in order to obtain an approximate generalized coarea formula of the form (44).

3 Probabilistic Rounding for Multiclass Image Partitions

3.1 Approach

We consider the probabilistic rounding approach based on [12] as defined in Algorithm 1.

The algorithm proceeds in a number of phases. At each iteration, a label and a threshold

$$ \gamma^{k} = \bigl( i^{k}, \alpha^{k} \bigr) \in \varGamma' := \mathcal{I} \times [0, 1] $$

are randomly chosen (step 3), and label \(i^{k}\) is assigned to all yet unassigned points x where \(u_{i^{k}}^{k - 1} (x) > \alpha^{k}\) holds (step 5). In contrast to the two-class case considered above, the randomness is provided by a sequence \((\gamma^{k})\) of uniformly distributed random variables, i.e., \(\varGamma = (\varGamma')^{\mathbb{N}}\).

After iteration k, all points in the set \(U^{k} \subseteq \varOmega\) are still unassigned, while all points in \(\varOmega \setminus U^{k}\) have been assigned an (integral) label in iteration k or in a previous iteration. Iteration k+1 potentially modifies points only in the set \(U^{k}\). The variable \(c_{j}^{k}\) stores the lowest threshold α chosen for label j up to and including iteration k, and is required only for the proofs.
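For intuition, a minimal discretized sketch of Algorithm 1 (our illustration; Ω is replaced by a finite set of points, whereas the analysis below is carried out in the continuous setting):

```python
import numpy as np

def probabilistic_rounding(u, rng, max_iter=10000):
    """Discretized sketch of Algorithm 1 (Continuous Probabilistic
    Rounding).  u: (n, l) array of relaxed labels, rows in the simplex.
    Each iteration draws gamma^k = (i^k, alpha^k) uniformly from
    {1,...,l} x [0,1] and assigns label i^k to every yet unassigned
    point x with u_{i^k}(x) > alpha^k."""
    n, l = u.shape
    label = np.full(n, -1)                  # -1 marks the unassigned set U^k
    for _ in range(max_iter):
        i = rng.integers(l)                 # label i^k
        alpha = rng.random()                # threshold alpha^k
        newly = (label < 0) & (u[:, i] > alpha)
        label[newly] = i
        if (label >= 0).all():              # |U^k| = 0: sequence stationary
            break
    return label

rng = np.random.default_rng(42)
u = rng.dirichlet(np.ones(4), size=1000)    # random relaxed labeling, l = 4
bar_u = probabilistic_rounding(u, rng)
```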

For any \(u \in L^{1} (\varOmega)^{l}\) and fixed γ, the sequences \((u^{k})\), \((M^{k})\) and \((U^{k})\) are unique up to \(\mathcal{L}^{d}\)-negligible sets, and therefore the sequence \((u^{k})\) is well-defined when viewed as elements of \(L^{1}\).

In an actual implementation, the algorithm could be terminated as soon as all points in Ω have been assigned a label, i.e., \(|U^{k}| := \mathcal{L}^{d} (U^{k})= 0\). However, in our framework used for analysis the algorithm never terminates explicitly. Instead, for fixed input u we regard the algorithm as a mapping between sequences of parameters (or instances of random variables) \(\gamma = (\gamma^{k}) \in \varGamma\) and sequences of states \((u_{\gamma}^{k})\), \((U_{\gamma}^{k})\) and \((c_{\gamma}^{k})\). We drop the subscript γ if it does not create ambiguities. The elements of the sequence \((\gamma^{k})\) are independently uniformly distributed, therefore choosing γ can be seen as sampling from the product space.

In order to define the parametrized rounding step \((u, \gamma) \mapsto \bar{u}_{\gamma}\), we observe that once \(|U^{k'}_{\gamma}| = 0\) occurs for some k′∈ℕ, the sequence \((u^{k}_{\gamma})\) becomes stationary at \(u_{\gamma}^{k'}\). In this case the output of the algorithm is defined as \(\bar{u}_{\gamma} :=u_{\gamma}^{k'}\):

Definition 1

Let \(u \in\operatorname{BV} (\varOmega)^{l}\) and \(f : \operatorname{BV} (\varOmega)^{l} \rightarrow\mathbb{R}\). For arbitrary, fixed γΓ, let \((u^{k}_{\gamma})\) be the sequence generated by Algorithm 1 and define \(\bar{u}_{\gamma} : \varOmega\rightarrow\bar{\mathbb{R}}^{l}\) as

$$ \bar{u}_{\gamma} (x) := \begin{cases} \lim_{k \rightarrow \infty} u_{\gamma}^{k} (x), & \text{if the limit exists}, \\ +\infty, & \text{otherwise}. \end{cases} $$
(46)

We extend f to all functions \(u':\varOmega\to\bar{\mathbb{R}}^{l}\) by setting f(u′):=+∞ if \(u' \notin\operatorname{BV}(\varOmega ,\Delta_{l})\) and consider the induced mapping \(f ( \bar{u}_{(\cdot )}): \varGamma\to{\mathbb{R}\cup\{+\infty\}}\), \(\gamma\in\varGamma \mapsto f ( \bar{u}_{\gamma})\), i.e.,

(47)

We denote by \(f ( \bar{u})\) the random variable induced by assuming γ to be uniformly distributed on Γ, and by μ the uniform probability measure on Γ.

In the following we often use ℙ=μ where it does not create ambiguities. Measures are generally understood to be extended to the completion of the underlying σ-algebra, i.e., all subsets of zero sets are measurable.

As indicated above, \(f ( \bar{u}_{\gamma})\) is well-defined—indeed, if \(|U^{k'}_{\gamma}| = 0\) for some (γ,k′) then \(u_{\gamma}^{k'} = u_{\gamma}^{k''}\) for all k″⩾k′. Instead of focusing on local properties of the random sequence \((u_{\gamma}^{k})\) as in the proofs for the finite-dimensional case, we derive our results directly for the sequence \((f (u_{\gamma}^{k}))\). In particular, we show that the expectation of \(f ( \bar{u})\) over all sequences γ can be bounded according to

$$ \mathbb{E} f ( \bar{u} ) \leqslant C f (u) $$
(48)

for some C⩾1, cf. (44). Consequently, the rounding process may only increase the average objective in a controlled way.

3.2 Termination Properties

Theoretically, the algorithm may produce a sequence \((u_{\gamma}^{k})\) that does not become stationary, or becomes stationary with a solution that is not an element of \(\operatorname{BV} (\varOmega)^{l}\). In Theorem 2 below we show that this happens only with zero probability, i.e., almost surely Algorithm 1 generates (in a finite number of iterations) an integral labeling function \(\bar{u}_{\gamma} \in \mathcal{C}_{\mathcal{E}}\). The following two propositions are required for the proof. We use the definition e:=(1,…,1).

Proposition 1

For the sequence \((c^{k})\) generated by Algorithm 1,

$$ \mathbb{P} \bigl( e^{\top} c^{k} \geqslant 1 \bigr) \leqslant \sum_{s = 1}^{l} (-1)^{s + 1} \binom{l}{s} \biggl( 1 - \frac{s}{l^{2}} \biggr)^{k} $$
(49)

holds. In particular,

$$ \lim_{k \rightarrow \infty} \mathbb{P} \bigl( e^{\top} c^{k} \geqslant 1 \bigr) = 0. $$
(50)

Proof

Denote by \(n^{k}_{j} \in\mathbb{N}_{0}\) the number of k′ ∈ {1,…,k} such that \(i^{k'} = j\), i.e., the number of times label j was selected up to and including the k-th step. Then

(51)

i.e., the probability of a specific instance is

(52)

Therefore,

(53)
(54)

Since \(c_{1}^{k}, \ldots, c_{l}^{k} < \frac{1}{l}\) is a sufficient condition for \(e^{\top} c^{k} < 1\), we may bound the probability according to

(55)

We now consider the distributions of the components \(c^{k}_{j}\) of \(c^{k}\) conditioned on the vector \((n_{1}^{k}, \ldots, n_{l}^{k})\). Given \(n_{j}^{k}\), the probability of \(\{c_{j}^{k} \geqslant t\}\) is the probability that in each of the \(n_{j}^{k}\) steps where label j was selected, the threshold α was randomly chosen to be at least as large as t. For 0 < t < 1, we conclude

(56)
(57)
(58)

The above formulation also covers the case \(n_{j}^{k} = 0\) (note that we assumed 0<t<1). For fixed k the distributions of the \(c_{j}^{k}\) are independent when conditioned on \((n_{1}^{k}, \ldots, n_{l}^{k})\). Therefore we obtain from (55) and (58)

(59)
(60)

Expanding the product and swapping the summation order, we derive

(61)
(62)
(63)

Using the multinomial summation formula, we conclude

(64)

which proves (49). Note that in (64) the \(n_{j}^{k}\) no longer occur explicitly. To show the second assertion (50), we use the fact that, for any p ≠ 0, \(q^{p}\) can be bounded by \(0 < q^{p} < 1\). Therefore

(65)
(66)
(67)

which proves (50). □
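The decay asserted by Proposition 1 can be observed empirically. The following Monte Carlo sketch (our illustration) simulates the label/threshold draws of Algorithm 1 and tracks the lowest threshold \(c_j^k\) drawn for each label, assuming the initialization \(c^0 = e\):

```python
import numpy as np

def prob_not_covered(l, k, trials=20000, seed=0):
    """Monte Carlo estimate of P(e^T c^k >= 1): simulate k label/threshold
    draws and check whether the per-label minimal thresholds c^k still
    sum to at least 1."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        c = np.ones(l)                       # assumed initialization c^0 = e
        labels = rng.integers(l, size=k)
        alphas = rng.random(k)
        np.minimum.at(c, labels, alphas)     # c_j^k: lowest alpha for label j
        hits += c.sum() >= 1.0
    return hits / trials

for k in (10, 40, 160):
    print(k, prob_not_covered(4, k))         # decays towards zero, cf. (50)
```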

We now show that Algorithm 1 generates a sequence in \(\operatorname{BV} (\varOmega)^{l}\) almost surely. The perimeter of a set A is defined as the total variation \(\operatorname{Per} (A) :=\operatorname{TV} (1_{A})\) of its characteristic function in Ω.

Proposition 2

For the sequences (u k), (U k) generated by Algorithm  1, define

(68)

Then

(69)

If \(\operatorname{Per} (U^{k}_{\gamma}) < \infty\) for all k, then \(u^{k}_{\gamma} \in \operatorname{BV} (\varOmega)^{l}\) for all k as well. Moreover,

(70)

i.e., the algorithm almost surely generates a sequence of \(\operatorname{BV}\) functions (u k) and a sequence of sets of finite perimeter (U k).

Proof

We first show that if \(\operatorname{Per} (U^{k'}) < \infty\) for all k′ ⩽ k, then \(u^{k'} \in\operatorname{BV} (\varOmega)^{l}\) for all k′ ⩽ k as well. For k = 0, the assertion holds since \(u^{0} = u \in\operatorname{BV} (\varOmega)^{l}\) by assumption. For k ⩾ 1,

$$ u^{k} = 1_{M^{k}} e^{i^{k}} + \bigl( 1 - 1_{M^{k}} \bigr) u^{k - 1}. $$
(71)

Since \(M^{k} = U^{k - 1} \cap (\varOmega \setminus U^{k})\), and \(U^{k}, U^{k - 1}\) are assumed to have finite perimeter, \(M^{k}\) also has finite perimeter. Applying [2, Theorem 3.84] together with the boundedness of \(u^{k - 1}\) and \(u^{k - 1} \in\operatorname{BV} (\varOmega)^{l}\) and an induction argument then provides \(u^{k} \in\operatorname{BV} (\varOmega)^{l}\).

We now denote

(72)

and the event that the first set with non-finite perimeter is encountered at step k∈ℕ0 by

(73)

Note that \(U^{0} = \varOmega\), therefore \(\operatorname{Per} (U^{0}) = \operatorname{TV} (1_{U^{0}}) = 0 < \infty\) and \(\mathbb{P} (B_{0}) = 0\). For k ⩾ 1, we use the basic inequality \(\mathbb{P} (E \cap F) \leqslant \mathbb{P} (E \mid F)\) and obtain

(74)

By the argument from the beginning of the proof, we know that \(u^{k - 1} \in \operatorname{BV} (\varOmega)^{l}\) under the condition on the perimeter \(\operatorname{Per} (U^{k'})\), therefore from [2, Theorem 3.40] we conclude that \(\operatorname{Per} (\{ x \in\varOmega\mid u_{i^{k}}^{k - 1} (x) \leqslant\alpha^{k} \})\) is finite for \(\mathcal{L}^{1}\)-a.e. α k and all i k, i.e., for fixed i k the set

(75)

is contained in an \(\mathcal{L}^{1}\)-zero set. As the sets of finite perimeter are closed under finite intersection, and since the \(\alpha^{k}\) are drawn from a uniform distribution, this implies that

(76)

Together with (74) we arrive at

(77)

which implies the assertion,

(78)

Equation (70) follows immediately.

Measurability of the sets involved follows from a similar recursive argument starting from (75) and using the fact that all sets or their complements are contained in a zero set, and are therefore measurable with respect to their respective (complete) probability measures. □

Using these propositions, we now formulate the main result of this section: Algorithm 1 almost surely generates an integral labeling that is of bounded variation.

Theorem 2

Let \(u \in\operatorname{BV} (\varOmega)^{l}\) and \(f ( \bar{u} )\) as in Definition 1. Then

$$ \mathbb{P} \bigl( f ( \bar{u} ) < \infty \bigr) = 1. $$
(79)

Proof

The first part is to show that (u k) becomes stationary almost surely, i.e.,

$$ \mathbb{P} \bigl( \exists k \in \mathbb{N} : \bigl| U^{k} \bigr| = 0 \bigr) = 1. $$
(80)

Assume there exists k such that \(e^{\top} c^{k} < 1\), and assume further that \(|U^{k}| > 0\), i.e., \(U^{k}\) contains a non-negligible subset where \(u_{j} (x) \leqslant c^{k}_{j}\) for all labels j. But then \(e^{\top} u (x) \leqslant e^{\top} c^{k} < 1\) on that set, which is a contradiction to \(u (x) \in \Delta_{l}\) almost everywhere. Therefore \(U^{k}\) must be a zero set. From this observation and Proposition 1 we conclude, for all k′ ∈ ℕ,

$$ \mathbb{P} \bigl( \bigl| U^{k'} \bigr| > 0 \bigr) \leqslant \mathbb{P} \bigl( e^{\top} c^{k'} \geqslant 1 \bigr) \rightarrow 0 \quad \bigl( k' \rightarrow \infty \bigr), $$
(81)

which proves (80).

In order to show that \(f ( \bar{u}_{\gamma}) < \infty\) with probability 1, it remains to show that the result is almost surely in \(\operatorname{BV} (\varOmega)^{l}\). A sufficient condition is that almost surely all iterates u k are elements of \(\operatorname{BV} (\varOmega)^{l}\), i.e.,

$$ \mathbb{P} \bigl( u^{k} \in \operatorname{BV} (\varOmega)^{l} \ \text{for all } k \in \mathbb{N} \bigr) = 1. $$
(82)

This is shown by Proposition 2. Then

(83)
(84)
(85)
(86)

Thus \(\mathbb{P} (f ( \bar{u}) < \infty) = 1\), which proves the assertion. □

4 Proof of the Main Theorem

In order to show the bound (48) and Theorem 1, we first need several technical propositions regarding the composition of two \(\operatorname{BV}\) functions along a set of finite perimeter. We denote by \((E)^{1}\) and \((E)^{0}\) the measure-theoretic interior and exterior of a set E, see [2],

$$ (E)^{1} := \biggl\{ x \in \mathbb{R}^{d} \ \Big|\ \lim_{\rho \searrow 0} \frac{| E \cap \mathcal{B}_{\rho} (x) |}{| \mathcal{B}_{\rho} (x) |} = 1 \biggr\}, \qquad (E)^{0} := \biggl\{ x \in \mathbb{R}^{d} \ \Big|\ \lim_{\rho \searrow 0} \frac{| E \cap \mathcal{B}_{\rho} (x) |}{| \mathcal{B}_{\rho} (x) |} = 0 \biggr\}. $$
(87)

Here \(\mathcal{B}_{\rho} (x)\) denotes the ball with radius ρ centered at x, and \(|A| :=\mathcal{L}^{d} (A)\) the Lebesgue content of a set \(A \subseteq \mathbb{R}^{d}\).

Proposition 3

Let Ψ be positively homogeneous and convex, and satisfy the upper-boundedness condition (25). Then

(88)

Moreover, there exists a constant C<∞ such that

(89)
(90)

Proof

See the Appendix. □

Proposition 4

Let \(E, F \subseteq \varOmega\) be \(\mathcal{L}^{d}\)-measurable sets. Then

(91)

Proof

See the Appendix. □

In the following proposition we denote by \(u^{+}_{\mathcal{F}E}\) and \(v^{-}_{\mathcal{F}E}\) the one-sided approximate limits of u and v on the reduced boundary \(\mathcal{F}E\) (traces in the sense of [2, Theorem 3.77]), and by ν E the generalized inner normal of the set E [2, Definition 3.54].

The measure Ψ(Du) is defined as (cf. [2, (2.26), Proposition 3.23])

$$ \varPsi (D u) := \varPsi \biggl( \frac{\mathrm{d} D u}{\mathrm{d} | D u |} \biggr) | D u |. $$
(92)

Proposition 5

Let \(u, v \in\operatorname{BV} (\varOmega, \Delta_{l})\) and \(E \subseteq \varOmega\) such that \(\operatorname{Per} (E) < \infty\). Define

$$ w := 1_{E} u + ( 1 - 1_{E} ) v. $$
(93)

Then \(w \in\operatorname{BV} (\varOmega, \Delta_{l})\), and

$$ D w = D u \llcorner (E)^{1} + D v \llcorner (E)^{0} + \bigl( u^{+}_{\mathcal{F}E} - v^{-}_{\mathcal{F}E} \bigr) \otimes \nu_{E} \, \mathcal{H}^{d - 1} \llcorner ( \mathcal{F}E \cap \varOmega ). $$
(94)

Moreover, for continuous, convex and positively homogeneous Ψ satisfying the upper-boundedness condition (25) and any Borel set \(A \subseteq \varOmega\),

(95)

Proof

See the Appendix. □

Proposition 6

Let \(u, v \in\operatorname{BV} (\varOmega, \Delta_{l})\), \(E \subseteq \varOmega\) such that \(\operatorname{Per} (E) < \infty\), and

$$ u = v \quad \mathcal{L}^{d} \text{-a.e. on } E. $$
(96)

Then \((D u) \llcorner (E)^{1} = (D v) \llcorner (E)^{1}\), and \(\varPsi (D u) \llcorner (E)^{1} = \varPsi (D v) \llcorner (E)^{1}\). In particular,

$$ \int_{(E)^{1}} \varPsi (D u) = \int_{(E)^{1}} \varPsi (D v). $$
(97)

The result also holds when \((E)^{1}\) is replaced by \((E)^{0}\). Moreover, the condition (96) is equivalent to

(98)

Proof

See the Appendix. □

Remark 1

Note that taking the measure-theoretic interior \((E)^{1}\) is of central importance. Proposition 6 does not hold when the integral over \((E)^{1}\) is replaced by the integral over E, as can be seen from the example of the closed unit ball, i.e., \(E = \overline{\mathcal{B}_{1} (0)}\), \(u = 1_{E}\) and v ≡ 1.

4.1 Proof of Theorem 1

In Sect. 3.2 we have shown that the rounding process induced by Algorithm 1 is well-defined in the sense that it returns an integral solution \(\bar{u}_{\gamma} \in\operatorname{BV} (\varOmega)^{l}\) almost surely. We now return to proving an upper bound for the expectation of \(f ( \bar{u})\) as in the approximate coarea formula (44).

We first establish measurability and show that the expectation of the linear part (data term) of f is invariant under the rounding process.

Proposition 7

Let \((u^{k}_{\gamma})\) be the sequence generated by Algorithm  1. Then for every k⩾1 the mappings

$$ g^{k} : \varGamma \times \varOmega \rightarrow \mathbb{R}^{l}, \qquad g^{k} (\gamma, x) := u^{k}_{\gamma} (x) $$
(99)

and

$$ h : \varGamma \times \varOmega \rightarrow \bar{\mathbb{R}}^{l}, \qquad h (\gamma, x) := \bar{u}_{\gamma} (x) $$
(100)

are \((\mu\times\mathcal{L}^{d})\)-measurable.

Proof

In Algorithm 1, instead of step 5 we consider the simpler update

$$ u^{k} := 1_{ \{ u^{k - 1}_{i^{k}} > \alpha^{k} \} } e^{i^{k}} + 1_{ \{ u^{k - 1}_{i^{k}} \leqslant \alpha^{k} \} } u^{k - 1}. $$
(101)

This yields exactly the same sequence \((u^{k})\), since if \(u_{i^{k}}^{k - 1} (x) > \alpha^{k}\), then either \(x \in U^{k - 1}\), or \(u_{i^{k}}^{k - 1} (x) = 1\). In both algorithms, points that are assigned a label \(e^{i^{k}}\) at some point in the process will never be assigned a different label at a later point. This is made explicit in Algorithm 1 by keeping track of the set \(U^{k}\) of yet unassigned points. In contrast, using the step (101), a point may be contained in several of the sets \(\{u_{i^{k}}^{k - 1} > \alpha^{k} \}\) of points that get assigned label \(i^{k}\) in step k, but once assigned, its label cannot change during a later iteration.

For the measurability of the g k it suffices to show measurability of the mapping

(102)

From the update (101) we see that \(u^{k}_{(\gamma^{1},\ldots,\gamma^{k})}\) is a finite sum of functions of the form \(e^{i^{k}} \cdot 1_{A^{1}} \cdots 1_{A^{m}}\) and \(u \cdot 1_{A^{1}} \cdots 1_{A^{m}}\) for some m ⩽ k, where each \(A^{j}\), j ⩽ m, is either the set \(\{(\gamma^{1},\ldots,\gamma^{k},x) \mid u_{i^{j}} (x) > \alpha^{j}\}\) or its complement. Each of these indicator functions is jointly measurable in (γ,x): every component of u is again measurable, and for any measurable scalar-valued function v, the set \(B :=\{(\alpha,x) \mid v(x)>\alpha\}\) is the countable union of measurable sets,

$$ B = \bigcup_{q \in \mathbb{Q}} \bigl( \{ \alpha \mid \alpha < q \} \times \bigl\{ x \mid v (x) > q \bigr\} \bigr), $$
(103)

and therefore \((\alpha, x) \mapsto 1_{B} (\alpha, x)\) is jointly measurable in (α,x). Consequently, \(u^{k}_{\gamma}\) is the finite sum of products of functions that are jointly measurable in (γ,x), which shows the first assertion.

Regarding the second assertion, Theorem 2 shows that h(γ,x)=lim k→∞ g k(γ,x), except possibly for a negligible set of γ where the sequence \((u^{k}_{\gamma})\) does not become stationary. Since all g k are measurable, their pointwise limit and therefore h are measurable as well. □

Proposition 8

For every k⩾1 the mappings

$$ g'^{k} : \varGamma \rightarrow \mathbb{R}, \qquad g'^{k} (\gamma) := \int_{\varOmega} \bigl\langle u^{k}_{\gamma} (x), s (x) \bigr\rangle \,\mathrm{d}x $$
(104)

and

$$ h' : \varGamma \rightarrow \mathbb{R}, \qquad h' (\gamma) := \int_{\varOmega} \bigl\langle \bar{u}_{\gamma} (x), s (x) \bigr\rangle \,\mathrm{d}x $$
(105)

are μ-measurable.

Proof

The first assertion follows directly from Proposition 7 and \((\mu\times\mathcal{L}^{d})\)-measurability of the map \((\gamma, x) \mapsto s (x)\). For each fixed γ the sequence \((g'^{k} (\gamma))_{k}\) is bounded since \(s \in L^{1} (\varOmega)\) and u is essentially bounded. Together with Theorem 2 this implies

$$ h' (\gamma) = \lim_{k \rightarrow \infty} g'^{k} (\gamma) \quad \text{for } \mu \text{-a.e. } \gamma \in \varGamma, $$
(106)

therefore h′ is measurable as well, as it is the limit of measurable functions. □

Proposition 9

The sequence (u k) generated by Algorithm  1 satisfies

$$ \mathbb{E} \int_{\varOmega} \bigl\langle u^{k}_{\gamma} (x), s (x) \bigr\rangle \,\mathrm{d}x = \int_{\varOmega} \bigl\langle u (x), s (x) \bigr\rangle \,\mathrm{d}x \quad \text{for all } k \in \mathbb{N}. $$
(107)

Proof

Proposition 8 shows that the expectation is well-defined. Integrability on Γ × Ω again holds because \(u_{\gamma}^{k}\) takes values in \(\Delta_{l}\) and is therefore essentially bounded, \(s \in L^{1} (\varOmega)\), and Ω is bounded, which uniformly bounds the inner integral over all γ.

Assume γ ∈ Γ is arbitrary but fixed, and denote \(\gamma' := (\gamma^{1}, \ldots, \gamma^{k - 1})\) and \(u^{\gamma'} := u_{\gamma}^{k - 1}\). We apply induction on k: For k ⩾ 1,

(108)
(109)
(110)
(111)

We take into account the property [2, Proposition 1.78], which is a direct consequence of Fubini's theorem and is also used in the proof of the thresholding theorem for the two-class case [7]:

$$ \int_{\varOmega} v \,\mathrm{d}x = \int_{0}^{\infty} \bigl| \{ x \in \varOmega \mid v (x) > t \} \bigr| \,\mathrm{d}t \quad \text{for measurable } v \geqslant 0, $$
(112)
$$ \text{in particular} \quad \mathbb{E}_{\alpha^{k}} 1_{ \{ u^{\gamma'}_{i^{k}} (x) > \alpha^{k} \} } = \int_{0}^{1} 1_{ \{ u^{\gamma'}_{i^{k}} (x) > \alpha \} } \,\mathrm{d}\alpha = u^{\gamma'}_{i^{k}} (x). $$
(113)

This leads to

(114)

and therefore, using \(u^{\gamma'} (x) \in \Delta_{l}\) a.e.,

(115)
(116)

Since 〈u 0,s〉=〈u,s〉, the assertion follows by induction. □

Remark 2

Proposition 9 shows that the data term is—in the mean—not affected by the probabilistic rounding process, i.e., it satisfies an exact coarea-like formula, even in the multiclass case.

Bounding the regularizer is more involved: For \(\gamma^{k} = (i^{k}, \alpha^{k})\), define

$$ U_{(i, \alpha)} := \bigl\{ x \in \varOmega \mid u_{i} (x) \leqslant \alpha \bigr\}, $$
(117)
$$ V_{(i, \alpha)} := \bigl( U_{(i, \alpha)} \bigr)^{1}, $$
(118)
$$ V^{k} := \bigl( U^{k} \bigr)^{1}. $$
(119)

As the measure-theoretic interior is invariant under \(\mathcal{L}^{d}\)-negligible modifications, given some fixed sequence γ the sequence \((V^{k})\) is invariant under \(\mathcal{L}^{d}\)-negligible modifications of \(u = u^{0}\), i.e., it is uniquely defined when viewing u as an element of \(L^{1} (\varOmega)^{l}\). Some calculations yield

(120)
(121)

From these observations and Proposition 4,

(122)
(123)
(124)

The last equality can be shown by induction: For the base case k = 1, we have \(V^{0} = (U^{0})^{1} = (\varOmega)^{1} = \varOmega\), where the last equality can be shown by mutual inclusion, using the fact that Ω is open and has a Lipschitz boundary by assumption. For k ⩾ 2,

(125)
(126)
(127)
(128)

which shows (124).

Moreover, since V k is the measure-theoretic interior of U k, both sets are equal up to an \(\mathcal{L}^{d}\)-negligible set (cf. (197)). Again we first show measurability of the involved mappings.

Proposition 10

For every k⩾1 the mappings

$$ g''^{k} : \varGamma \rightarrow \mathbb{R}, \qquad g''^{k} (\gamma) := \int_{V^{k - 1}_{\gamma} \setminus V^{k}_{\gamma}} \varPsi ( D \bar{u}_{\gamma} ) $$
(129)

and

$$ h'' : \varGamma \rightarrow \mathbb{R}, \qquad h'' (\gamma) := \int_{\varOmega} \varPsi ( D \bar{u}_{\gamma} ) $$
(130)

are μ-measurable.

Proof

We only sketch the proof. Let k ⩾ 1 be arbitrary but fixed. Using a similar argument as in the proof of Proposition 8 (see also the proof of Theorem 1) one can see that \(h''(\gamma)=\sum_{k=1}^{\infty} g''^{k}(\gamma)\), therefore it suffices to show measurability of the \(g''^{k}\).

We note that \(g''^{k}\) can be written, up to a μ-negligible set, as the sum

(131)

The key is that \(u^{k'}_{\gamma} = \bar{u}_{\gamma}\) once \(e^{\top} c^{k'} < 1\). Each \(p^{k'}\) depends only on a finite number of the \(\gamma^{i}\), and since the indicator function is measurable, it is enough to show measurability of the mappings \(p^{k'}\) on their respective finite-dimensional subsets of Γ for all k′ ∈ ℕ.

Choose a fixed but arbitrary k′. With the definition \(E_{\gamma} := U_{\gamma^{k}}\) we obtain from Proposition 4

(132)

which together with [2, Theorem 3.84] leads to

(133)
(134)

where \(\nu_{E_{\gamma}} ( x ) := (D 1_{E_{\gamma}} / |D 1_{E_{\gamma}}|) (x) \) on \(\mathcal{F} E_{\gamma}\). Measurability of the \(p^{k'}\) can be shown using a result about measure-valued mappings [2, Proposition 2.26]. This first requires showing that the mapping \(\gamma\mapsto|D 1_{E_{\gamma}}|(B)\) is μ-measurable for every open set \(B \subseteq \varOmega\), which is a corollary of the coarea formula [2, Theorem 3.40].

The second requirement is that the integrand in (134) is bounded and \((\mathcal{B}_{\mu} \times\mathcal{B}(\varOmega ))\)-measurable. For the indicator function this follows from the definitions in a straightforward way. The normal mapping can be rewritten as

(135)

Using a slight modification of [2, Proposition 2.26] one can show the \((\mathcal{B}_{\mu} \times\mathcal{B}(\varOmega ))\)-measurability of the mappings \((\gamma,x) \mapsto D 1_{E_{\gamma }}(\mathcal{B}_{\rho}(x))\) and \((\gamma,x) \mapsto| D 1_{E_{\gamma }}(\mathcal{B}_{\rho}(x)) |\), and therefore of \(1_{\mathcal{F} E_{\gamma}}\) and of the normal mapping in (135). Together with Proposition 7 this ensures \((\mathcal {B}_{\mu} \times\mathcal{B}(\varOmega))\)-measurability of the normal and trace terms in (134), and, since Ψ is continuous, of the whole integrand.

Therefore all assumptions of [2, Proposition 2.26] are fulfilled, and we obtain the μ-measurability of all p k and finally of gk and h″. □

We now prepare for an induction argument on the expectation of the regularizing term restricted to the sets \(V^{k - 1} \setminus V^{k}\). The following proposition provides the initial step (k = 1).

Proposition 11

Assume that Ψ satisfies the lower- and upper-boundedness conditions (24) and (25). Then

(136)

Proof

Denote \((i, \alpha) = \gamma^{1}\). Since \(1_{U_{(i, \alpha)}} = 1_{V_{(i, \alpha)}}\) \(\mathcal{L}^{d}\)-a.e., we have

(137)

Therefore, since \(V^{0} = (U^{0})^{1} = (\varOmega)^{1} = \varOmega\),

(138)

Since \(u \in\operatorname{BV} (\varOmega)^{l}\), we know that \(\operatorname{Per} (V_{(i, \alpha)}) < \infty\) holds for \(\mathcal{L}^{1}\)-a.e. α and any i [2, Theorem 3.40]. Therefore we conclude from Proposition 5 that for \(\mathcal{L}^{1}\)-a.e. α,

(139)

Both of the integrals are zero, since \(D e^{i} = 0\) and

(140)

therefore

(141)

Since Proposition 10 provides measurability, the bound carries over to the expectation.

Also \(\operatorname{Per} (V_{(i, \alpha)}) = \operatorname{Per} (U_{(i, \alpha)})\), since the perimeter is invariant under \(\mathcal{L}^{d}\)-negligible modifications. The assertion then follows using \(V^{0} = \varOmega\), \(V^{1} = V_{(i, \alpha)}\) and the coarea formula:

(142)
(143)
(144)
(145)

 □

We now take care of the induction step for the regularizer bound.

Proposition 12

Let Ψ satisfy the upper-boundedness condition (25). Then, for any k⩾2,

(146)
(147)

Proof

Define the shifted sequence \(\gamma' = (\gamma'^{k})_{k = 1}^{\infty}\) by \(\gamma'^{k} := \gamma^{k + 1}\), and let

(148)
(149)

By Proposition 2 and Proposition 10 we may assume that \(\bar{u}_{\gamma}\) exists μ-a.e. and is an element of \(\operatorname{BV} (\varOmega)^{l}\), and that the expectation is well-defined. We denote \(\gamma^{1} = (i, \alpha)\); then \(V^{k - 1} \setminus V^{k} = V_{(i, \alpha)} \cap W_{\gamma'}\) due to (123). For each pair (i,α) we denote by \(((i, \alpha), \gamma')\) the sequence obtained by prepending (i,α) to the sequence γ′. Then

(150)

Since in the first iteration of the algorithm no points in U (i,α) are assigned a label, \(\bar{u}_{((i, \alpha), \gamma')} = \bar {u}_{\gamma'}\) holds on U (i,α), and therefore \(\mathcal{L}^{d}\)-a.e. on V (i,α). Therefore we may apply Proposition 6 and substitute \(D \bar{u}_{((i, \alpha), \gamma')}\) by \(D \bar {u}_{\gamma'}\) in (150):

(151)
(152)

By definition of the measure-theoretic interior (87), the indicator function \(1_{V_{(i, \alpha )}}\) is bounded from above by the density function \(\varTheta_{U_{(i, \alpha)}}\) of \(U_{(i, \alpha)}\),

$$ 1_{V_{(i, \alpha)}} (x) \leqslant \varTheta_{U_{(i, \alpha)}} (x) := \lim_{\rho \searrow 0} \frac{| U_{(i, \alpha)} \cap \mathcal{B}_{\rho} (x) |}{| \mathcal{B}_{\rho} (x) |}, $$
(153)

which exists \(\mathcal{H}^{d - 1}\)-a.e. on Ω by [2, Proposition 3.61]. Therefore, denoting by \(\mathcal{B}_{\delta} (\cdot)\) the mapping \(x \in\varOmega\mapsto \mathcal{B}_{\delta} (x)\),

Rearranging the integrals and the limit, which can be justified by \(\operatorname{TV} ( \bar{u}_{\gamma'}) < \infty\) almost surely and dominated convergence using (25), we get

(154)

We again apply [2, Proposition 1.78] to the two innermost integrals (alternatively, use Fubini’s theorem), which leads to

(155)
(156)

Using the fact that \(u (y) \in \Delta_{l}\), this collapses according to

(157)
(158)
(159)

Reverting the index shift and using \(\bar{u}_{\gamma'} = \bar{u}_{\gamma}\) concludes the proof:

(160)

 □

We are now ready to prove the main result, Theorem 1, as stated in the introduction.

Proof of Theorem 1

The fact that the algorithm provides \(\bar{u} \in\mathcal{C}_{\mathcal{E}}\) almost surely follows from Theorem 2. Therefore there almost surely exists k′ := k′(γ) ⩾ 1 such that \(|U^{k'}| = 0\) and \(\bar{u}_{\gamma} = u_{\gamma}^{k'}\). On the one hand, this implies

(161)

almost surely. On the other hand, \(V^{k'} = (U^{k'})^{1} = \emptyset\) and therefore

(162)

almost surely. From (161) and (162) we obtain

(163)

In the first term, the \(u_{\gamma}^{k}\) are elements of \(\operatorname{BV}(\varOmega,\Delta_{l})\) and therefore of \(L^{\infty} (\varOmega, \mathbb{R}^{l})\), except possibly on a negligible set of γ. Since \(s \in L^{1} (\varOmega)\) this means \(\gamma\mapsto\langle u_{\gamma}^{k}, s \rangle= |\langle u_{\gamma}^{k}, s \rangle|\) is bounded from above by a constant outside a negligible set (by Proposition 8 it is also measurable) and the dominated convergence theorem applies. The second term satisfies the requirements for monotone convergence, since all summands exist, are nonnegative almost surely, and are measurable by Proposition 10. Therefore the integrals and limits can be swapped,

(164)

The first term in (164) is equal to \(\int_{\varOmega} \langle u, s \rangle \,\mathrm{d}x\) due to Proposition 9. An induction argument using Propositions 11 and 12 shows that the second term can be bounded according to

(165)
(166)
(167)

therefore

(168)

Since s ⩾ 0 and \(\lambda_{u} \geqslant \lambda_{l}\), the linear term is bounded by \(\int_{\varOmega} \langle u, s \rangle \,\mathrm{d}x \leqslant 2 (\lambda_{u} / \lambda_{l}) \int_{\varOmega} \langle u, s \rangle \,\mathrm{d}x\), which proves the assertion. □

Corollary 1 (see Sect. 1) follows immediately using \(f (u^{\ast}) \leqslant f (u_{\mathcal{E}}^{\ast})\), cf. (45). We have demonstrated that the proposed approach allows to recover, from the solution u of the convex relaxed problem (13), an approximate integral solution \(\bar{u}^{\ast}\) of the nonconvex original problem (1) with an upper bound on the objective.

For the specific case Ψ=Ψ d as in (9), we have

Proposition 13

Let \(d : \mathcal{I}^{2} \rightarrow\mathbb{R}_{\geqslant0}\) be a metric and \(\varPsi = \varPsi_{d}\). Then one may set

$$ \lambda_{l} = \min_{i \neq j} d (i, j), \qquad \lambda_{u} = \max_{i \neq j} d (i, j). $$
Proof

From the remarks in the introduction we obtain (cf. [19])

$$ \varPsi_{d} \bigl( \nu \bigl( e^{i} - e^{j} \bigr)^{\top} \bigr) = d (i, j) \leqslant \max_{i \neq j} d (i, j) \quad \text{for } \| \nu \|_{2} = 1, $$

which shows the upper bound. For the lower bound, take any \(z \in \mathbb{R}^{d \times l}\) satisfying ze = 0 as in (24), set \(c := \min_{i \neq j} d (i, j)\), \(v'^{i} :=\frac{c}{2} \frac{z^{i}}{\|z^{i} \|_{2}}\) if \(z^{i} \neq 0\) and \(v'^{i} := 0\) otherwise, and \(v :=v' (I - \frac{1}{l} ee^{\top}) \). Then \(v \in\mathcal {D}_{\operatorname{loc}}^{d}\), since \(\| v^{i} - v^{j} \|_{2} = \| v'^{i} - v'^{j} \|_{2} \leqslant c\) and \(ve = v' (I - \frac{1}{l} ee^{\top}) e = 0\). Therefore,

$$ \varPsi_{d} (z) \geqslant \langle v, z \rangle = \bigl\langle v', z \bigr\rangle = \sum_{i : z^{i} \neq 0} \frac{c}{2} \frac{\langle z^{i}, z^{i} \rangle}{\| z^{i} \|_{2}} $$
(169)
$$ \hphantom{\varPsi_{d} (z) \geqslant \langle v, z \rangle} = \frac{\lambda_{l}}{2} \sum_{i = 1}^{l} \bigl\| z^{i} \bigr\|_{2}, $$
(170)

proving the lower bound. □

Finally, for \(\varPsi_{d}\) we obtain the factor

$$ 2 \frac{\lambda_{u}}{\lambda_{l}} = 2 \, \frac{\max_{i \neq j} d (i, j)}{\min_{i \neq j} d (i, j)} $$
(171)

determining the optimality bound, as claimed in the introduction (29). The bound in (28) is the same as the known bounds for finite-dimensional metric labeling [12] and α-expansion [4]; however, it extends these results to problems on continuous domains for a broad class of regularizers.

5 Conclusion

In this work we considered a method for recovering approximate solutions of image partitioning problems from solutions of a convex relaxation. We proposed a probabilistic rounding method motivated by the finite-dimensional framework, and showed that it is possible to obtain a priori bounds on the optimality of the integral solution obtained by rounding a solution of the convex relaxation.

The obtained bounds are compatible with known bounds for the finite-dimensional setting. However, to our knowledge, this is the first fully convex approach that is both formulated in the spatially continuous setting and provides a true a priori bound. We showed that the approach can also be interpreted as an approximate variant of the coarea formula.

A peculiar property of the presented approach is that it provides a bound of two for the uniform metric even in the two-class case, where the relaxation is known to be exact. The question remains how to prove an optimal bound.

While the results apply to a quite general class of regularizers, they are formulated for the homogeneous case. Non-homogeneous regularizers constitute an interesting direction for future work. In particular, such regularizers naturally occur when applying convex relaxation techniques [1, 24] in order to solve nonconvex variational problems.

With increasing computational power, such techniques have recently become quite popular. For problems where the nonconvexity is confined to the data term, they make it possible to find a global minimizer. A proper extension of the results outlined in this work may provide a way to find good approximate solutions of problems where the regularizer is also nonconvex.