1 Introduction

In recent years, more and more researchers in the control community have focused their attention on the distributed coordination of multi-agent systems, owing to its broad applications in fields such as sensor networks, unmanned aerial vehicles (UAVs), mobile robot systems (MRS), and robotic teams.

In cooperative control, a key problem is to design distributed protocols such that a group of agents can reach consensus through local communications. Over the past decade, numerous interesting results on the consensus problem have been obtained for both discrete-time and continuous-time multi-agent systems. Reynolds systematically studied and simulated the behavior of biological groups such as bird flocks and fish schools, and proposed the Boid model [1], which still has a broad impact in the field of swarm intelligence. The Vicsek model [2] was proposed on the basis of statistical mechanics: each of the N agents moves on the two-dimensional plane at a constant speed and determines its heading according to the headings of its neighboring agents. Among the most promising tools are the linear consensus algorithms, simple distributed algorithms that require only minimal computation, communication, and synchronization to compute averages of local quantities residing in each device. These algorithms have their roots in the analysis of Markov chains [3] and have been deeply studied within the computer science community for load balancing and within the linear algebra community for the asynchronous solution of linear systems [4, 5]. For the linear consensus problem, Olfati-Saber et al. established a relatively complete theoretical framework based on graph theory and dynamic systems theory, and systematically analyzed different types of consensus problems within that framework [6–9]. Building on this work, Yu et al. [10–12] derived three necessary and sufficient conditions for the algorithm to converge to a consensus state when the states of the agents are independent of the data transferred, and analyzed its correctness, effectiveness, and efficiency, with verification in several specific applications. For multi-agent consensus and synchronization problems on complex networks, Li et al. proposed a multi-agent control architecture based on higher-order linear systems and obtained a series of fruitful results [13–15]. In [16], average consensus is discussed: the consensus algorithm is formulated as a matrix factorization problem, and machine learning methods are proposed to solve it.

For most consensus results in the literature, it is usually assumed that each agent can obtain its neighbors' information precisely. Since real networks often operate in uncertain communication environments, it is necessary to consider consensus problems under measurement noises. Such problems have been studied extensively. Several works [17–19] have addressed the consensus problem of multi-agent systems under multiplicative measurement noises, where the noise intensities are proportional to the relative states. In [20, 21], the authors studied consensus problems with noisy measurements of neighbors' states, and a stochastic approximation approach was applied to obtain mean square and almost sure convergence in models with fixed network topologies or with independent communication failures. Necessary and/or sufficient conditions for stochastic consensus of multi-agent systems were established for fixed and time-varying topologies in [22, 23]. Liu et al. studied signal delay in the linear consensus protocol [24]; they introduced the concepts of strong consensus and mean square consensus under fixed topology in the presence of noise and delay between agents, and gave necessary and sufficient conditions for strong consensus and mean square consensus in both the leaderless and leader-follower settings. The distributed consensus problem for linear discrete-time multi-agent systems with delays and noises was investigated in [25] by introducing a novel technique to overcome the difficulties induced by the delays and noises. In [26], a novel kind of cluster consensus of multi-agent systems with several different subgroups was considered based on Markov chains and nonnegative matrix analysis.

In this paper, we discuss the noise problem of the discrete linear consensus protocol and give a sufficient condition ensuring that the noise in the protocol is controllable. The remainder of this paper is organized as follows. Some preliminaries and definitions are given in Sect. 2. In Sect. 3, we show that the noise of the DLCP is uncontrollable. In Sect. 4, we propose a strategy for controlling the noise by means of a noise suppression function, and establish Theorem 1, which gives a reasonable range for the noise suppression function. Sect. 5 presents the conclusions of this paper.

2 Preliminaries

Consider n agents distributed according to a directed graph \( \boldsymbol{\mathcal{G}} = \left( {\boldsymbol{\mathcal{V}},\boldsymbol{\mathcal{E}}} \right) \) consisting of a set of nodes \( \boldsymbol{\mathcal{V}} = \left\{ {1,2, \ldots ,n} \right\} \) and a set of edges \( \boldsymbol{\mathcal{E}} \subseteq \boldsymbol{\mathcal{V}} \times \boldsymbol{\mathcal{V}} \). In the digraph, an edge from node i to node j is denoted as an ordered pair (i, j) where i ≠ j (so there is no edge between a node and itself). A path (from \( i_1 \) to \( i_l \)) consists of a sequence of nodes \( i_1, i_2, \ldots, i_l \), l ≥ 2, such that \( (i_k, i_{k+1}) \in \boldsymbol{\mathcal{E}} \) for k = 1, …, l − 1. We say node i is connected to node j (i ≠ j) if there exists a path from i to j. For convenience of exposition, the two names, agent and node, will be used interchangeably. The agent \( A_k \) (resp., node k) is a neighbor of \( A_i \) (resp., node i) if (k, i) \( \in \boldsymbol{\mathcal{E}} \), where k ≠ i. Denote the neighbors of node i by \( \boldsymbol{\mathcal{N}}_{i} \) = {k | (k, i) \( \in \boldsymbol{\mathcal{E}} \)}. For agent \( A_i \), we denote its state at time t by \( x_i(t) \in {\mathbb{R}} \), where t ∈ \( {\mathbb{Z}}_{ + } \), \( {\mathbb{Z}}_{ + } \) = {0, 1, 2, …}. For each i ∈ \( \boldsymbol{\mathcal{V}} \), agent \( A_i \) receives information from its neighbors.

Definition 1:

(Discrete Linear Consensus Protocol, DLCP) The linear consensus protocol is given by (1):

$$ x_{i} \left( {t + 1} \right) = x_{i} \left( t \right) + \sum\limits_{{j \in \boldsymbol{\mathcal{N}}_{i} \left( t \right)}} {\alpha_{ij} \left( t \right)\left( {x_{j} \left( t \right) - x_{i} \left( t \right)} \right)} \;\;\;\forall i \in \boldsymbol{\mathcal{V}} $$
(1)

where \( \alpha_{ij} \left( t \right) > 0 \) is a real-valued function of \( t \) with \( \sum\limits_{j = 1}^{n} {\alpha_{ij} \left( t \right)} \le 1 \); it characterizes the extent of the influence of agent \( j \) on agent \( i \) at time \( t \).
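To make the update rule concrete, the following is a minimal Python sketch of protocol (1) on a small directed ring; the topology and the uniform weights \( \alpha_{ij}(t) \equiv 0.25 \) are illustrative assumptions, not values prescribed by the protocol.

```python
import numpy as np

def dlcp_step(x, neighbors, alpha):
    """One DLCP update (1): x_i(t+1) = x_i(t) + sum_{j in N_i} alpha_ij (x_j - x_i)."""
    x_next = x.copy()
    for i, N_i in enumerate(neighbors):
        x_next[i] += sum(alpha[i][j] * (x[j] - x[i]) for j in N_i)
    return x_next

# Illustrative example: 4 agents on a directed ring with weights 0.25
# (alpha_ij > 0 and sum_j alpha_ij <= 1 are satisfied).
neighbors = [[1], [2], [3], [0]]          # N_i = in-neighbors of agent i
alpha = [{j: 0.25 for j in N_i} for N_i in neighbors]
x = np.array([1.0, 2.0, 3.0, 4.0])
for _ in range(200):
    x = dlcp_step(x, neighbors, alpha)
print(x)                                  # all components approach a common value
```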

Definition 2:

(Weighted Laplacian Matrix) The matrix \( \boldsymbol{\mathcal{L}}\left( t \right) = \left[ {l_{ij} \left( t \right)} \right]_{n \times n} \) is called the weighted Laplacian matrix of graph \( \boldsymbol{\mathcal{G}} \), where

$$ l_{ij} \left( t \right) = \left\{ {\begin{array}{*{20}l} { - \alpha_{ij} \left( t \right)} \hfill & {if\;j \in \boldsymbol{\mathcal{N}}_{i} \;{\text{and}}\;i \ne j} \hfill \\ {\sum\limits_{j = 1}^{n} {\alpha_{ij} \left( t \right)} } \hfill & {if\;\;i = j} \hfill \\ 0 \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right. $$

Let \( {\mathbf{X}}\left( t \right) = \left[ {x_{1} \left( t \right), \ldots ,x_{n} \left( t \right)} \right]^{T} \) and let \( \boldsymbol{\mathcal{I}}_{n} \) denote the \( n \times n \) identity matrix; the matrix form of (1) is:

$$ {\mathbf{X}}\left( {t + 1} \right) = \boldsymbol{\mathcal{A}}\left( t \right){\mathbf{X}}\left( t \right) $$
(2)

where \( \boldsymbol{\mathcal{A}}\left( t \right) = \boldsymbol{\mathcal{I}}_{n} - \boldsymbol{\mathcal{L}}\left( t \right) \).
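Equivalently, one can assemble the weighted Laplacian of Definition 2 and iterate the matrix form (2); a short sketch under the same illustrative ring topology and weights as above:

```python
import numpy as np

def weighted_laplacian(alpha, n):
    """L(t) of Definition 2: l_ij = -alpha_ij for neighbors j, l_ii = sum_j alpha_ij."""
    L = np.zeros((n, n))
    for i, w in enumerate(alpha):
        for j, a in w.items():
            L[i, j] = -a
            L[i, i] += a
    return L

n = 4
alpha = [{1: 0.25}, {2: 0.25}, {3: 0.25}, {0: 0.25}]   # directed ring, as before
A = np.eye(n) - weighted_laplacian(alpha, n)           # A(t) = I_n - L(t)
x = np.array([1.0, 2.0, 3.0, 4.0])
for _ in range(200):
    x = A @ x                                          # X(t+1) = A(t) X(t)
print(x)                                               # same consensus value as before
```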

Suppose \( r \sim N\left( {\mu ,\sigma^{2} } \right) \) is a normally distributed random variable, and let \( \text{var} \left( r \right) \) denote its variance \( \sigma^{2} \). If \( {\mathbf{R}} = \left[ {r_{1} , \ldots ,r_{n} } \right]^{T} \) is a random vector, then \( \text{var} \left( {\mathbf{R}} \right) \) denotes the covariance matrix of R, and \( \left[ {\text{var} \left( {\mathbf{R}} \right)} \right]_{i} \) denotes the variance of the ith component of R, i.e. \( \left[ {\text{var} \left( {\mathbf{R}} \right)} \right]_{i}\, = \text{var} \left( {r_{i} } \right) \) is the ith diagonal element of the matrix \( \text{var} \left( {\mathbf{R}} \right) \).

If the messages received by agent \( i \) are corrupted by mutually independent, normally distributed noise, (1) can be rewritten as:

$$ x_{i} \left( {t + 1} \right) = x_{i} \left( t \right) + \sum\limits_{{j \in \boldsymbol{\mathcal{N}}_{i} \left( t \right)}} {\alpha_{ij} \left( t \right)\left( {\left( {x_{j} \left( t \right) + r_{j} \left( t \right)} \right) - x_{i} \left( t \right)} \right)} \;\;\;\;\forall i \in \boldsymbol{\mathcal{V}} $$
(3)

where \( r_{j} \left( t \right) \sim N\left( {0,\sigma_{j}^{2} } \right) \) is a normally distributed random variable representing the noise carried by the state information \( x_{j} \left( t \right) \). Let \( {\mathbf{R}}\left( t \right) = \left[ {r_{1} \left( t \right), \ldots ,r_{n} \left( t \right)} \right]^{T} \); thus (3) can be rewritten in matrix form:

$$ {\mathbf{X}}\left( {t + 1} \right) = \boldsymbol{\mathcal{A}}\left( t \right){\mathbf{X}}\left( t \right) + \boldsymbol{\mathcal{W}}\left( t \right){\mathbf{R}}\left( t \right) $$
(4)

In the above equation, \( \boldsymbol{\mathcal{W}}\left( t \right) = - \boldsymbol{\mathcal{L}}\left( t \right) - diag\left( { - \boldsymbol{\mathcal{L}}\left( t \right)} \right) \), where \( diag\left( { - \boldsymbol{\mathcal{L}}\left( t \right)} \right) \) is the diagonal part of \( - \boldsymbol{\mathcal{L}}\left( t \right) \). Iterating (4) yields:

$$ {\mathbf{X}}\left( {t + 1} \right) = \prod\limits_{k = 0}^{t} {\boldsymbol{\mathcal{A}}\left( k \right)} {\mathbf{X}}\left( 0 \right) + \sum\limits_{m = 1}^{t} {\left( {\left( {\prod\limits_{j = t - m + 1}^{t} {\boldsymbol{\mathcal{A}}\left( j \right)} } \right)\boldsymbol{\mathcal{W}}\left( {t - m} \right){\mathbf{R}}\left( {t - m} \right)} \right)} + \boldsymbol{\mathcal{W}}\left( t \right){\mathbf{R}}\left( t \right) $$
(5)
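Before simplifying (5) further, it may help to see a single noisy update (4) in code. A minimal sketch, again assuming the illustrative ring topology and an assumed noise level σ = 0.1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
P = np.roll(np.eye(n), 1, axis=1)     # directed ring: agent i listens to agent i+1
L = 0.25 * (np.eye(n) - P)            # weighted Laplacian with weights 0.25
A = np.eye(n) - L                     # A(t) = I_n - L(t)
W = -L - np.diag(np.diag(-L))         # W(t): off-diagonal weights only, zero diagonal
sigma = 0.1                           # assumed noise standard deviation

def noisy_step(x):
    """One step of (4): X(t+1) = A X(t) + W R(t), with R(t) ~ N(0, sigma^2 I)."""
    return A @ x + W @ rng.normal(0.0, sigma, size=n)

x = np.array([1.0, 2.0, 3.0, 4.0])
x = noisy_step(x)
```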

For convenience, define \( \prod\limits_{j = t - m + 1}^{t} {\boldsymbol{\mathcal{A}}\left( j \right)} = \boldsymbol{\mathcal{I}}_{n} \) when \( m = 0 \); then (5) can be simplified as:

$$ {\mathbf{X}}\left( {t + 1} \right) = \prod\limits_{k = 0}^{t} {\boldsymbol{\mathcal{A}}\left( k \right)} {\mathbf{X}}\left( 0 \right) + \sum\limits_{m = 0}^{t} {\left( {\left( {\prod\limits_{j = t - m + 1}^{t} {\boldsymbol{\mathcal{A}}\left( j \right)} } \right)\boldsymbol{\mathcal{W}}\left( {t - m} \right){\mathbf{R}}\left( {t - m} \right)} \right)} $$

Let \( \boldsymbol{\mathcal{R}}\left( {t - m} \right) = \boldsymbol{\mathcal{W}}\left( {t - m} \right){\mathbf{R}}\left( {t - m} \right) \), \( {\mathbf{Y}}\left( t \right) = \sum\limits_{m = 0}^{t} {\left( {\prod\limits_{j = t - m + 1}^{t} {\boldsymbol{\mathcal{A}}\left( j \right)} } \right)\boldsymbol{\mathcal{R}}\left( {t - m} \right)} \), and \( {\mathbf{B}}\left( t \right) = \prod\limits_{k = 0}^{t} {\boldsymbol{\mathcal{A}}\left( k \right)} {\mathbf{X}}\left( 0 \right) \); thus we get:

$$ {\mathbf{X}}\left( {t + 1} \right) = {\mathbf{B}}\left( t \right) + {\mathbf{Y}}\left( t \right) $$
(6)

Analyzing the random part \( {\mathbf{Y}}\left( t \right) \) of (6), we find that it is a linear combination of independent Gaussian random vectors; therefore it is itself a normally distributed random vector.

Definition 3:

(Noise Controllable) Assume the consensus protocol converges to a consensus state vector \( {\mathbf{X}}^{*} = \left[ {x^{*} , \ldots ,x^{*} } \right]^{T} \) under noise-free conditions. The consensus protocol described by (6) is called noise controllable if and only if \( \lim\limits_{t \to \infty } {\mathbf{B}}\left( t \right) = {\mathbf{X}}^{*} \) and there exists a constant M such that \( \lim\limits_{t \to \infty } \left[ {\text{var} \left( {{\mathbf{Y}}( t )}\right)} \right]_{i} \le M \) for \( i = 1, \ldots ,n \).

3 Noise Uncontrollability of the Discrete Linear Consensus Protocol

For any initial state \( {\mathbf{X}}\left( 0 \right) \), assume that the consensus protocol (2) converges to a consensus state \( {\mathbf{X}}^{*} \) associated with \( {\mathbf{X}}\left( 0 \right) \). Under this assumption, we discuss the impact of noise on the protocol.

Lemma 1:

Let \( {\mathbf{Y}}\left( t \right) \) be the random part of the consensus protocol (6). Then \( \mathop {\lim\limits}_{t \to \infty } \left[ {\text{var} \left( {{\mathbf{Y}}\left( t \right)} \right)} \right]_{i} = \infty \) for any initial state \( {\mathbf{X}}\left( 0 \right) \) and \( i = 1, \ldots ,n \).

Proof:

Let \( y(t,m) = \left( {\prod\limits_{j = t - m + 1}^{t} {\boldsymbol{\mathcal{A}}\left( j \right)} } \right)\boldsymbol{\mathcal{R}}\left( {t - m} \right) \); then \( {\mathbf{Y}}\left( t \right) = \sum\limits_{m = 0}^{t} {y\left( {t,m} \right)} \), and we have:

$$ \begin{aligned} & \text{var} \left( {y\left( {t,m} \right)} \right) = \boldsymbol{\mathcal{A}}\left( t \right)\text{var} \left( {\left( {\prod\limits_{j = t - m + 1}^{t - 1} {\boldsymbol{\mathcal{A}}\left( j \right)} } \right)\boldsymbol{\mathcal{R}}\left( {t - m} \right)} \right)\boldsymbol{\mathcal{A}}\left( t \right)^{T} \\ & = \boldsymbol{\mathcal{A}}\left( t \right) \ldots \boldsymbol{\mathcal{A}}\left( {t - m + 1} \right)\text{var} \left( {\boldsymbol{\mathcal{R}}\left( {t - m} \right)} \right)\boldsymbol{\mathcal{A}}\left( {t - m + 1} \right)^{T} \ldots \boldsymbol{\mathcal{A}}\left( t \right)^{T} \\ \end{aligned} $$

where \( \text{var} \left( {\boldsymbol{\mathcal{R}}\left( {t - m} \right)} \right) = \boldsymbol{\mathcal{W}}\left( {t - m} \right)\text{var} \left( {{\mathbf{R}}\left( {t - m} \right)} \right)\boldsymbol{\mathcal{W}}\left( {t - m} \right)^{T} \), \( m = 1, \ldots ,t \). Since, in the noise-free case, \( \prod\limits_{k = 0}^{t} {\boldsymbol{\mathcal{A}}\left( k \right)} {\mathbf{X}}\left( 0 \right) \) converges as \( t \to \infty \), for any fixed constant \( m \) we always have:

$$ \begin{aligned} \mathop {\lim }\limits_{t \to \infty } \text{var} \left( {y\left( {t,m} \right)} \right) & = \mathop {\lim }\limits_{t \to \infty } \boldsymbol{\mathcal{A}}\left( t \right) \ldots \boldsymbol{\mathcal{A}}\left( {t - m + 1} \right)\text{var} \left( {\boldsymbol{\mathcal{R}}\left( {t - m} \right)} \right)\boldsymbol{\mathcal{A}}\left( {t - m + 1} \right)^{T} \ldots \boldsymbol{\mathcal{A}}\left( t \right)^{T} \\ & = \mathop {\lim }\limits_{t \to \infty } \left( {{\mathbf{V}}_{1} \left( m \right),{\mathbf{V}}_{2} \left( m \right), \ldots {\mathbf{V}}_{n} \left( m \right)} \right)\boldsymbol{\mathcal{A}}\left( {t - m + 1} \right)^{T} \ldots \boldsymbol{\mathcal{A}}\left( t \right)^{T} \\ & = \mathop {\lim }\limits_{t \to \infty } \left( {\boldsymbol{\mathcal{A}}\left( t \right) \ldots \boldsymbol{\mathcal{A}}\left( {t - m + 1} \right)\left( {{\mathbf{V}}_{1} \left( m \right),{\mathbf{V}}_{2} \left( m \right), \ldots {\mathbf{V}}_{n} \left( m \right)} \right)^{T} } \right)^{T} = \left( {\zeta \left( m \right)} \right)_{n \times n} \\ \end{aligned} $$

where \( {\mathbf{V}}_{i} \left( m \right) = \left( {v_{i} , \ldots ,v_{i} } \right)^{T} \) is a constant vector depending on m, and \( \left( {\zeta \left( m \right)} \right)_{n \times n} \) is the constant matrix all of whose entries equal \( \zeta \left( m \right) > 0 \). Hence, since the noise vectors at different times are mutually independent:

$$ \mathop {\lim }\limits_{t \to \infty } \left[ {\text{var} \left( {{\mathbf{Y}}\left( t \right)} \right)} \right]_{i} = \mathop {\lim }\limits_{t \to \infty } \left[ {\text{var} \left( {\sum\limits_{m = 0}^{t} {\left( {y\left( {t,m} \right)} \right)} } \right)} \right]_{i} = \mathop {\lim }\limits_{t \to \infty } \sum\limits_{m = 0}^{t} {\left[ {\text{var} \left( {y\left( {t,m} \right)} \right)} \right]}_{i} = \infty $$

□

In fact, as \( t \to \infty \), \( {\mathbf{B}}\left( t \right) = \prod\limits_{k = 0}^{t} {\boldsymbol{\mathcal{A}}\left( k \right)} {\mathbf{X}}\left( 0 \right) \) eventually reaches a consensus state. Similarly, for any fixed constant \( m \), \( \text{var} \left( {y(t,m)} \right) \) tends to a nonzero constant as \( t \to \infty \), and \( \text{var} \left( {{\mathbf{Y}}\left( t \right)} \right) \) is precisely the infinite series accumulated from these terms, so it does not converge; i.e. the consensus protocol (6) is not noise controllable.
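The divergence in Lemma 1 is easy to observe numerically. The following Monte Carlo sketch (fixed ring topology, illustrative σ and trial count, as in the earlier sketches) estimates the per-component variance of X(T) for growing horizons T; the estimates grow roughly linearly with T rather than levelling off:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, trials = 4, 0.1, 1000
P = np.roll(np.eye(n), 1, axis=1)
L = 0.25 * (np.eye(n) - P)
A = np.eye(n) - L
W = -L - np.diag(np.diag(-L))
x0 = np.array([1.0, 2.0, 3.0, 4.0])

for T in (50, 200, 800):
    finals = np.empty((trials, n))
    for k in range(trials):
        x = x0.copy()
        for _ in range(T):
            x = A @ x + W @ rng.normal(0.0, sigma, size=n)
        finals[k] = x
    # Sample estimate of [var(X(T))]_i, averaged over the components i.
    print(T, finals.var(axis=0).mean())
```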

4 Noise Suppression Discrete Linear Consensus Protocol (NS-DLCP)

We reconstruct the state transition matrix \( \boldsymbol{\mathcal{A}}\left( t \right) \) as follows. Let \( \boldsymbol{\mathcal{L}}_{\varepsilon } \left( t \right) = \varepsilon \left( t \right)\boldsymbol{\mathcal{L}}\left( t \right) \), where \( \varepsilon \left( t \right):{\mathbb{R}}_{ + } \to {\mathbb{R}}_{ + } \) is a function of \( t \) with \( \varepsilon \left( t \right) > 0 \) and \( \varepsilon \left( t \right) \to 0 \) as \( t \to \infty \); we call \( \varepsilon \left( t \right) \) the noise suppression function. Let \( \boldsymbol{\mathcal{A}}_{\varepsilon } \left( t \right) = \boldsymbol{\mathcal{I}}_{n} - \varepsilon \left( t \right)\boldsymbol{\mathcal{L}}\left( t \right) \). Replacing \( \boldsymbol{\mathcal{A}}\left( t \right) \) in (2) with \( \boldsymbol{\mathcal{A}}_{\varepsilon } \left( t \right) \), we get:

$$ {\mathbf{X}}\left( {t + 1} \right) = \boldsymbol{\mathcal{A}}_{\varepsilon } \left( t \right){\mathbf{X}}\left( t \right) $$
(7)

We call (7) the Noise Suppression Discrete Linear Consensus Protocol (NS-DLCP). Rewriting (7) as the relation between \( {\mathbf{X}}\left( {t + 1} \right) \) and the initial state \( {\mathbf{X}}\left( 0 \right) \), and taking into account the noise carried by each agent, (7) becomes:

$$ {\mathbf{X}}\left( {t + 1} \right) = \prod\limits_{k = 0}^{t} {\boldsymbol{\mathcal{A}}_{\varepsilon } \left( k \right)} {\mathbf{X}}\left( 0 \right) + \sum\limits_{m = 0}^{t} {\left( {\left( {\prod\limits_{j = t - m + 1}^{t} {\boldsymbol{\mathcal{A}}_{\varepsilon } \left( j \right)} } \right)\boldsymbol{\mathcal{W}}_{\varepsilon } \left( {t - m} \right){\mathbf{R}}\left( {t - m} \right)} \right)} $$
(8)

Similarly, with \( \boldsymbol{\mathcal{W}}_{\varepsilon } \left( t \right) = - \boldsymbol{\mathcal{L}}_{\varepsilon } \left( t \right) - diag\left( { - \boldsymbol{\mathcal{L}}_{\varepsilon } \left( t \right)} \right) = \varepsilon \left( t \right)\boldsymbol{\mathcal{W}}\left( t \right) \), let \( \boldsymbol{\mathcal{R}}_{\varepsilon } \left( {t - m} \right) = \boldsymbol{\mathcal{W}}_{\varepsilon } \left( {t - m} \right){\mathbf{R}}\left( {t - m} \right) \), \( {\mathbf{Y}}_{\varepsilon } \left( t \right) = \sum\limits_{m = 0}^{t} {\left( {\prod\limits_{j = t - m + 1}^{t} {\boldsymbol{\mathcal{A}}_{\varepsilon } \left( j \right)} } \right)\boldsymbol{\mathcal{R}}_{\varepsilon } \left( {t - m} \right)} \), and \( {\mathbf{B}}_{\varepsilon } \left( t \right) = \prod\limits_{k = 0}^{t} {\boldsymbol{\mathcal{A}}_{\varepsilon } \left( k \right)} {\mathbf{X}}\left( 0 \right) \); then (8) simplifies to:

$$ {\mathbf{X}}\left( {t + 1} \right) = {\mathbf{B}}_{\varepsilon } \left( t \right) + {\mathbf{Y}}_{\varepsilon } \left( t \right) $$
(9)
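A sketch of the NS-DLCP update is given below. The choice ε(t) = (t + 1)^{−0.7} is one illustrative admissible suppression function (its order lies strictly between t^{−0.5} and t^{−1}, as Theorem 1 below requires); the topology and σ are the same assumptions as before:

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 4, 0.1
P = np.roll(np.eye(n), 1, axis=1)
L = 0.25 * (np.eye(n) - P)            # fixed weighted Laplacian (illustrative)

def eps(t):
    """Noise suppression function; exponent 0.7 lies in the admissible range (0.5, 1)."""
    return (t + 1.0) ** -0.7

def ns_dlcp_step(x, t):
    """One NS-DLCP step: X(t+1) = A_eps(t) X(t) + W_eps(t) R(t)."""
    Le = eps(t) * L                   # L_eps(t) = eps(t) L(t)
    Ae = np.eye(n) - Le               # A_eps(t) = I_n - L_eps(t)
    We = -Le - np.diag(np.diag(-Le))  # W_eps(t) = eps(t) W(t)
    return Ae @ x + We @ rng.normal(0.0, sigma, size=n)

x = np.array([1.0, 2.0, 3.0, 4.0])
for t in range(1000):
    x = ns_dlcp_step(x, t)
print(x)   # fluctuates in a bounded band around the consensus value
```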

Lemma 2:

Suppose the consensus protocol (2) converges to a consensus state \( {\mathbf{X}}^{*} \) under noise-free conditions. If the noise suppression function \( \varepsilon \left( t \right) \) is an infinitesimal of lower order than \( t^{ - 1} \), then \( \mathop {\lim\limits}_{t \to \infty } {\mathbf{B}}_{\varepsilon } \left( t \right) = {\mathbf{X}}^{*} \).

Proof:

Consider (9) in the noise-free case, where \( {\mathbf{X}}\left( {t + 1} \right) = {\mathbf{B}}_{\varepsilon } \left( t \right) \). From the conclusion in [11] we know that \( \left\| {{\mathbf{X}}\left( {t + 1} \right) - {\mathbf{X}}^{*} } \right\| \le \mu_{\varepsilon 2} \left( t \right)\left\| {{\mathbf{X}}\left( t \right) - {\mathbf{X}}^{*} } \right\| \) for any given \( t \), where \( \mu_{\varepsilon 2} \left( t \right) \) is the second largest eigenvalue of the matrix \( \frac{1}{2}\left( {\boldsymbol{\mathcal{A}}_{\varepsilon } \left( t \right) + \boldsymbol{\mathcal{A}}_{\varepsilon } \left( t \right)^{T} } \right) \). Let \( \lambda_{2} \left( t \right) \) be the second smallest eigenvalue of \( \frac{1}{2}\left( {\boldsymbol{\mathcal{L}}\left( t \right) + \boldsymbol{\mathcal{L}}\left( t \right)^{T} } \right) \); obviously, \( \mu_{\varepsilon 2} \left( t \right) = 1 - \varepsilon \left( t \right)\lambda_{2} \left( t \right) \), thus:

$$ \left\| {{\mathbf{B}}_{\varepsilon } \left( t \right) - {\mathbf{X}}^{*} } \right\| \le \prod\limits_{k = 1}^{t} {\left( {1 - \varepsilon \left( k \right)\lambda_{2} \left( k \right)} \right)} \left\| {{\mathbf{X}}\left( 0 \right) - {\mathbf{X}}^{*} } \right\| $$
(10)

Let \( \lambda_{2}^{*} \) be the smallest of the second smallest eigenvalues of \( \frac{1}{2}\left( {\boldsymbol{\mathcal{L}}\left( t \right) + \boldsymbol{\mathcal{L}}\left( t \right)^{T} } \right) \) over all \( t \). From the condition \( O\left( {\varepsilon (t)} \right) < O\left( {t^{ - 1} } \right) \) we can deduce that \( \mathop {\lim\limits}_{t \to \infty } \left( {1 - \varepsilon \left( t \right)\lambda_{2}^{*} } \right)^{t} = 0 \). Moreover, since \( \varepsilon \left( k \right) > 0 \) and \( \varepsilon \left( k \right) \ge \varepsilon \left( t \right) \) for \( k \le t \) (the noise suppression function being decreasing), for all \( t \) we have \( 0 \le \prod\limits_{k = 1}^{t} {\left( {1 - \varepsilon \left( k \right)\lambda_{2} \left( k \right)} \right)} \le \left( {1 - \varepsilon \left( t \right)\lambda_{2}^{*} } \right)^{t} \). Letting \( t \to \infty \), the squeeze theorem yields \( \mathop {\lim\limits}_{t \to \infty } \prod\limits_{k = 1}^{t} {\left( {1 - \varepsilon \left( k \right)\lambda_{2} \left( k \right)} \right)} = 0 \), which means:

$$ 0 \le \mathop {\lim\limits}_{t \to \infty } \left\| {{\mathbf{B}}_{\varepsilon } \left( t \right) - {\mathbf{X}}^{*} } \right\| \le \mathop {\lim }\limits_{t \to \infty } \prod\limits_{k = 1}^{t} {\left( {1 - \varepsilon (k)\lambda_{2} \left( k \right)} \right)} \left\| {{\mathbf{X}}(0) - {\mathbf{X}}^{*} } \right\| = 0 $$

Then we have \( \mathop {\lim\limits}_{t \to \infty } \left\| {{\mathbf{B}}_{\varepsilon } \left( t \right) - {\mathbf{X}}^{*} } \right\| = 0 \), i.e. \( \mathop {\lim\limits}_{t \to \infty } {\mathbf{B}}_{\varepsilon } \left( t \right) = {\mathbf{X}}^{*} \). □
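The product bound (10) can be checked numerically. The sketch below uses an assumed constant λ₂* = 0.25 and ε(t) = t^{−p} for two illustrative exponents p:

```python
import numpy as np

lam2 = 0.25                           # assumed second smallest eigenvalue
t = np.arange(1, 10**5, dtype=float)
for p in (0.7, 1.2):                  # eps(t) = t^-p
    prod = np.cumprod(1.0 - lam2 * t ** -p)
    print(p, prod[-1])
# p = 0.7 (lower order than t^-1): the product tends to 0, so B_eps(t) -> X*.
# p = 1.2 (higher order than t^-1): the product stalls at a positive value,
# so the deterministic part stops short of consensus.
```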

Lemma 3:

Suppose the consensus protocol (2) converges to a consensus state \( {\mathbf{X}}^{*} \) under noise-free conditions. If the noise suppression function \( \varepsilon \left( t \right) \) is an infinitesimal of higher order than \( t^{ - 0.5} \), then there exists a constant M such that \( \lim_{t \to \infty } \left[ {\text{var} \left( {{\mathbf{Y}}_{\varepsilon } \left( t \right)} \right)} \right]_{i} \le M \).

Proof:

Let \( \left\| \cdot \right\|_{\infty } \) denote the row-sum norm of a matrix. Noting that \( \boldsymbol{\mathcal{W}}_{\varepsilon } \left( t \right) = \varepsilon \left( t \right)\boldsymbol{\mathcal{W}}\left( t \right) \) and that the row sums of \( \boldsymbol{\mathcal{W}}\left( t \right) \) are at most 1, we investigate the row-sum norm of the covariance matrix of \( \boldsymbol{\mathcal{R}}_{\varepsilon } \left( {t - m} \right) \):

$$ \begin{aligned} & \left\| {\text{var} \left( {\boldsymbol{\mathcal{R}}_{\varepsilon } (t - m)} \right)} \right\|_{\infty } = \left\| {\boldsymbol{\mathcal{W}}_{\varepsilon } \left( {t - m} \right)\text{var} \left( {{\mathbf{R}}(t - m)} \right)\boldsymbol{\mathcal{W}}_{\varepsilon } \left( {t - m} \right)^{T} } \right\|_{\infty } \\ & = \varepsilon^{2} \left( {t - m} \right)\left\| {\boldsymbol{\mathcal{W}}\left( {t - m} \right)\text{var} \left( {{\mathbf{R}}(t - m)} \right)\boldsymbol{\mathcal{W}}\left( {t - m} \right)^{T} } \right\|_{\infty } \\ & \le \varepsilon^{2} \left( {t - m} \right)\left\| {\boldsymbol{\mathcal{W}}\left( {t - m} \right)} \right\|_{\infty } \left\| {\text{var} \left( {{\mathbf{R}}(t - m)} \right)} \right\|_{\infty } \left\| {\boldsymbol{\mathcal{W}}\left( {t - m} \right)^{T} } \right\|_{\infty } \\ & \le \varepsilon^{2} \left( {t - m} \right)\left\| {\text{var} \left( {{\mathbf{R}}(t - m)} \right)} \right\|_{\infty } \\ \end{aligned} $$

Let \( y_{\varepsilon } (t,m) = \left( {\prod\limits_{j = t - m + 1}^{t} {\boldsymbol{\mathcal{A}}_{\varepsilon } \left( j \right)} } \right)\boldsymbol{\mathcal{R}}_{\varepsilon } \left( {t - m} \right) \), so that \( {\mathbf{Y}}_{\varepsilon } \left( t \right) = \sum\limits_{m = 0}^{t} {y_{\varepsilon } (t,m)} \). Examining the norm of the covariance matrix of \( y_{\varepsilon } (t,m) \), we have:

$$ \begin{aligned} & \left\| {\text{var} \left( {y_{\varepsilon } (t,m)} \right)} \right\|_{\infty } = \left\| {\boldsymbol{\mathcal{A}}_{\varepsilon } \left( t \right) \ldots \boldsymbol{\mathcal{A}}_{\varepsilon } \left( {t - m + 1} \right)\text{var} \left( {\boldsymbol{\mathcal{R}}_{\varepsilon } \left( {t - m} \right)} \right)\boldsymbol{\mathcal{A}}_{\varepsilon } \left( {t - m + 1} \right)^{T} \ldots \boldsymbol{\mathcal{A}}_{\varepsilon } \left( t \right)^{T} } \right\|_{\infty } \\ & \le \left\| {\boldsymbol{\mathcal{A}}_{\varepsilon } \left( t \right)} \right\|_{\infty } \ldots \left\| {\boldsymbol{\mathcal{A}}_{\varepsilon } \left( {t - m + 1} \right)} \right\|_{\infty } \left\| {\text{var} \left( {\boldsymbol{\mathcal{R}}_{\varepsilon } \left( {t - m} \right)} \right)} \right\|_{\infty } \left\| {\boldsymbol{\mathcal{A}}_{\varepsilon } \left( {t - m + 1} \right)^{T} } \right\|_{\infty } \ldots \left\| {\boldsymbol{\mathcal{A}}_{\varepsilon } \left( t \right)^{T} } \right\|_{\infty } \\ & \le \left\| {\text{var} \left( {\boldsymbol{\mathcal{R}}_{\varepsilon } \left( {t - m} \right)} \right)} \right\|_{\infty } \le \varepsilon^{2} \left( {t - m} \right)\left\| {\text{var} \left( {{\mathbf{R}}\left( {t - m} \right)} \right)} \right\|_{\infty } \\ \end{aligned} $$

In fact, \( \left[ {\text{var} \left( {y_{\varepsilon } \left( {t,m} \right)} \right)} \right]_{i} \) is exactly the ith diagonal element of the covariance matrix \( \text{var} \left( {y_{\varepsilon } \left( {t,m} \right)} \right) \). Denote \( \rho = \mathop {\sup }\limits_{t} \left\| {\text{var} \left( {{\mathbf{R}}\left( t \right)} \right)} \right\|_{\infty } \); then clearly \( \left[ {\text{var} \left( {y_{\varepsilon } (t,m)} \right)} \right]_{i} \le \varepsilon^{2} \left( {t - m} \right)\rho \), and therefore

$$ \left[ {\text{var} \left( {{\mathbf{Y}}_{\varepsilon } (t)} \right)} \right]_{i} = \sum\limits_{m = 0}^{t} {\left[ {\text{var} \left( {y_{\varepsilon } (t,m)} \right)} \right]_{i} } \le \rho \left( {\varepsilon^{2} (t) + \ldots + \varepsilon^{2} (0)} \right) = \rho \sum\limits_{m = 0}^{t} {\varepsilon^{2} \left( m \right)} $$

Since \( O\left( {\varepsilon (t)} \right) > O\left( {t^{ - 0.5} } \right) \), the series \( \rho \sum\limits_{m = 0}^{t} {\varepsilon^{2} \left( m \right)} \) converges as \( t \to \infty \). Let \( \mathop {\lim\limits}_{t \to \infty } \rho \sum\limits_{m = 0}^{t} {\varepsilon^{2} \left( m \right)} = M \); then \( \lim_{t \to \infty } \left[ {\text{var} \left( {{\mathbf{Y}}_{\varepsilon } (t)} \right)} \right]_{i} \le M \). □
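The convergence of Σ ε²(m), on which Lemma 3 rests, is likewise easy to check numerically; the exponents are again illustrative:

```python
import numpy as np

m = np.arange(1, 10**5, dtype=float)
for p in (0.4, 0.7):                  # eps(t) = t^-p
    partial = np.cumsum(m ** (-2.0 * p))
    print(p, partial[999], partial[-1])
# p = 0.4 (lower order than t^-0.5): the partial sums keep growing (variance unbounded).
# p = 0.7 (higher order than t^-0.5): the partial sums flatten near a finite limit.
```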

From Lemmas 2 and 3, it is easy to obtain:

Theorem 1:

Suppose the consensus protocol (2) converges to a consensus state \( {\mathbf{X}}^{*} \) under noise-free conditions. If the order of \( \varepsilon \left( t \right) \) satisfies \( O\left( {t^{ - 0.5} } \right) < O\left( {\varepsilon (t)} \right) < O\left( {t^{ - 1} } \right) \), then the NS-DLCP (9) is noise controllable.
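The three regimes separated by Theorem 1 (and summarized in the conclusions below) can be compared empirically. In the following Monte Carlo sketch the exponents 0.4, 0.7, and 1.2 in ε(t) = (t + 1)^{−p} are illustrative choices: the first decays too slowly, the second is admissible, and the third decays too fast.

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma, T, trials = 4, 0.1, 1000, 300
I = np.eye(n)
P = np.roll(I, 1, axis=1)
L = 0.25 * (I - P)
x0 = np.array([1.0, 2.0, 3.0, 4.0])   # noise-free consensus value is 2.5

for p in (0.4, 0.7, 1.2):             # eps(t) = (t+1)^-p
    finals = np.empty((trials, n))
    for k in range(trials):
        x = x0.copy()
        for t in range(T):
            e = (t + 1.0) ** -p
            Ae = I - e * L
            We = e * (np.diag(np.diag(L)) - L)   # eps(t) * W(t)
            x = Ae @ x + We @ rng.normal(0.0, sigma, size=n)
        finals[k] = x
    spread = (finals.max(axis=1) - finals.min(axis=1)).mean()
    print(p, spread, finals.var(axis=0).mean())
# p = 0.4: agents agree, but the variance grows without bound as T increases.
# p = 0.7: agents agree and the variance stays bounded -- noise controllable.
# p = 1.2: variance bounded, but a residual disagreement never dies out.
```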

5 Conclusion

Based on the above theoretical results and discussion, Table 1 summarizes the main conclusions of this paper.

Table 1. Main conclusions of this paper

Our main conclusions are:

  I.

    If ε(t) = 1 (i.e., no noise suppression is applied) or O(ε(t)) ≤ O(t^{−0.5}), the deterministic part Bε(t) of the linear consensus protocol (9) can converge to the consensus state vector \( {\mathbf{X}}^{*} \), but the variance of its random part Yε(t) is unbounded. In this case, the linear consensus protocol is not noise controllable.

  II.

    When O(ε(t)) ≥ O(t^{−1}), the variance of the random part Yε(t) is bounded, but the deterministic part Bε(t) of the linear consensus protocol cannot converge to the consensus state vector \( {\mathbf{X}}^{*} \); under these circumstances, the linear consensus protocol is likewise not noise controllable.

  III.

    If O(t^{−0.5}) < O(ε(t)) < O(t^{−1}), Bε(t) converges to the consensus state vector \( {\mathbf{X}}^{*} \) and the variance of Yε(t) is bounded, so the linear consensus protocol is noise controllable. In this case, every agent's state tends to a normal distribution centered at x*.