Introduction

Phased array (PHA) imaging, a conventional method in medical ultrasound, is a simple technique that provides a high signal-to-noise ratio (SNR) because all elements are used in transmit to focus in a specific direction [1]. The reflected echoes are recorded by the receive sensors as RF signals, which are then beamformed to synthesize receive focusing. The conventional beamformer is the delay-and-sum (DAS) beamformer [2], in which the proper samples of the RF signals are selected and averaged to provide dynamic receive focusing. Smaller subarrays can be used as transmitters, which extends the depth of focus and increases the frame rate [3, 4]. Karaman [5] suggested phased subarray (PSA) imaging, in which a central subarray is fired multiple times to sweep out the whole imaging region. Subsequent developments improved the capabilities of PSA imaging [6,7,8]: the transmit subarray is shifted across the whole array, which extends the effective aperture length compared to the earlier PSA. This method uses transmit subarrays to scan the medium based on the principles of PHA and then coherently compounds the beamformed responses of the subarrays. Although the capabilities of PSA imaging are comparable to those of PHA imaging at the focus point, image quality degrades away from the focal points [7]. Virtual extension of the receive aperture length in PSA imaging improves the receive beamformer's performance if a pixel-based focusing (PBF) method is used [9]; PBF performs dynamic receive focusing at the pixel level [10]. If overlapped subarrays scan the whole medium and the beamformed responses are coherently compounded, the sidelobes of the point spread function (PSF) are lowered, improving contrast compared to PHA [11].

The main purpose of this study is to find a proper compounding of the subarray images in PSA that achieves better resolution and contrast than PHA. Each subarray yields an image reconstructed on the principles of PHA, which includes two steps: (1) steering the transmit beam in different directions and (2) reconstructing the image points within the field of view of the transmit beam for each firing. Adaptive averaging of the resulting subarray images is proposed, which allows the transmit subarrays to contribute differently to the final image. Finding the proper weights for coherent compounding of the subarray images is an effective tool for designing the overall beam. For this purpose, adaptive apodization (ADAP) is proposed, which determines the proper weights for averaging the subarray images. These weights are calculated such that the energy of the beamformer output is minimized. Recently, the minimum variance (MV) beamformer was studied as an adaptive method to improve the resolution in PHA [12,13,14]. In MV, a weight vector whose length equals the number of observed samples (N) must be calculated [15], and the flops needed are proportional to \(N^{3}\) [16]. MV therefore improves resolution at the expense of a huge computational load (CL), which limits the applicability of this method.

Several researchers have tried to solve the CL problem associated with the MV method. The main cost lies in computing the covariance matrix and its inverse, which requires \(O(N^{3})\) flops. Modeling the covariance matrix as a Toeplitz matrix helps to calculate the inverse with a lower CL [16]. Beam-space transformation is another approach that maps the received signal onto a new space [17]. This transformation decreases the dimension of the covariance matrix, so fewer flops are needed to determine its inverse; the main CL in the beam-space method is then the calculation of the covariance matrix itself, which is on the order of \(O(N^{2})\). A GPU implementation of the beam-space method was later proposed [18]. QR decomposition is another idea that decreases the CL to \(O(N^{2})\) [19]. Applying MV beamformers across subapertures can also decrease the CL, owing to the smaller signal length. Hasegawa [20] proposed using a DAS beamformer across subapertures, with an APES beamformer applied to the DAS outputs. In all of the abovementioned methods, the covariance matrix must be calculated first, which itself consumes \(O(N^{2})\) flops. This paper proposes a new optimization method that eliminates the need to calculate the covariance matrix. It is not claimed that the proposed method matches the performance demonstrated in the previous studies, but an improvement in resolution at the expense of only \(O(N)\) flops is significant enough to motivate its use.

ADAP is a constrained MV problem in which only two coefficients need to be calculated during the minimization process. This far smaller computational cost compared to MV makes the ADAP method more applicable.

Using a DAS beamformer to reconstruct each subarray image and compounding the subarray images through the ADAP method yields a combination of DAS and an adaptive method. This hybrid method leads to better resolution and contrast than conventional PHA, as validated by simulation and experimental results. The remainder of the paper is organized as follows: “The proposed PSA imaging using ADAP” explains the known theory for image reconstruction in PSA where overlapped subarrays are assumed to be transmitters, and then explains how the ADAP method adjusts the contribution of each subarray. Results and discussion are provided in Sections III and IV, respectively. Finally, the conclusion of this study is presented in Section V.

The proposed PSA imaging using ADAP

In conventional PSA imaging (CPSA), each subarray scans the whole medium and RF data are recorded. The proper samples related to each image point are selected and averaged to form the subarray image, as shown in Fig. 1a. This configuration uses Nt subarrays to scan the medium, which substantially decreases the frame rate. If the subarrays are fully overlapped, i.e., adjacent subarrays differ by only one element, the frame-rate reduction is even greater. To solve this problem, only two subarrays, at the left and right sides of the array, are used to scan the medium. In the proposed PSA (PPSA), each scan requires multiple firings in different directions; the angular spacing between two sequential firings is set to 0.1 degree in the simulations. For each image point, proper delays are applied to the recorded RF data, which provides a vector of Nr samples for each subarray. The redundancy in the data capture makes it possible to extract the required samples of the intermediate subarrays from the recorded samples, as shown in Fig. 1b.

Fig. 1
figure 1

Arrangement of transmit subarrays in a the conventional PSA and b the proposed PSA

Suppose there is a target at point (x, z), as shown in Fig. 2. The time of flight (TOF) for transmit steering of the initial subarray to point (x, z) and sensing of the reflected echo by the rth element of the array is \(\tau_{r,1} (x,z) = \tau_{tp,1} (x,z) + \tau_{pr,r} (x,z)\). If the ith subarray is used for transmit steering to point (x, z), the TOF equals \(\tau_{{r^{\prime},i}} (x,z) = \tau_{tp,i} (x,z) + \tau_{{pr,r^{\prime}}} (x,z)\), as shown in Fig. 2. If the source point (x, z) is located in the far field of the array, we have \(\tau_{tp,i} (x,z) - \tau_{tp,1} (x,z) = (i - 1)d\sin (\theta )/c\), where d is the inter-element spacing of the array, c is the speed of sound, and \(\theta\) is the direction of transmit steering. If \(\tau_{pr,r} (x,z) - \tau_{{pr,r^{\prime}}} (x,z) = (i - 1)d\sin (\theta )/c\), then \(\tau_{r,1} (x,z) = \tau_{{r^{\prime},i}} (x,z)\). This equality is satisfied if \(r = r^{\prime} + i - 1\).
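The far-field delay relation can be checked numerically. The sketch below is an illustrative check, not part of the original method; the geometry, the subarray index, and the idea of modeling each subarray by a phase centre shifted by \((i-1)d\) are our assumptions:

```python
import numpy as np

# Illustrative far-field check: the delay difference between the first
# subarray and the i-th subarray (phase centre shifted by (i-1)d) should
# approach (i-1) d sin(theta) / c for a distant point.
c = 1480.0                   # speed of sound [m/s], as in the simulations
d = c / 3.3e6 / 2            # half-wavelength pitch at 3.3 MHz [m]
i = 10                       # index of the shifted subarray (assumption)

R, theta = 0.5, np.deg2rad(20.0)             # point deep in the far field
p = np.array([R * np.sin(theta), R * np.cos(theta)])

c1 = np.array([0.0, 0.0])                    # phase centre, subarray 1
ci = np.array([(i - 1) * d, 0.0])            # phase centre, subarray i

exact = (np.linalg.norm(p - c1) - np.linalg.norm(p - ci)) / c
approx = (i - 1) * d * np.sin(theta) / c
print(abs(exact - approx) / exact)           # relative error well below 1%
```

Moving the point closer to the array makes the mismatch grow, which is exactly the near-field degradation discussed in the Results section.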

Fig. 2
figure 2

TOF calculation with the first or the ith subarray as transmitter

Therefore, it can be concluded that the RF signal related to the ith subarray can be extracted from RFinitial if the condition \(r = r^{\prime} + i - 1\) is met. RFinitial is the recorded signal after scanning the medium by the initial subarrays. For \(r^{\prime} > N_{r} + 1 - i\), the same argument is used to extract RFi(r) from RFlast, where RFlast means the RF signal related to the final subarray imaging. This argument is the basis of Eq. (1) to assign a value to each subarray based on the recorded RF signals:

$$s_{i} (x,z) = \sum\limits_{r = 1}^{N_{r}} {{\text{RF}}_{i} (r,t + \tau_{r,i} (x,z))}, \qquad {\text{RF}}_{i} (r,t + \tau_{r,i} (x,z)) = \left\{ {\begin{array}{ll} {{\text{RF}}_{\text{initial}} (r + i - 1,\,t + \tau_{r,1} (x,z))} & {1 \le r \le N_{r} + 1 - i} \\ {{\text{RF}}_{\text{last}} (r + i - N_{t},\,t + \tau_{r,N_{t}} (x,z))} & {N_{r} + 1 - i < r \le N_{r}} \\ \end{array}} \right.$$
(1)

RFi, determined from RFinitial and RFlast, contains the samples that would be obtained if the ith subarray were used for scanning. In other words, only two disjoint subarrays scan the medium, but the subarray images (si (x, z)) corresponding to the fully overlapped configuration can still be reconstructed.
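The index mapping of Eq. (1) can be sketched as follows. This is a simplified illustration (the function names are ours, and the inputs are assumed to be the per-pixel, already-delayed sample vectors rather than raw RF traces):

```python
import numpy as np

def subarray_samples(rf_initial, rf_last, i, n_t):
    """Per-pixel sample vector of the i-th subarray, rebuilt from the two
    recorded scans via the index mapping of Eq. (1).

    rf_initial, rf_last : 1-D arrays of length n_r holding the samples
    already delayed for the current image point (simplifying assumption).
    i : subarray index, 1 <= i <= n_t (with n_t <= n_r + 1 assumed).
    """
    n_r = rf_initial.size
    rf_i = np.empty(n_r, dtype=rf_initial.dtype)
    for r in range(1, n_r + 1):                    # 1-based receive index
        if r <= n_r + 1 - i:
            rf_i[r - 1] = rf_initial[r + i - 2]    # element r + i - 1, 0-based
        else:
            rf_i[r - 1] = rf_last[r + i - n_t - 1] # element r + i - N_t, 0-based
    return rf_i

def subarray_image_point(rf_initial, rf_last, n_t):
    """Beamformed value s_i for every assumed subarray (plain DAS sum)."""
    return np.array([subarray_samples(rf_initial, rf_last, i, n_t).sum()
                     for i in range(1, n_t + 1)])
```

For i = 1 the mapping returns RFinitial unchanged, and for i = Nt it returns RFlast, consistent with the two subarrays that actually fired.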

Proposed ADAP method

Stacking these observed signals (si (x, z)) forms a vector whose number of entries equals the number of assumed subarrays (Nt). The goal is to determine the contribution of the subarrays to the final image point based on the observation vector (\(\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{S} = [s_{1} , \ldots ,s_{{N_{t} }} ]^{\text{T}}\), where T stands for transpose). Minimizing the variance of the output leads to a weight vector, known as the MV solution, which is useful for adaptive compounding of subarray images but comes at the expense of a huge CL. We try to minimize the variance of the output with much less CL.

The cost function (\(J(W)\)) of the MV problem relates the weight vector, the observation vector, and a Lagrange multiplier (\(\gamma\)) as follows:

$$\mathop {\min }\limits_{W = [w_{1} , \ldots ,w_{{N_{t} }} ]} W^{\text{H}} \underline{S} \,\underline{S}^{\text{H}} W \quad {\text{s.t.}} \quad \sum\limits_{i = 1}^{{N_{t} }} {w_{i} } = 1\;\; \Rightarrow \;\;J(W) = W^{\text{H}} \underline{S} \,\underline{S}^{\text{H}} W + \gamma \left( {1 - \sum\limits_{i = 1}^{{N_{t} }} {w_{i} } } \right).$$
(2)

If the MV problem is solved iteratively using the steepest-descent method, the weight vector at iteration n + 1 (\(W(n + 1)\)) is related to the gradient of the cost function (\(\nabla_{W} J(W)\)) and the previous weight vector (\(W(n)\)) as follows:

$$W^{\text{H}} (n + 1) = W^{\text{H}} (n) - \zeta \mathop \nabla \limits_{W} J(W) = W^{\text{H}} (n) - \zeta (W^{\text{H}} \underline{S} \underline{S}^{\text{H}} - \gamma \vec{1}).$$
(3)

\(\zeta\) is the step size for each iteration. Considering the weight vector at the initial step to be equal to \(\vec{1}\), the weight vector at the next step equals:

$$W^{\text{H}} (1) = \vec{1} \;\Rightarrow\; W^{\text{H}} (2) = \vec{1} - \zeta \left( {\vec{1}\,\underline{S} \,\underline{S}^{\text{H}} - \gamma \vec{1}} \right) = - \zeta \left( {\sum\limits_{i} {s_{i} } } \right)\underline{S}^{\text{H}} + (1 + \zeta \gamma )\vec{1} = a^{*} \underline{S}^{\text{H}} + b^{*} \vec{1},$$
(4)

where * denotes complex conjugation and H the Hermitian transpose. As can be seen, the weight vector at the second step is an affine function of the observation vector. To assign proper values to “a” and “b” in (4), we propose to minimize the energy of the inner product of these two vectors (\(W^{\text{H}} \underline{S}\)), as shown in (5):

$$\min\limits_{W = [{w_{1}} ,\ldots, {w_{N_{t}}} ]} W^{\text{H}} \underline{S} \,{\underline{S}^{\text{H}}} W \quad s.t. \quad w_{i} (x,z) = a \times s_{i} (x,z) + b, {\sum\limits_{i = 1}^{N_{t}} } w_{i} = 1.$$
(5)

A Hilbert transform is also applied to the RF signals before solving the minimization problem. This step provides the analytic signal required by MV-type beamformers, which leads to better elimination of interference.

The sum of the apodization weights must also equal 1 to keep the desired signal unchanged. Using the method of Lagrange multipliers, the minimization problem in (5) can be solved as follows:

$$\mathop \nabla \limits_{a,b,\gamma } \left[ {(a\underline{S} + b\vec{1})^{\text{H}} \underline{S} \,\underline{S}^{\text{H}} (a\underline{S} + b\vec{1}) + \gamma \left( {1 - \sum\limits_{i = 1}^{{N_{t} }} {(as_{i} + b)} } \right)} \right] = 0.$$
(6)

Calculating the gradient (\(\mathop \nabla \limits_{a,b,\gamma }\)) gives three equations for finding the optimum a, b, and \(\gamma\). To simplify the equations, \(\beta\) is defined as the squared norm of the vector \(\underline{S}\), and \(\eta\) denotes the sum of its entries.

$$\partial J/\partial a = 0 \;\Rightarrow\; \beta^{2} a^{*} + \beta \eta b^{*} - \gamma \eta = 0.$$
(7)
$$\partial J/\partial b = 0 \;\Rightarrow\; \eta^{*} \beta a^{*} + \left| \eta \right|^{2} b^{*} - \gamma N_{t} = 0.$$
(8)
$$\partial J/\partial \gamma = 0 \;\Rightarrow\; a^{*} \eta^{*} + b^{*} N_{t} = 1.$$
(9)

After computing \(a\) and \(b\) using (7)–(9), the weighted average of the observed signals, \(\eta_1 (x,z)\), can be calculated by (10):

$$(a\underline{S} + b)^{\text{H}} \underline{S} = \eta_{1} (x,z).$$
(10)
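Because the system (7)–(9) is tiny, the whole ADAP solve reduces to a few scalar operations per pixel. Below is a minimal NumPy sketch, our own code rather than the authors' implementation; it uses the closed form derived in the next subsection (Eq. (16)), with the soft floor of Eq. (17) folded in as `gamma_floor`, and assumes a nonzero observation vector:

```python
import numpy as np

def adap_weights(s, gamma_floor=True):
    """Closed-form ADAP coefficients (a, b) for one observation vector s,
    i.e., the solution of the constrained problem in Eq. (5):
    w_i = a*s_i + b, sum(w_i) = 1, minimizing |W^H s|^2."""
    s = np.asarray(s, dtype=complex)
    n = s.size
    beta = np.vdot(s, s).real          # squared norm ||s||^2 (assumed > 0)
    eta = s.sum()                      # sum of the entries
    mu = beta - abs(eta) ** 2 / n      # incoherent-energy term (cf. Eq. 15)
    if gamma_floor:                    # soft threshold in the spirit of Eq. (17)
        mu = max(mu, 0.01 * beta)
    a = -np.conj(eta) / (n * mu)       # drives W^H s to zero when mu > 0
    b = (1 - a * eta) / n              # enforces sum(w_i) = 1
    return a, b

def adap_output(s):
    """Weighted average eta_1 of Eq. (10): (a s + b 1)^H s."""
    s = np.asarray(s, dtype=complex)
    a, b = adap_weights(s)
    return np.vdot(a * s + b, s)
```

For a fully coherent observation vector the floored \(\mu\) makes the output coincide with the plain average, while an incoherent vector is suppressed toward zero, matching the threshold behavior discussed in the interpretation below.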

Interpretation of the proposed weighted average

Suppose there is a uniformly spaced linear array of Nr elements. The desired signal of amplitude A plus an interference term Ii is recorded at each sensor:

$$s_{i} (x,z) = A(x,z) + I_{i} (x,z),\quad i = 1,2, \ldots ,N_{r}.$$
(11)

For simplicity, the coordinate (x, z) is omitted from the following formulas. The output of the DAS method is then simply:

$${\text{DAS}}:\frac{1}{{N_{r} }}\sum\nolimits_{i = 1}^{{N_{r} }} {s_{i} = } A + \frac{1}{{N_{r} }}\sum\nolimits_{i = 1}^{{N_{r} }} {I_{i} } .$$
(12)

Constraint \(w_{i} = a^{*} s_{i} + b^{*} = a^{*} A + a^{*} I_{i} + b^{*}\) leads to the following results:

$$\begin{aligned}{\sum\nolimits_{i = 1}^{N_{r} }} {w_{i}^{*}}{s_{i}} &= \sum\nolimits_{i = 1}^{{N_{r} }} {(aA^{*} + aI_{i}^{*} +b)(A + I_{i} )} = a\left| A \right|^{2} N_{r} \\& \quad +\, a\sum\nolimits_{i = 1}^{{N_{r} }} {\left| {I_{i} } \right|^{2} +aA^{*} \sum\nolimits_{i = 1}^{{N_{r} }} {I_{i} } } +aA\sum\nolimits_{i = 1}^{{N_{r} }} I_{i}^{*} \\& \quad +\,b\sum\nolimits_{i = 1}^{{N_{r} }} {(A + I_{i} )} \\ \end{aligned}$$
(13)

On the other hand, the next constraint, which requires the sum of the apodization weights W to equal 1, determines the value of b as follows:

$$\sum\nolimits_{i = 1}^{{N_{r} }} {w_{i} } = 1 = aN_{r} A^{*} + a\sum\nolimits_{i = 1}^{{N_{r} }} {I_{i}^{*} } + bN_{r} \Rightarrow b = \frac{{1 - aN_{r} A^{*} - a\sum\nolimits_{i = 1}^{{N_{r} }} {I_{i}^{*} } }}{{N_{r} }}.$$
(14)

Putting the value of b in (13) leads to:

$$\begin{aligned}\sum\nolimits_{{i = 1}}^{{N_{r} }} {w_{i}^{*} } s_{i} & = a\left| A \right|^{2} N_{r} + a\sum\nolimits_{{i = 1}}^{{N_{r} }} {\left| {I_{i} } \right|^{2} + aA^{*} \sum\nolimits_{{i = 1}}^{{N_{r} }} {I_{i} } } + aA\sum\nolimits_{{i = 1}}^{{N_{r} }} {I_{i}^{*} + \left( {\frac{{1 - aN_{r} A^{*} - a\sum\nolimits_{{i = 1}}^{{N_{r} }} {I_{i}^{*} } }}{{N_{r} }}} \right)\sum\nolimits_{{i = 1}}^{{N_{r} }} {(A + I_{i} )} } {\text{ }} \\ &= a\underbrace {{\left( {\sum\nolimits_{{i = 1}}^{{N_{r} }} {\left| {I_{i} } \right|^{2} } - \frac{{\left| {\sum\nolimits_{{i = 1}}^{{N_{r} }} {I_{i} } } \right|^{2} }}{{N_{r} }}} \right)}}_{\mu } + A + \frac{{\sum\nolimits_{{i = 1}}^{{N_{r} }} {I_{i} } }}{{N_{r} }} = a\mu + A + \frac{{\sum\nolimits_{{i = 1}}^{{N_{r} }} {I_{i} } }}{{N_{r} }}.\end{aligned}$$
(15)
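The collapse of the weighted sum in (13) into \(a\mu + A + \bar{I}\) can be verified numerically. A quick self-contained check (the values of a, A, and the interference are arbitrary; this is our own illustration of Eqs. (13)–(15)):

```python
import numpy as np

# Numerical check of Eq. (15): with w_i* = a s_i* + b and b fixed by
# Eq. (14), the weighted sum collapses to a*mu + A + mean(I).
rng = np.random.default_rng(1)
n = 8
a = 0.3 + 0.1j                         # arbitrary trial coefficient
amp = 1.5 + 0.2j                       # desired signal A
interf = rng.standard_normal(n) + 1j * rng.standard_normal(n)
s = amp + interf                       # Eq. (11)

b = (1 - a * n * np.conj(amp) - a * np.conj(interf).sum()) / n  # Eq. (14)
lhs = np.sum((a * np.conj(s) + b) * s)                          # Eq. (13)

mu = np.sum(np.abs(interf) ** 2) - np.abs(interf.sum()) ** 2 / n
rhs = a * mu + amp + interf.sum() / n                           # Eq. (15)
print(np.isclose(lhs, rhs))
```

The identity holds for any a, which is what makes the subsequent choice of a in (16) a pure minimization over the residual term \(a\mu\).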

Coefficient “a” is determined through a minimization process as follows:

$$a = - \frac{{A + \frac{{\sum\nolimits_{i = 1}^{{N_{r} }} {I_{i} } }}{{N_{r} }}}}{\mu }.$$
(16)

As can be seen, “a” is related to the coherency of the interference across the observation vector. A lower value of \(\mu\) means higher coherency of the observation vector, which leads to a higher value of “a”. Coefficient “b” is also proportional to “a”, as shown in (14), so a higher level of coherency of the observation signal leads to amplification of both coefficients “a” and “b”.

If \(\mu = \left( {\sum\nolimits_{{i = 1}}^{{N_{r} }} {\left| {I_{i} } \right|^{2} } - \frac{{\left| {\sum\nolimits_{{i = 1}}^{{N_{r} }} {I_{i} } } \right|^{2} }}{{N_{r} }}} \right)\), the quantity defined in (15), is equal to zero, coefficient “a” has no effect on the output, so the output \(\left| {\sum\nolimits_{i = 1}^{{N_{r} }} {w_{i}^{*} s_{i} } } \right|\) of the proposed weighted average equals the output of the DAS method. Otherwise, “a” is chosen such that \(\left| {\sum\nolimits_{i = 1}^{{N_{r} }} {w_{i}^{*} s_{i} } } \right|\) is exactly zero. The output of this minimization problem is therefore a hard threshold based on the value of \(\mu\). To achieve a soft threshold, \(\mu\) is replaced by a new value \(\gamma_1\), defined as:

$$\gamma_{1} = \left\{ {\begin{array}{ll} {0.01\sum\nolimits_{i = 1}^{{N_{r} }} {\left| {I_{i} } \right|^{2} } } & {\dfrac{{\left| {\sum\nolimits_{i = 1}^{{N_{r} }} {I_{i} } } \right|^{2} }}{{N_{r} }} \ge 0.99\sum\nolimits_{i = 1}^{{N_{r} }} {\left| {I_{i} } \right|^{2} } } \\ {\sum\nolimits_{i = 1}^{{N_{r} }} {\left| {I_{i} } \right|^{2} } - \dfrac{{\left| {\sum\nolimits_{i = 1}^{{N_{r} }} {I_{i} } } \right|^{2} }}{{N_{r} }}} & {{\text{otherwise}}} \\ \end{array}} \right.$$
(17)

Applying \(\gamma_1\) to calculate “a” and “b” means the output will be zero if the coherency of the interference is lower than 0.99. Applying only the proposed weighted average (\(\eta_{1} (x,z)\)) therefore severely degrades the image points that exhibit high interference. After experimentally trying different ways to improve the performance, we propose to multiply the weighted average by a simple equal-weight average (\(\eta (x,z)\)). This multiplication provides a tradeoff between adaptive averaging and conventional averaging: it emphasizes points with less interference through the ADAP method, and preserves points with higher interference through the DAS method. The output is then dimensionally squared (i.e., [\({\text{volt}}^{2}\)] instead of [\({\text{volt}}\)]). To solve this problem, the square root of the absolute value of the output is used, as shown in (18):

$$im(x,z) = \sqrt {\left| {\eta (x,z) \times \eta_{1} (x,z)} \right|} .$$
(18)
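The complete per-pixel computation then looks as follows. This sketch re-derives the adaptive average inline so it is self-contained; the closed form and the 0.01 floor follow Eqs. (16) and (17), but the function name and structure are ours, and a nonzero observation vector is assumed:

```python
import numpy as np

def ppsa_adap_pixel(s):
    """Final PPSA-ADAP pixel value, Eq. (18): combine the equal-weight
    average eta with the adaptive average eta_1 and take the square root
    of the absolute value to restore the amplitude dimension [volt]."""
    s = np.asarray(s, dtype=complex)
    n = s.size
    beta = np.vdot(s, s).real               # ||s||^2 (assumed > 0)
    eta_sum = s.sum()
    eta = eta_sum / n                       # conventional (DAS) average
    mu = max(beta - abs(eta_sum) ** 2 / n, 0.01 * beta)  # soft floor, Eq. (17)
    a = -np.conj(eta_sum) / (n * mu)        # closed-form coefficient, Eq. (16)
    b = (1 - a * eta_sum) / n               # enforces sum(w_i) = 1
    eta1 = np.vdot(a * s + b, s)            # adaptive average, Eq. (10)
    return float(np.sqrt(abs(eta * eta1)))  # Eq. (18)
```

A fully coherent vector passes through with its amplitude intact, while an incoherent one is strongly attenuated, which is exactly the tradeoff described above.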

In the next section, some simulation and experimental data are used to demonstrate the capability and performance of PSA-ADAP imaging compared to PHA imaging.

The simulations and experimental data correspond to a full synthetic aperture scan: for each combination of transmit and receive elements in the aperture, the received signal from a collection of scatterers or a real phantom is first recorded, and focusing or apodization is applied to the data afterwards. For example, to synthesize steering of a subarray in a specific direction in CPSA imaging, the recorded signals for the combinations of the elements of that subarray with all elements of the receive array are properly delayed and then averaged. The delays are determined by the depth and direction of the transmit focus point. The depth of the transmit focus point is set at the middle of the image in all simulations: 30, 60, 75.5, and 85 mm in Figs. 3, 6, 8, and 10, respectively. The apodization vector on the transmit side for steering the subarray beams in different directions equals \(\vec{1}\) in all simulations; naturally, proper delays are also applied to the excitation pulses for transmit focusing. The angular spacing between two sequential beams in CPSA or PPSA is 0.1 degree, so the number of firings for each subarray equals the ratio of the angular width of the medium to this angular spacing. In this study, a 96-element array with a center frequency of 3.3 MHz is simulated as transmitter and receiver in PHA. The inter-element spacing is half a wavelength, the speed of sound is set to 1480 m/s, a two-cycle sinusoidal pulse is used as the excitation pulse, and the fractional bandwidth of the transducers is 80%. In CPSA, adjacent subarrays of 48 elements, shifted by one element relative to each other, are used for scanning. In PPSA, only the initial and last subarrays of 48 elements are used to scan the medium, yielding two images.
The other 47 images related to the intermediate subarrays are reconstructed, as explained for Fig. 1b. After providing subarray images, the ADAP method is applied to these images to reconstruct the final image.
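The offline synthesis described above can be sketched as a per-pixel delay-and-sum over a full synthetic-aperture data set. This is a schematic illustration only: the function and argument names are ours, nearest-sample interpolation is used for simplicity, and no apodization or extra steering delays are applied beyond the pixel-focusing ones:

```python
import numpy as np

def synthesize_subarray_pixel(rf, tx_elems, elem_x, px, pz, c, fs):
    """Synthesize one beamformed image point from a full synthetic-aperture
    data set rf[tx, rx, t]: delay the recording of every (tx, rx) pair in
    the chosen transmit subarray by the round-trip time to the pixel and
    average (DAS). elem_x holds the lateral element positions [m]."""
    val = 0.0
    for tx in tx_elems:
        t_tx = np.hypot(px - elem_x[tx], pz) / c       # element -> pixel
        for rx in range(rf.shape[1]):
            t_rx = np.hypot(px - elem_x[rx], pz) / c   # pixel -> element
            k = int(round((t_tx + t_rx) * fs))         # nearest sample
            if 0 <= k < rf.shape[2]:
                val += rf[tx, rx, k]
    return val / (len(tx_elems) * rf.shape[1])
```

Summing over a chosen `tx_elems` subset is what turns the full synthetic-aperture recording into the response of one synthesized transmit subarray.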

Fig. 3
figure 3

Comparison of PSF in different imaging methods: a PHA (DAS), b CPSA (DAS), c CPSA (MV), d CPSA (ADAP), e PPSA (DAS), f PPSA (MV), and g PPSA (ADAP). All images are shown in 80 dB dynamic range

Results

Two types of image objects (point targets and cyst phantom) were simulated using Field II toolbox [21]. The results are compared in terms of the resolution and contrast.

Point target simulation

The beamformed responses for scatterers at depths from 5 mm to 50 mm, and the related lateral variation of the PSF, are shown in Figs. 3 and 4, respectively, to compare the different imaging methods. The transmit focus point is set at a depth of 30 mm. Figure 3a shows the image reconstructed by PHA imaging using a DAS beamformer on receive (PHA-DAS). Compounding the subarray images in CPSA with equal weights (CPSA-DAS) and adaptive weights (CPSA-MV or CPSA-ADAP) is shown in Fig. 3b–d, respectively. Using PPSA to reconstruct the subarray images and then compounding them with equal weights (PPSA-DAS) and adaptive weights (PPSA-MV or PPSA-ADAP) is shown in Fig. 3e–g, respectively.

Fig. 4
figure 4

Cross section of PSF for targets a at a depth of 30 mm using the DAS method, b at a depth of 30 mm using an adaptive method, c at a depth of 50 mm using the DAS method, and d at a depth of 50 mm using an adaptive method

As can be seen, the performance of CPSA is improved compared to PHA. This is due to extended depth of field, which leads to resolution improvement away from the transmit focus point. Also, for targets at a depth of 30 mm (transmit focus point), sidelobes are decreased in CPSA compared to PHA. This property is related to the effect of diversity gain introduced in radar studies, which helps to better form the overall beam [11].

Comparison of PPSA and CPSA shows that, for deeper areas, the performance of both methods is the same, but in the near-field area (i.e., depths lower than 20 mm), the performance of PPSA is strongly degraded in comparison with CPSA. The argument proving the equivalence of CPSA and PPSA holds only in the far field: the equality \(\tau_{tp,i} (x,z) - \tau_{tp,1} (x,z) = (i - 1)d\sin (\theta )/c\) is not valid in the near field.

Figure 4 shows the normalized cross section of the PSF for targets at different depths. Figure 4a, c shows that lower sidelobes compared to PHA are observed in both CPSA and PPSA, even far from the focus point, but the resolution is still the same as that of PHA. Applying the ADAP method to CPSA or PPSA improves the resolution, such that targets are better resolved than with PHA even at the focus point, as shown in Fig. 4b, d. Figure 5 shows the lateral variation of the PSF for two targets laterally spaced 4 mm apart at a depth of 40 mm. The computed coefficients “a” and “b” for different lateral positions are shown in Fig. 5b, c. As can be seen, at the position of the targets (x = − 2), the values of “a” and “b” are strongly increased compared to non-target positions. The second row shows the observation vector at three lateral positions (x = − 3, − 2, − 1). For the target position (x = − 2), the signal variation across the subarrays is so small that the index \(\mu\) is close to zero. The other cases (x = − 3, − 1) show some signal variation across the subarrays, which effectively decreases the values of “a” and “b”. The last row of Fig. 5 shows the resulting apodization vector that is applied to the signal.

Fig. 5
figure 5

Comparison between observation and apodization vector at target or non-target positions. First row: a lateral variation of PSF, b computed coefficient “a” and c computed coefficient “b” at a depth of 40 mm. Second row: observation vector at d (− 3, 40) mm, e (− 2, 40) mm, and f (− 1, 40) mm. Last row: apodization vector at g (− 3, 40) mm, h (− 2, 40) mm, and i (− 1, 40) mm

A quantitative comparison of the PSF for the different imaging methods is summarized in Table 1. The sidelobe level (SLL), the intensity of the first sidelobe as shown in Fig. 4, is measured for each method. The full width at half maximum (FWHM) is defined for the lateral variation of the point spread function as the width of the mainlobe 6 dB below its maximum. Calculating the FWHM at the depth of focus (30 mm) shows that PPSA-ADAP reaches a resolution of 0.2 mm, three times better than PHA imaging, whose FWHM is about 0.6 mm. Also, the CPSA and PPSA imaging methods lead to the same FWHM in the presence of MV or ADAP. A contrast ratio (CR) index, equal to the difference in intensity between two areas, can also be used [22]. Because different imaging methods lead to different intensities, a normalized index is computed for a fair comparison, equal to the difference between the mean value in the background (\(I_{b}\)) and the mean value of the target \((I_{\text{tar}} )\), divided by \(I_{\text{tar}}\). The background areas are shown in Fig. 3a as red rectangles. Computational load is another metric that must be compared between the different methods. The last row in Table 1 summarizes the flops needed by each imaging method to reconstruct one image point. In PHA, the samples across the receive array must be averaged, which requires \(N_{r}\) flops. In CPSA-DAS, the beamformed responses of the subarrays must first be calculated and then averaged; these steps require \(N_{s} \times (N_{r} + 1)\) flops. In CPSA-ADAP, computing the variables \(\beta\) and \(\eta\), solving (7)–(9), and calculating (10) require \(2N_{s}\), \(N_{s}\), 15, and 5 flops, respectively; therefore, all these steps consume \(20 + N_{s} \times (3 + N_{r})\) flops. In PPSA-DAS, the beamformed responses of only the initial and last subarrays must be calculated, which requires only \(2 \times N_{r}\) flops.
As follows from (1), the sample vectors of adjacent subarrays differ in only one entry, so the beamformed responses of the intermediate subarrays require only \(2 \times (N_{s} - 2)\) flops, and \(N_{s}\) flops are then consumed for averaging. As in CPSA, \(3N_{s} + 20\) flops are needed to apply ADAP in PPSA. Although ADAP increases the CL, the flops needed are only about 4.5 times more than in PHA if the length of the subarrays is half that of the array.
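The flop accounting above can be tabulated directly. The small script below is our own back-of-envelope tally for \(N_r = 96\) receive elements and \(N_s = 48\) subarrays; the exact ratio to PHA depends on what one counts as a flop:

```python
# Per-pixel flop counts following the text's accounting.
n_r = 96          # receive elements
n_s = n_r // 2    # subarrays of half the array length

flops_pha = n_r                                   # average over receive array
flops_cpsa_das = n_s * (n_r + 1)                  # beamform every subarray + average
flops_cpsa_adap = 20 + n_s * (3 + n_r)            # beta, eta, solve, weighted sum
flops_ppsa_das = 2 * n_r + 2 * (n_s - 2) + n_s    # two scans + incremental updates
flops_ppsa_adap = flops_ppsa_das + 3 * n_s + 20   # plus the ADAP overhead

for name, f in [("PHA", flops_pha), ("CPSA-DAS", flops_cpsa_das),
                ("CPSA-ADAP", flops_cpsa_adap), ("PPSA-DAS", flops_ppsa_das),
                ("PPSA-ADAP", flops_ppsa_adap)]:
    print(f"{name:10s} {f:6d} ({f / flops_pha:.1f}x PHA)")
```

The tally makes the main point of the section concrete: both PPSA variants stay within a small constant factor of PHA, roughly an order of magnitude below the CPSA counts.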

Table 1 Quantitative comparison of PSF in different imaging methods

Cyst phantom simulation

Using Field II, circular cysts of various radii located at depths from 20 mm to 100 mm are simulated. The speckle pattern outside the cysts is simulated with five randomly placed scatterers per resolution cell of \(\lambda^{2}\), and the scattering amplitudes are Gaussian distributed to simulate speckle [23]. The images reconstructed by the different imaging methods are shown in Fig. 6. The simulated phantom with its distribution of scatterers is displayed in Fig. 6a, and the image reconstructed by PHA imaging in Fig. 6b. Having observed that PPSA provides the same performance as CPSA for depths greater than 20 mm, the subsequent simulations compare PHA and PPSA in the presence of DAS, MV, and ADAP. Figure 6c–e shows the final images for PPSA-DAS, PPSA-MV, and PPSA-ADAP, respectively. All images are displayed with an 80-dB dynamic range.

Fig. 6
figure 6

Comparison of reconstructed images of cyst phantoms at depths from 20 mm to 100 mm. a real phantom, b PHA, c PPSA (DAS), d PPSA (MV), and e PPSA (ADAP). All images are displayed in 80 dB dynamic range. The background areas for calculating CR are also shown with red circles in b

A quantitative comparison of reconstructed cyst phantom images for the different methods is also shown in Table 2. To evaluate the contrast of the reconstructed cyst phantom, the CR index for cyst phantoms is computed equal to the difference in the mean value in the background (\(I_{b}\)) and the mean value in the cyst region \((I_{\text{cyst}} )\) divided by \(I_{b}\). The background areas are shown with red circles in Fig. 6b. The standard deviation of intensity in the background areas (STDb) for the different methods is also measured and shown in Table 2. As can be compared, the STDb in PPSA-ADAP is closer to the STDb in the simulated phantom compared to the other methods.

Table 2 Mean values of intensity (dB) in the cyst regions and background area of images shown in Fig. 4 and computed CR index in percent

Cross sections of the cysts at depths of 30 mm, 70 mm, and 90 mm are shown in Fig. 7a–c, respectively. As can be seen, PPSA-ADAP provides sharper edges than PHA or PPSA-DAS. The intensity of the cross section of the cyst regions at different depths for PSA (DAS or ADAP) is lower than that for PHA, as shown in Fig. 7a–c; calculating the mean intensity in the cyst region, shown in Table 2, confirms this finding. Also, PPSA-ADAP provides the boundaries of the cysts with the same sharpness as PPSA-MV, and the intensity in the cyst region is lower for PPSA-ADAP than for PPSA-MV. As can be seen in Fig. 7, ADAP can provide intensity in the final image approximately in the same manner as PPSA-MV.

Fig. 7
figure 7

Cross section of the reconstructed images in Fig. 6 for different cysts at depths, a 30 mm, b 70 mm, and c 90 mm

Experimental data

To evaluate the method under real conditions, RF data obtained from the former Biomedical Ultrasound Laboratory at the University of Michigan website (no longer available online) were also used. This data set was collected with a complete synthetic aperture-focusing acquisition using an array of 64 elements with a pitch of 0.24 mm, a 3.33-MHz transducer, and an A/D sampling frequency of 17.76 MHz. The transmit focus point in the PHA or PPSA imaging methods was synthesized by delaying and summing the recorded data from each individual transmitter, as explained in the last paragraphs of “The proposed PSA imaging using ADAP”. In PPSA imaging, subarrays with a length of 32 elements were used as transmitters. The reconstructed images for PHA, PPSA-DAS, PPSA-MV, and PPSA-ADAP are shown in Fig. 8a–d, respectively. As can be seen, the resolution in Fig. 8d is improved compared to Fig. 8a, b, and the wire targets are seen with better resolution. Cross sections of these images at depths of 78.5 mm, 70 mm, and 90 mm are shown in Fig. 9a–c, respectively. As can be seen, ADAP provides better resolution, and the edges of the cysts with PPSA-ADAP are sharper than with the other methods. The cyst region in Fig. 8d also appears darker than in Fig. 8a–c. As shown in Table 1, the SLL of the PSF in PPSA (ADAP) is lower than in the other imaging methods; the lower SLL leads to less leakage of background intensity into the cyst region. The reconstructed images of a heart phantom are shown in Fig. 10. As can be seen, PPSA-ADAP provides better resolution and contrast compared to the other methods, and it provides the boundaries of the cysts with the same sharpness as PPSA-MV. As can be seen in Fig. 9a, PPSA-ADAP can provide intensity in the final image comparable to that of PPSA-MV.

Fig. 8

The reconstructed image of a phantom for: a PHA using a DAS beamformer, b PPSA using DAS, c PPSA using MV, and d PPSA using ADAP

Fig. 9

Cross section of the reconstructed images in Fig. 8 for wire targets at depths of a 78.5 mm, b 70 mm, and c 90 mm

Fig. 10

The reconstructed image of a heart phantom for: a PHA using a DAS beamformer, b PPSA using DAS, c PPSA using MV, and d PPSA using ADAP

Discussion

In the case of PPSA, subarray images are adaptively weighted to reconstruct the final image, and ADAP determines the proper weight vector. The PPSA-ADAP method improves both resolution and contrast. The ADAP algorithm computes the vector \(W^{\mathrm{H}} = [w_{1}, \ldots, w_{N_{s}}]\) from the recorded samples after the transmit and receive gains have been formed. In other words, ADAP shapes the diversity gain (the contribution of each subarray) to form the overall beam.
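One way to realize such energy-minimizing compounding weights is a minimum-variance solution over the \(N_s\) beamformed subarray samples of each pixel. The sketch below assumes complex pixel data and adds diagonal loading for numerical stability; the function name and loading factor are illustrative and this is not claimed to be the paper's exact ADAP formulation.

```python
import numpy as np

def adap_compound(subarray_pixels, diag_load=1e-3):
    """Adaptively compound N_s beamformed subarray samples of one pixel.

    subarray_pixels: complex array of shape (N_s,), one beamformed
    sample per transmit subarray for the current pixel. The weights
    minimize the output energy subject to unit gain on the coherent
    (all-ones) direction, i.e. a minimum-variance compounding over
    subarrays (a sketch, not the paper's exact ADAP).
    """
    x = np.asarray(subarray_pixels, dtype=complex)
    Ns = x.size
    # single-snapshot sample covariance, with diagonal loading
    R = np.outer(x, x.conj())
    R += diag_load * np.real(np.trace(R)) / Ns * np.eye(Ns)
    a = np.ones(Ns, dtype=complex)       # coherent steering vector
    Ri_a = np.linalg.solve(R, a)
    w = Ri_a / (a.conj() @ Ri_a)         # distortionless MV weights
    return np.conj(w) @ x                # compounded pixel value
```

For fully coherent subarray samples the distortionless constraint passes the common value through unchanged, while incoherent (interference-dominated) samples are suppressed.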

Measuring the FWHM in Fig. 5a shows that PPSA-ADAP provides a beamwidth 0.2 mm narrower than that of PHA or PPSA-DAS, resulting in a beamwidth of 0.6 mm. Figure 5 shows that PPSA-ADAP provides sharper edges, a property related to its narrower beamwidth compared with the other two methods. The intensity of the cyst region with PHA is also higher than with the other methods, as shown in Fig. 6. This stems from the excessively high SLL of the overall beam with PHA: as Fig. 3 illustrates, PHA imaging shows significant sidelobes at different depths, and this large SLL degrades the contrast. In comparison, PPSA-ADAP does not produce high sidelobes, so better contrast is expected with this method. Measuring CR as a metric of contrast for the cyst phantoms in Fig. 6, as summarized in Table 2, also shows that PPSA-ADAP yields higher contrast than both of the other imaging methods. Comparing the intensities in the cyst region and background areas across the imaging methods (Table 2) shows that PPSA-ADAP produces results closer to the real conditions than PHA, where real conditions refer to the known scatterer distribution used for the simulation. The SLL in PHA is so large that PHA has the worst contrast, even around the transmit focus point; PPSA-ADAP does not suffer strongly from high SLL, and therefore its contrast is better.
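Beamwidth figures such as those above are commonly obtained as the -6 dB full width of a lateral amplitude profile. The following sketch assumes a profile in dB on a monotonically increasing lateral grid; the function and variable names are illustrative.

```python
import numpy as np

def fwhm(lateral_mm, profile_db):
    """Full width at half maximum (-6 dB amplitude width) of a
    lateral beam profile.

    lateral_mm: monotonically increasing lateral positions (mm).
    profile_db: beam amplitude in dB.
    Linearly interpolates the -6 dB crossing on each side of the
    peak (a sketch of how beamwidths could be measured).
    """
    p = np.asarray(profile_db, float)
    x = np.asarray(lateral_mm, float)
    ipk = int(np.argmax(p))
    half = p[ipk] - 6.0                  # -6 dB relative to the peak
    # walk left from the peak to the first crossing, then interpolate
    il = ipk
    while il > 0 and p[il] > half:
        il -= 1
    xl = np.interp(half, [p[il], p[il + 1]], [x[il], x[il + 1]])
    # same on the right side
    ir = ipk
    while ir < p.size - 1 and p[ir] > half:
        ir += 1
    xr = np.interp(half, [p[ir], p[ir - 1]], [x[ir], x[ir - 1]])
    return xr - xl
```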

This paper shows that adaptively adjusting the weights for coherent compounding of subarray images (PPSA-ADAP) is more effective than conventionally averaging all subarray images (PPSA-DAS).

In the MV method, the length of the apodization vector is N. The MV beamformer therefore searches an N-dimensional space for the weight vector that minimizes the received energy, so it reaches the best weight vector at the expense of a huge CL.

We solve the MV problem under a simple constraint in which only two coefficients must be calculated during the minimization. The proper weight vector is therefore searched in a two-dimensional space, which strongly decreases the CL. The proposed optimization still minimizes the variance of the output, so improved resolution is expected compared to a DAS beamformer. It is important to emphasize how PPSA imaging enables the ADAP method: each entry of the observation vector, corresponding to a specific subarray image, is obtained after DAS beamforming on both the transmit and receive sides, so the effect of interference on the desired targets is already substantially decreased. Stacking these entries (subarray images) yields an observation vector dominated by the desired target, and measuring the coherency of this vector with the ADAP method can therefore well resolve the target from interference.
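The benefit of restricting the minimization to two coefficients can be illustrated by projecting the N-dimensional weight search onto a 2-D subspace: the N-by-N problem reduces to a 2-by-2 linear system. The basis used here (a uniform plus a tapered apodization) is a hypothetical choice for illustration only and is not the paper's actual constraint.

```python
import numpy as np

def reduced_dim_weights(R, basis):
    """Minimize w^H R w over w = B c with a unit-gain constraint.

    R: (N, N) covariance of the observed samples.
    basis B: (N, 2) matrix whose columns span the 2-D search space
    (hypothetical example: uniform and tapered apodizations).
    Restricting the search to 2 coefficients replaces the O(N^3)
    solve of full MV with a 2x2 system.
    """
    B = np.asarray(basis, dtype=complex)
    Rr = B.conj().T @ R @ B                  # reduced 2x2 covariance
    a = B.conj().T @ np.ones(B.shape[0])     # reduced steering vector
    c = np.linalg.solve(Rr, a)               # 2 coefficients only
    c /= a.conj() @ c                        # distortionless normalization
    return B @ c                             # back to N-dim weights
```

The returned weights always satisfy the unit-gain constraint \(w^{\mathrm{H}}\mathbf{1} = 1\), regardless of the covariance.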

Conclusion

Although PHA imaging is a conventional method in medical imaging, it neglects the capability of subarray imaging. Using subarrays at different lateral positions and compounding the resulting images yields an imaging method that serves as a better tool for designing the transmit beam. In this paper, transmit focusing is achieved by combining an adaptive beamformer with a DAS beamformer. The proposed adaptive method (ADAP) coherently compounds the subarray images based on the beamformed responses of the subarrays themselves. PPSA imaging provides better image quality than PHA imaging in terms of resolution, contrast, and depth of focus. These gains come at the expense of a higher CL, but only about 4.5 times that of PHA. In this study, all subarrays were assumed to have the same length, equal to half of the whole array; the effect of the transmit subarray length on the performance of the ADAP method should be studied in future work.