
Minimizing the average achievable distortion using multi-layer coding approach in two-hop networks

Annals of Telecommunications

Abstract

Minimizing the average achievable distortion (AAD) of a Gaussian source at the destination of a two-hop block fading relay channel is studied in this paper. Communication is carried out through a Decode-and-Forward (DF) relay, with no direct source-destination link. The receivers of both hops are assumed to know the corresponding channel state information (CSI), while the transmitters are unaware of their corresponding CSI. The paper explores the effectiveness of combining successive refinement source coding with multi-layer channel coding in minimizing the AAD. In this regard, the optimal power allocation policy across the code layers of the second hop is derived in closed form, and, using a proper curve-fitting approach, a close-to-optimal power allocation policy for the first hop is devised. It is numerically shown that the DF strategy closely follows Amplify-and-Forward (AF) relaying, while there is a sizable gap between the AAD of multi-layer coding and that of single-layer source coding.




Notes

  1. The condition under which \(\mathcal {G}(.)\) is an increasing function is provided in Appendix 1.2.

References

  1. Van Der Meulen EC (1971) Three-terminal communication channels. Adv Appl Probab 3(1):120–154


  2. Cover T, Gamal AE (1979) Capacity theorems for the relay channel. IEEE Trans Inf Theory 25(5):572


  3. Steiner A, Shamai S (2006) Single-user broadcasting protocols over a two-hop relay fading channel. IEEE Trans Inf Theory 52(11):4821–4838


  4. Sharma A, Aggarwal M, Ahuja S, et al. (2018) Performance analysis of DF-relayed cognitive underlay networks over EGK fading channels. AEU-Int J Electron Commun 83:533–540


  5. Luan NT, Do DT (2017) A new look at AF two-way relaying networks: energy harvesting architecture and impact of co-channel interference. Ann Telecommun 72(11-12):533–540


  6. Hadj Alouane W, Hamdi N, Meherzi S (2015) Semi-blind two-way AF relaying over Nakagami-m fading environment. Ann Telecommun 70(1-2):49–62


  7. Shamai S (1997) A broadcast strategy for the Gaussian slowly fading channel. In: IEEE International symposium on information theory, p 150

  8. Shamai S, Steiner A (2003) A broadcast approach for a single user slowly fading MIMO channel. IEEE Trans Inf Theory 49(10):2617


  9. Steiner A, Shamai S (2006) Achievable rates with imperfect transmitter side information using a broadcast transmission strategy. IEEE Trans Wirel Commun 7(3):1043–1051


  10. Liang Y, Lai L, Poor HV, Shamai S (2014) A broadcast approach for fading wiretap channels. IEEE Trans Inf Theory 60(2):842–858


  11. Pourahmadi V, Motahari AS, Khandani AK (2013) Multilayer codes for broadcasting over quasi-static fading MIMO networks. IEEE Trans Commun 61(4):1573–1583


  12. Cohen KM, Steiner A, Shamai S (2020) On the broadcast approach over parallel MIMO two-state fading channel. International Zurich Seminar on Information and Communication (IZS) 26–28

  13. Khodam Hoseini SA, Akhlaghi S (2018) Proper multi-layer coding in fading dirty-paper channel. IET Commun 12(19):2454–2459


  14. Zohdy M, Tajer A, Shamai S (2019) Broadcast approach to multiple access with local CSIT. IEEE Trans Commun 67(11):7483–7498


  15. Pourahmadi V, Bayesteh A, Khandani AK (2012) Multilayer coding over multihop single user networks. IEEE Trans Inf Theory 58(8):5323


  16. Keykhosravi S, Akhlaghi S (2016) Multi-layer coding strategy for multi-hop block fading channels with outage probability. Ann Telecommun 71(5-6):173–185


  17. Baghani M, Akhlaghi S, Golzadeh V (2016) Average achievable rate of broadcast strategy in relay-assisted block fading channels. IET Commun 10(3):346–355


  18. Cover TM, Thomas JA (2012) Elements of information theory. John Wiley & Sons, New York

  19. Tian C, Steiner A, Shamai S, Diggavi SN (2008) Successive refinement via broadcast: Optimizing expected distortion of a Gaussian source over a Gaussian fading channel. IEEE Trans Inf Theory 54(7):2903–2918


  20. Steinberg Y (2008) Coding and common knowledge. In: Proc 2008 information theory and applications (ITA 2008) workshop

  21. Aguerri EI, Gunduz D (2016) Distortion exponent in MIMO fading channels with time-varying source side information. IEEE Trans Inf Theory 62(6):3597–3617


  22. Ng TC, Gunduz D, Goldsmith AJ, Erkip E (2009) Distortion minimization in Gaussian layered broadcast coding with successive refinement. IEEE Trans Inf Theory 55(11):5074–5086


  23. Ng TC, Tian C, Goldsmith AJ, Shamai S (2012) Minimum expected distortion in Gaussian source coding with fading side information. IEEE Trans Inf Theory 58(9):5725–5739


  24. Mesbah W, Shaqfeh M, Alnuweiri H (2014) Jointly optimal rate and power allocation for multilayer transmission. IEEE Trans Wirel Commun 13(2):834–845


  25. Hoseini SAK, Akhlaghi S, Baghani M (2012) The achievable distortion of relay-assisted block fading channels. IEEE Commun Lett 16(8):1280–1283


  26. Saatlou O, Akhlaghi S, Hoseini SAK (2013) The achievable distortion of DF relaying with average power constraint at the relay. IEEE Commun Lett 17(5):960–963


  27. Wang J, Kim YH, Cosman PC, Milstein LB (2013) Minimization of expected distortion with layer-selective relaying of two-layer superposition coding. In: 77th IEEE Vehicular Technology Conference (VTC Spring), pp 1–5

  28. Khodam Hoseini SA, Akhlaghi S (2019) The impact of distribution uncertainty on the average distortion in a block fading channel. In: Iran workshop on communication and information theory (IWCIT), Tehran, Iran, pp 24–25

  29. Gelfand IM, Fomin SV (1991) Calculus of variations. Dover, New York


  30. Abramowitz M, Stegun IA (1965) Handbook of mathematical functions: with formulas, graphs, and mathematical tables. Dover, New York



Author information

Correspondence to Soroush Akhlaghi.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix 1

1.1 Proof of Lemma 1

The last constraint of Eq. 18 results in Eq. 19. Now, one can formulate the Lagrangian function of Eq. 18 as follows:

$$ \begin{array}{@{}rcl@{}} \mathcal{L}(T)={\int}_{0}^{\infty}\left( f(x)\mathcal{G}\left( \frac{1}{T(x)^{b}}\right)+\lambda \frac{T(x)}{x^{2}}-T^{\prime}(x)\psi(x)\right)dx, \end{array} $$
(45)

where λ and ψ(.) are, respectively, the Lagrange multiplier and an arbitrary positive multiplier function, which ensure that the power constraint is met and that \(T(.)\) is monotonically non-decreasing. Defining

$$ \begin{array}{@{}rcl@{}} D(x)=\frac{1}{T(x)^{b}}, \end{array} $$
(46)

and applying the variational method of [29] to Eq. 45, i.e., imposing the Euler-Lagrange condition on its integrand, gives the following:

$$ \begin{array}{@{}rcl@{}} -b\mathcal{G}_{D}\left( D(x)\right)\frac{f(x)}{T(x)^{b+1}}+\frac{\lambda}{x^{2}}+\psi^{\prime}(x)=0, \end{array} $$
(47)

where \(\mathcal {G}_{D}(D(x))=\frac {\partial }{\partial D(x)}\mathcal {G}(D(x))\), and \(\psi ^{\prime }(x)=\frac {d}{dx}\psi (x)\). Taking into account the complementary slackness condition associated with the positivity of \(T^{\prime }(.)\), for positive power allocation to the code layers the second constraint of Eq. 18 is met with strict inequality, so ψ(x) = 0, and Eq. 47 leads to:

$$ \begin{array}{@{}rcl@{}} T(x)=\left( \frac{bx^{2}f(x)\mathcal{G}_{D}\left( D(x)\right)}{\lambda}\right)^{\frac{1}{b+1}}. \end{array} $$
(48)
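
To illustrate the structure of this policy, consider a purely hypothetical instance (not part of the paper's system model): if \(\mathcal {G}_{D}\left( D(x)\right)\) were constant, say equal to one (an affine, hence increasing and convex, \(\mathcal {G}(.)\)), and the fading pdf were \(f(x)=e^{-x}\), Eq. 48 would reduce to

$$ \begin{array}{@{}rcl@{}} T(x)=\left( \frac{bx^{2}e^{-x}}{\lambda}\right)^{\frac{1}{b+1}}, \end{array} $$

whose monotonicity in x is governed by that of \(x^{2}f(x)\); this observation is formalized next.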

As Eq. 48 is derived for the case \(T^{\prime }(.)>0\), raising both sides of it to the power of (b + 1) and differentiating with respect to x gives the requirement for this condition to be met, as follows:

$$ \begin{array}{@{}rcl@{}} (b+1)T(x)^{b}T^{\prime}(x) = \frac{b}{\lambda}\left[\mathcal{G}_{D}\left( D(x)\right)\frac{d}{dx}\left( x^{2}f(x)\right)\right.\\ \left.+x^{2}f(x)\frac{d}{dx}\mathcal{G}_{D}\left( D(x)\right)\right] \end{array} $$
(49)

Noting that \(\mathcal {G}(.)\) is an increasing convex function, \(\mathcal {G}_{D}\left (D(x)\right )\) takes positive values. Turning to the right-hand side of Eq. 49, \(x^{2}f(x)\) is non-negative, since f(.) is a non-negative function, so the term \(\frac {d}{dx}\mathcal {G}_{D}\left (D(x)\right )\) should be examined. Applying the chain rule of differentiation leads to:

$$ \begin{array}{@{}rcl@{}} \frac{d}{dx}\mathcal{G}_{D}\left( D(x)\right)=D^{\prime}(x) \mathcal{G}_{DD}\left( D(x)\right), \end{array} $$
(50)

where \(\mathcal {G}_{DD}(D(x))=\frac {\partial ^{2}}{\partial D^{2}}\mathcal {G}(D(x))\). Considering the definition of D(.) in Eq. 46, Eq. 50 becomes:

$$ \begin{array}{@{}rcl@{}} \frac{d}{dx}\mathcal{G}_{D}\left( D(x)\right)=\frac{-b}{T(x)^{b+1}}T^{\prime}(x)\mathcal{G}_{DD}\left( D(x)\right). \end{array} $$
(51)

Plugging the result of Eq. 51 into Eq. 49 results in Eq. 52:

$$ \begin{array}{@{}rcl@{}} \left[(b+1)T(x)^{b} + \frac{b^{2}}{\lambda T(x)^{b+1}}\mathcal{G}_{DD}\left( D(x)\right)\right]\\ T^{\prime}(x)=\frac{b}{\lambda}\mathcal{G}_{D}\left( D(x)\right)\frac{d}{dx}\left( x^{2}f(x)\right). \end{array} $$
(52)

Noting the convexity and the increasing properties of \(\mathcal {G}(.)\), the positivity of \(T^{\prime }(x)\) depends on:

$$ \begin{array}{@{}rcl@{}} \frac{d}{dx}\left( x^{2} f(x)\right)>0. \end{array} $$
(53)

In the sequel, it is shown that there exists at most one interval of positive power allocation within any region \(x\in \mathcal {R}=[l,u]\) over which \(\frac {d}{dx}\left (x^{2} f(x)\right )>0\) holds. To this end, assume the contrary; for instance, assume there are two disjoint intervals \([x_{1},x_{2}]\) and \([x_{3},x_{4}]\) (with \(x_{1}<x_{2}<x_{3}<x_{4}\)) in the region \(\mathcal {R}\) such that \(T^{\prime }(x)=0\) for \(x\in (x_{2},x_{3})\), which leads to \(T(x_{2})=T(x_{3})\). Using Eq. 48, the latter equality leads to:

$$ \begin{array}{@{}rcl@{}} {{x}_{2}^{2}}f(x_{2})\mathcal{G}_{D}\left( D({x}_{2})\right) = {{x}_{3}^{2}}f(x_{3})\mathcal{G}_{D}\left( D(x_{3})\right), \end{array} $$
(54)

which contradicts the positivity of \(\frac {d}{dx}(x^{2}f(x))\) in \(\mathcal {R}\). Therefore, there is at most one contiguous interval of positive power allocation in any region satisfying Eq. 53.
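
As an illustrative example (again taking a unit-mean exponential pdf \(f(x)=e^{-x}\), i.e., Rayleigh fading), the condition of Eq. 53 reads

$$ \begin{array}{@{}rcl@{}} \frac{d}{dx}\left( x^{2}e^{-x}\right)=\left( 2x-x^{2}\right)e^{-x}>0 \quad\Longleftrightarrow\quad 0<x<2, \end{array} $$

so the positive power allocation is confined to the single interval (0,2), in line with the above argument.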

1.2 Proof of the increasing property of the \(\mathcal {G}(.)\) function

Here, we derive the condition under which \(\mathcal {G}(.)\) is an increasing function. Inserting the auxiliary function derived in Eq. 16 into Eq. 8, we have:

$$ \begin{array}{@{}rcl@{}} \mathcal{G}\left( D_{r}(\alpha)\right)=\left( {{\beta}_{1}^{2}}f_{r}(\beta_{1})\right)^{\frac{b}{b+1}}{\int}_{\beta_{1}}^{\beta_{2}}\frac{f_{r}(\beta)^{\frac{1}{b+1}}}{\beta^{\frac{2b}{b+1}}}d\beta \\+F_{r}(\beta_{1}) + \left( 1-F_{r}(\beta_{2})\right)D_{r}(\alpha). \end{array} $$
(55)

Re-writing the power constraint of Eq. 9 using Eq. 16, the following expression for the integral part of Eq. 55 is derived.

$$ \begin{array}{@{}rcl@{}} &&{\int}_{\beta_{1}}^{\beta_{2}}\left( \frac{\beta^{2}f_{r}(\beta)}{{{\beta}_{1}^{2}}f_{r}(\beta_{1})}\right)^{\frac{1}{b+1}}\frac{1}{\beta^{2}}d\beta+\frac{T_{r}(\beta_{2})}{\beta_{2}}-\frac{1}{\beta_{1}}=P_{r} \rightarrow \\ &&{\int}_{\beta_{1}}^{\beta_{2}}\frac{f_{r}(\beta)^{\frac{1}{b+1}}}{\beta^{\frac{2b}{b+1}}}d\beta = \left( P_{r}+\frac{1}{\beta_{1}}-\frac{D_{r}(\alpha)^{\frac{-1}{b}}}{\beta_{2}}\right)\\&&\left( {{\beta}_{1}^{2}}f_{r}(\beta_{1})\right)^{\frac{1}{b+1}}. \end{array} $$
(56)

Noting that β1 and β2 in Eq. 55 depend on Dr(α), substituting Eq. 56 into Eq. 55 and taking the partial derivative of Eq. 55 with respect to Dr(α) yields:

$$ \begin{array}{@{}rcl@{}} \frac{\partial\mathcal{G}\left( D_{r}(\alpha)\right)}{\partial D_{r}(\alpha)}\!&=&\!\left[\frac{D_{r}(\alpha)^{\frac{-1}{b}}}{{{\beta}_{2}^{2}}} \left( {{\beta}_{1}^{2}}f_{r}(\beta_{1})\right)-f_{r}(\beta_{2})D_{r}(\alpha)\right]\\&&\!\times\frac{\partial \beta_{2}}{\partial D_{r}(\alpha)} + \left[f_{r}(\beta_{1}) - \frac{{{\beta}_{1}^{2}}f_{r}(\beta_{1})}{{{\beta}_{1}^{2}}}\right]\frac{\partial \beta_{1}}{\partial D_{r}(\alpha)} \\&&\!+\frac{D_{r}(\alpha)^{-\frac{b+1}{b}}}{b\beta_{2}}\left( {{\beta}_{1}^{2}}f_{r}(\beta_{1})\right)+\frac{\partial \left( {{\beta}_{1}^{2}}f_{r}(\beta_{1})\right)}{\partial \beta_{1}}\\&&\!\times\frac{\partial \beta_{1}}{\partial D_{r}(\alpha)}\left( P_{r}+\frac{1}{\beta_{1}}-\frac{D_{r}(\alpha)^{\frac{-1}{b}}}{\beta_{2}}\right)\\ &&\!+\left( 1-F_{r}(\beta_{2})\right). \end{array} $$
(57)

According to Eq. 17, the first term in Eq. 57 equals zero, and simplifying the second term shows that it is zero as well. Thus, Eq. 57 reduces to:

$$ \begin{array}{@{}rcl@{}} \frac{\partial\mathcal{G}\left( D_{r}(\alpha)\right)}{\partial D_{r}(\alpha)}&=&\frac{D_{r}(\alpha)^{-\frac{b+1}{b}}}{b\beta_{2}}\left( {{\beta}_{1}^{2}}f_{r}(\beta_{1})\right) \\ &&+\frac{\partial \left( {{\beta}_{1}^{2}}f_{r}(\beta_{1})\right)}{\partial \beta_{1}}\frac{\partial \beta_{1}}{\partial D_{r}(\alpha)}\\&&\times\left( P_{r}+\frac{1}{\beta_{1}}-\frac{D_{r}(\alpha)^{\frac{-1}{b}}}{\beta_{2}}\right)\\ &&+\left( 1-F_{r}(\beta_{2})\right) \end{array} $$
(58)

In order to characterize \(\frac {\partial \beta _{1}}{\partial D_{r}(\alpha )}\), one can re-write the integral of Eq. 56 as \({\int \limits }_{\beta _{1}}^{\beta _{2}}\frac {1}{\beta ^{2}} \left (\beta ^{2}f_{r}(\beta )\right )^{\frac {1}{b+1}}d\beta \). Thus, taking the derivative of Eq. 56 with respect to Dr(α) yields:

$$ \begin{array}{@{}rcl@{}} &&\frac{\left( {{\beta}_{2}^{2}}f_{r}(\beta_{2})\right)^{\frac{1}{b+1}}}{{\beta}_{2}^{2}}\frac{\partial \beta_{2}}{\partial D_{r}(\alpha)}-\frac{\left( {{\beta}_{1}^{2}}f_{r}(\beta_{1})\right)^{\frac{1}{b+1}}}{{\beta}_{1}^{2}}\frac{\partial \beta_{1}}{\partial D_{r}(\alpha)} \\ &&=\frac{1}{b+1}\frac{\partial\left( {{\beta}_{1}^{2}}f_{r}(\beta_{1})\right)}{\partial \beta_{1}}\left( {{\beta}_{1}^{2}}f_{r}(\beta_{1})\right)^{\frac{-b}{b+1}}\left( P_{r} + \frac{1}{\beta_{1}} - \frac{D_{r}(\alpha)^{\frac{-1}{b}}}{\beta_{2}}\right)\frac{\partial \beta_{1}}{\partial D_{r}(\alpha)}\\ &&+\left( {\beta_{1}^{2}}f_{r}(\beta_{1})\right)^{\frac{1}{b+1}}\\ &&\times\left( \frac{-1}{{\beta_{1}^{2}}}\frac{\partial \beta_{1}}{\partial D_{r}(\alpha)} + \frac{D_{r}(\alpha)^{\frac{-(b+1)}{b}}}{b\beta_{2}} + \frac{D_{r}(\alpha)^{\frac{-1}{b}}}{{\beta_{2}^{2}}}\frac{\partial \beta_{2}}{\partial D_{r}(\alpha)}\right) \end{array} $$
(59)

Re-ordering Eq. 59 leads to:

$$ \begin{array}{@{}rcl@{}} &&\left[\frac{\left( {\beta}_{2}^{2}f_{r}(\beta_{2})\right)^{\frac{1}{b+1}}}{{\beta}_{2}^{2}}-\frac{D_{r}(\alpha)^{\frac{-1}{b}}\left( {\beta}_{1}^{2}f_{r}(\beta_{1})\right)^{\frac{1}{b+1}}}{{\beta}_{2}^{2}}\right] \frac{\partial\beta_{2}}{\partial{D_{r}}(\alpha)}\\ &&= \left[\frac{\left( {\beta}_1^2f_r(\beta_1)\right)^{\frac{1}{b+1}}}{{\beta}_1^2} - \frac{\left( {\beta}_1^2f_r(\beta_1)\right)^{\frac{1}{b+1}}}{{\beta}_1^2}\right. \\&&+ \frac{1}{b+1}\frac{\partial \left( {\beta}_1^2f_r(\beta_1)\right)}{\partial \beta_1}\left( \beta_1^2f_r(\beta_1)\right)^{\frac{-b}{b+1}}\\ && \times\left. \left( P_r+\frac{1}{\beta_1}-\frac{D_r(\alpha)^{\frac{-1}{b}}}{\beta_2}\right)\right]\times\frac{\partial \beta_1}{\partial D_r(\alpha)} \\&&+\frac{D_r(\alpha)^{\frac{-(b+1)}{b}}\left( \beta_1^2f_r(\beta_1)\right)^{\frac{1}{b+1}}}{b\beta_2} \end{array} $$
(60)

and simplifying Eq. 60 by the use of Eq. 17 gives:

$$ \begin{array}{@{}rcl@{}} &&\!\!\!\!\frac{\partial \beta_{1}}{\partial D_{r}(\alpha)}\\&& \!\! = -\frac{(b+1)D_{r}(\alpha)^{\frac{-(b+1)}{b}}\left( {\beta_{1}^{2}}f_{r}(\beta_{1})\right)^{\frac{1}{b+1}}}{b\beta_{2}\frac{\partial \left( {{\beta}_{1}^{2}}f_{r}(\beta_{1})\right)}{\partial \beta_{1}}\left( {{\beta}_{1}^{2}}f_{r}(\beta_{1})\right)^{\frac{-b}{b+1}}\left( P_{r}+\frac{1}{\beta_{1}}-\frac{D_{r}(\alpha)^{\frac{-1}{b}}}{\beta_{2}}\right)}.\\ \end{array} $$
(61)

Substituting Eq. 61 into Eq. 58 and simplifying the relation gives:

$$ \begin{array}{@{}rcl@{}} \frac{\partial \mathcal{G}\left( D_{r}(\alpha)\right)}{\partial D_{r}(\alpha)} &=&-\frac{D_{r}(\alpha)^{-(\frac{b+1}{b})}}{\beta_{2}} \left( {{\beta}_{1}^{2}}f_{r}(\beta_{1})\right) \\&&+ \left( 1-F_{r}(\beta_{2})\right), \end{array} $$
(62)

and finally, using Eq. 17, Eq. 62 simplifies to:

$$ \begin{array}{@{}rcl@{}} \frac{\partial \mathcal{G}\left( D_{r}(\alpha)\right)}{\partial D_{r}(\alpha)} =1 - \beta_{2}f_{r}(\beta_{2}) -F_{r}(\beta_{2}). \end{array} $$
(63)

To give an example, in the Rayleigh fading case, the necessary condition to have \(\frac {\partial \mathcal {G}\left (D_{r}(\alpha )\right )}{\partial D_{r}(\alpha )}\geq 0\) is β2 ≤ 1.
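
To make this explicit, assume for illustration a unit-mean exponential power-gain pdf \(f_{r}(\beta)=e^{-\beta}\) with CDF \(F_{r}(\beta)=1-e^{-\beta}\); then Eq. 63 evaluates to

$$ \begin{array}{@{}rcl@{}} \frac{\partial \mathcal{G}\left( D_{r}(\alpha)\right)}{\partial D_{r}(\alpha)} = 1-\beta_{2}e^{-\beta_{2}}-\left( 1-e^{-\beta_{2}}\right)=\left( 1-\beta_{2}\right)e^{-\beta_{2}}, \end{array} $$

which is non-negative if and only if β2 ≤ 1, in agreement with the stated condition.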


Cite this article

Khodam Hoseini, S.A., Akhlaghi, S. & Baghani, M. Minimizing the average achievable distortion using multi-layer coding approach in two-hop networks. Ann. Telecommun. 76, 83–95 (2021). https://doi.org/10.1007/s12243-020-00812-0
