
Optimum test planning for heterogeneous inverse Gaussian processes

Published in Lifetime Data Analysis

Abstract

The heterogeneous inverse Gaussian (IG) process is one of the most popular and widely used degradation models for highly reliable products. One difficulty with heterogeneous IG processes is the lack of an analytic expression for the Fisher information matrix (FIM). It is therefore a challenge to find an optimum test plan under any information-based criterion with decision variables such as the termination time, the number of measurements, and the sample size. In this article, the FIM of an IG process with random slopes is derived explicitly in algebraic form, which removes the uncertainty caused by numerical approximation. The D- and V-optimum test plans with or without a cost constraint can then be obtained by using a profile optimum plan. A sensitivity analysis elucidates how the optimum plan is influenced by the experimental costs and the planning values of the model parameters. The theoretical results are illustrated by numerical simulations and case studies. Simulations, technical derivations and auxiliary formulae are available online as supplementary materials.


References

  • Bagdonavičius V, Nikulin MS (2001a) Accelerated life models: modeling and statistical analysis. Chapman & Hall/CRC, Boca Raton

  • Bagdonavičius V, Nikulin MS (2001b) Estimation in degradation models with explanatory variables. Lifetime Data Anal 7:85–103

  • Banerjee AK, Bhattacharyya GK (1974) A Bayesian study of the inverse Gaussian distribution. Technical Report 399, Department of Statistics, University of Wisconsin, Madison

  • Banerjee AK, Bhattacharyya GK (1976) A purchase incidence model with inverse Gaussian interpurchase times. J Am Stat Assoc 71:823–829

  • Banerjee AK, Bhattacharyya GK (1979) Bayesian results for the inverse Gaussian distribution with an application. Technometrics 21:247–251

  • Boulanger M, Escobar LA (1994) Experimental design for a class of accelerated degradation tests. Technometrics 36:260–272

  • Box GEP, Tiao GC (2011) Bayesian inference in statistical analysis. John Wiley & Sons, New York

  • Cheng YS, Peng CY (2012) Integrated degradation models in R using iDEMO. J Stat Softw 49:1–22

  • Chernoff H (1953) Locally optimal designs for estimating parameters. Ann Math Stat 24:586–602

  • Chhikara RS, Folks L (1989) The inverse Gaussian distribution: theory, methodology, and applications. Marcel Dekker, New York

  • Çinlar E (1980) On a generalization of gamma processes. J Appl Probab 17:467–480

  • Cox DR, Reid N (1987) Parameter orthogonality and approximate conditional inference (with discussion). J Royal Stat Soc B 40:1–39

  • Dette H, O’Brien TE (1999) Optimality criteria for regression models based on predicted variance. Biometrika 86:93–106

  • Hu CH, Lee MY, Tang J (2015) Optimum step-stress accelerated degradation test for Wiener degradation process under constraints. Eur J Oper Res 241:412–421

  • Jørgensen B (1982) Statistical properties of the generalized inverse Gaussian distribution. Lecture Notes in Statistics, vol 9. Springer-Verlag, New York

  • Lange KL, Little RJA, Taylor JMG (1989) Robust statistical modeling using the \(t\) distribution. J Am Stat Assoc 84:881–896

  • Lawless J, Crowder M (2004) Covariates and random effects in a gamma process model with application to degradation and failure. Lifetime Data Anal 10:213–227

  • Lehmann EL, Shaffer JP (1988) Inverted distributions. Am Stat 42:191–194

  • Lim H (2015) Optimum accelerated degradation tests for the gamma degradation process case under the constraint of total cost. Entropy 17:2556–2572

  • Lu CJ, Meeker WQ (1993) Using degradation measures to estimate a time-to-failure distribution. Technometrics 35:161–174

  • Meeker WQ, Escobar LA, Pascual FG (2022) Statistical methods for reliability data, 2nd edn. John Wiley & Sons, New York

  • Meng X, Rubin D (1993) Maximum likelihood estimation via the ECM algorithm: a general framework. Biometrika 80:267–278

  • Montgomery DC (2012) Design and analysis of experiments, 8th edn. John Wiley & Sons, New York

  • Nelson WB (2004) Accelerated testing: statistical models, test plans, and data analysis. John Wiley & Sons, New York

  • Padgett WJ, Tomlinson MA (2004) Inference from accelerated degradation and failure data based on Gaussian process models. Lifetime Data Anal 10:191–206

  • Peng CY (2015) Inverse Gaussian processes with random effects and explanatory variables for degradation data. Technometrics 57:100–111

  • Peng CY, Cheng YS (2020) Student-\(t\) processes for degradation analysis. Technometrics 62:223–235

  • Peng CY, Cheng YS (2021) Profile optimum planning for degradation analysis. Naval Res Logist 68:951–962

  • Peng CY, Tseng ST (2009) Misspecification analysis of linear degradation models. IEEE Trans Reliab 58:444–455

  • Polya G, Szegö G (1997) Problems and theorems in analysis II. Springer, Berlin

  • Pukelsheim F (1993) Optimal design of experiments. John Wiley & Sons, New York

  • R Core Team (2022) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna. http://www.R-project.org/

  • Sheshadri V (1993) The inverse Gaussian distribution: a case study in exponential families. Oxford University Press, New York

  • Sheshadri V (1999) The inverse Gaussian distribution: statistical theory and applications. Springer-Verlag, New York

  • Shi Y, Escobar LA, Meeker WQ (2009) Accelerated destructive degradation test planning. Technometrics 51:1–13

  • Singpurwalla ND (1997) Gamma processes and their generalizations: an overview. In: Cook R, Mendel M, Vrijling H (eds) Engineering probabilistic design and maintenance for flood protection. Kluwer Academic, New York, pp 67–73

  • Taha HA (2017) Operations research: an introduction, 10th edn. Pearson Education, New York

  • Tsai CC, Tseng ST, Balakrishnan N (2012) Optimal design for gamma degradation processes with random effects. IEEE Trans Reliab 61:604–613

  • Tseng ST, Tsai CC, Balakrishnan N (2011) Optimal sample size allocation for accelerated degradation test based on Wiener process. In: Balakrishnan N (ed) Methods and applications of statistics in engineering, quality control, and the physical sciences. Wiley, New York, pp 330–343

  • van Noortwijk JM (2009) A survey of the application of gamma processes in maintenance. Reliab Eng Syst Safety 94:2–21

  • Wang X, Xu D (2010) An inverse Gaussian process model for degradation data. Technometrics 52:188–197

  • Weaver BP, Meeker WQ (2021) Bayesian methods for planning accelerated repeated measures degradation tests. Technometrics 63:90–91

  • Weaver BP, Meeker WQ, Escobar LA, Wendelberger J (2013) Methods for planning repeated measures degradation studies. Technometrics 55:122–134

  • Whitmore GA (1986) Normal-gamma mixtures of inverse Gaussian distributions. Scand J Stat 13:211–220

  • Whitmore GA, Schenkelberg F (1997) Modeling accelerated degradation data using Wiener diffusion with a time scale transformation. Lifetime Data Anal 3:27–45

  • Wu CFJ, Hamada M (2021) Experiments: planning, analysis and optimization, 3rd edn. John Wiley & Sons, New York

  • Wu SJ, Chang CT (2002) Optimal design of degradation tests in presence of cost constraint. Reliab Eng Syst Safety 76:109–115

  • Ye Z, Chen N (2014) The inverse Gaussian process as a degradation model. Technometrics 56:302–311

  • Yu HF, Tseng ST (1999) Designing a degradation experiment. Naval Res Logist 46:689–706

  • Zhang Y, Meeker WQ (2006) Bayesian methods for planning accelerated life tests. Technometrics 48:49–60


Author information


Corresponding author

Correspondence to Chien-Yu Peng.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

C.-Y. Peng was partially supported by the Ministry of Science and Technology (MOST-109-2118-M-001-009-MY3) and Academia Sinica (AS-CDA-107-M09) of Taiwan, Republic of China. Hideki Nagatsuka was partially supported by the Grant-in-Aid for Scientific Research (C) 19K04890, Japan Society for the Promotion of Science, and a Chuo University Grant for Special Research. The authors are grateful to the Editor-in-Chief, Associate Editor and three referees for their helpful and valuable comments. This work was partially carried out while the first author was visiting Chuo University in October 2019. The kind hospitality of the faculty and staff in providing a congenial working environment is gratefully acknowledged.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (pdf 312 KB)

Appendix

Appendix 1: Proof of Theorem 2.1

To prove Theorem 2.1, the following result is needed. From the log-likelihood function of the heterogeneous IG process in (3), several negative integer moments of \(1+\sigma _\mu ^2Y(t)\) are required to derive its FIM.

Theorem 6.1

If \(Y(t)|\mu \sim \mathcal {IG}(\mu t, \lambda t^2)\) and \(\delta = \mu ^{-1} \sim \mathcal {N}(\xi , \sigma _\mu ^2/\lambda )\), then we have

$$\begin{aligned} (i) \,&\mathrm{E}\left( \frac{1}{1+\sigma _\mu ^2Y(t)}\right) = \frac{\xi }{\xi + \sigma _\mu ^2 t}, \\ (ii) \,&\mathrm{E}\left( \frac{1}{(1+\sigma _\mu ^2Y(t))^2}\right) = \frac{\xi ^2}{(\xi + \sigma _\mu ^2 t)^2} + \frac{\sigma _\mu ^4 t}{\lambda (\xi + \sigma _\mu ^2 t)^3}, \\ (iii) \,&\mathrm{E}\left( \frac{1}{(1+\sigma _\mu ^2Y(t))^3}\right) = \frac{\xi ^3}{(\xi + \sigma _\mu ^2 t)^3} + \frac{3\xi \sigma _\mu ^4 t}{\lambda (\xi + \sigma _\mu ^2 t)^4} - \frac{3\sigma _\mu ^6 t}{\lambda ^2(\xi + \sigma _\mu ^2 t)^5}. \end{aligned}$$

Proof

Some auxiliary results given in Supplementary Section 2 are used to facilitate the proof of Theorem 6.1. (i) Since \(Y(t)|\mu \sim \mathcal {IG}(\mu t , \lambda t^2)\), the conditional moment-generating function of \(Y(t)|\mu \) is given by

$$\begin{aligned} M_{Y(t)|\mu }(x) = \exp \left( \frac{\lambda t }{\mu } - \frac{\lambda t}{\mu }\sqrt{1-\frac{2\mu ^2 x}{\lambda }}\right) , \end{aligned}$$

Recall that \(\delta = \mu ^{-1} \sim \mathcal {N}(\xi , \sigma _\mu ^2/\lambda )\). Then we have

$$\begin{aligned} \mathrm{E}\left( \frac{1}{1+\sigma _\mu ^2Y(t)}\right)&{\mathop {=}\limits ^{(1)}} \mathrm{E}\left( \mathrm{E}\left( \frac{1}{1+\sigma _\mu ^2Y(t)}\bigg |\delta \right) \right) , \\&{\mathop {=}\limits ^{(2)}} \mathrm{E}\left( \int _{0}^{\infty }\exp \left( \frac{\lambda t }{\mu } - \frac{\lambda t}{\mu }\sqrt{1 + \frac{2\mu ^2 \sigma _\mu ^2 x}{\lambda }} - x\right) \mathrm{d}x \right) \\&{\mathop {=}\limits ^{(3)}} 1 - \sqrt{2\pi }\lambda t \int _{-\infty }^{\infty } \exp (C(\delta )^2/2) \phi \left( \frac{\delta -\xi }{\sigma _\mu /\sqrt{\lambda }}\right) \Phi (-C(\delta )) \mathrm{d}\delta \\&{\mathop {=}\limits ^{(4)}} \frac{\xi }{\xi + \sigma _\mu ^2 t}, \end{aligned}$$

where \(C(\delta ) = \sqrt{\lambda }\delta /\sigma _\mu + \sqrt{\lambda }\sigma _\mu t\). Here, (1) follows from the law of total expectation, (2) from Lemma 1 in Supplementary Section 2, (3) from Corollary 1(i) in Supplementary Section 2, and (4) from Lemma 3(i) in Supplementary Section 2. The proofs of (ii) and (iii) are similar to that of (i) and are omitted since the details are straightforward but tedious. \(\square \)
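Theorem 6.1(i)–(ii) can also be checked by simulation. The sketch below is a Monte Carlo verification using NumPy's Wald (inverse Gaussian) sampler; the planning values `xi`, `sig2` (\(\sigma _\mu ^2\)), `lam` and `t` are hypothetical, chosen so that \(P(\delta \le 0)\) is numerically negligible.

```python
import numpy as np

# Monte Carlo check of Theorem 6.1 (i)-(ii) with hypothetical planning values.
xi, sig2, lam, t = 1.0, 0.1, 4.0, 2.0
rng = np.random.default_rng(2022)
N = 200_000

# delta = 1/mu ~ N(xi, sig2/lam); P(delta <= 0) ~ 1e-10 for these values
delta = rng.normal(xi, np.sqrt(sig2 / lam), size=N)
mu = 1.0 / delta
y = rng.wald(mu * t, lam * t**2)          # Y(t) | mu ~ IG(mu*t, lam*t^2)

m1 = np.mean(1.0 / (1.0 + sig2 * y))
m2 = np.mean(1.0 / (1.0 + sig2 * y) ** 2)

a = xi + sig2 * t
th1 = xi / a                                      # Theorem 6.1 (i)
th2 = xi**2 / a**2 + sig2**2 * t / (lam * a**3)   # Theorem 6.1 (ii)
print(m1, th1)   # Monte Carlo mean close to the theoretical 0.8333...
print(m2, th2)   # Monte Carlo mean close to the theoretical 0.6973...
```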

Now, we return to the proof of Theorem 2.1.

Proof

By using partial fractions and Theorem 6.1, the entries of the FIM can be obtained as follows:

$$\begin{aligned} \mathrm{E}\left( -\frac{\partial ^2\mathcal {L}(\varvec{\theta })}{\partial \xi ^2}\right)&= \mathrm{E}\left( \frac{\lambda Y(t_m)}{1+\sigma _\mu ^2 Y(t_m)}\right) = \frac{\lambda }{\sigma _\mu ^2} - \frac{\lambda }{\sigma _\mu ^2}\mathrm{E}\left( \frac{1}{1+\sigma _\mu ^2 Y(t_m)}\right) = \frac{\lambda t_m}{\xi + \sigma _\mu ^2 t_m},\\ \mathrm{E}\left( -\frac{\partial ^2\mathcal {L}(\varvec{\theta })}{\partial \xi \partial \sigma _\mu ^2}\right)&= \mathrm{E}\left( \frac{\lambda Y(t_m) (t_m - \xi Y(t_m))}{(1+\sigma _\mu ^2 Y(t_m))^2}\right) \\&= -\frac{\lambda \xi }{\sigma _\mu ^4} + \lambda \left( \frac{t_m}{\sigma _\mu ^2} + \frac{2\xi }{\sigma _\mu ^4}\right) \mathrm{E}\left( \frac{1}{1+\sigma _\mu ^2 Y(t_m)}\right) \\&\quad - \lambda \left( \frac{t_m}{\sigma _\mu ^2} + \frac{\xi }{\sigma _\mu ^4}\right) \mathrm{E}\left( \frac{1}{(1+\sigma _\mu ^2 Y(t_m))^2}\right) \\&= \frac{-t_m}{(\xi + \sigma _\mu ^2 t_m)^2}, \\ \mathrm{E}\left( -\frac{\partial ^2\mathcal {L}(\varvec{\theta })}{\partial \xi \partial \lambda }\right)&= \mathrm{E}\left( \frac{\xi Y(t_m) - t_m}{1+\sigma _\mu ^2 Y(t_m)}\right) = \frac{\xi }{\sigma _\mu ^2}- \left( \frac{\xi }{\sigma _\mu ^2} + t_m\right) \mathrm{E}\left( \frac{1}{1+\sigma _\mu ^2 Y(t_m)}\right) = 0, \\ \mathrm{E}\left( -\frac{\partial ^2\mathcal {L}(\varvec{\theta })}{\partial (\sigma _\mu ^2)^2}\right)&= \frac{-1}{2}\mathrm{E}\left( \frac{Y(t_m)^2}{(1+\sigma _\mu ^2 Y(t_m))^2}\right) + \lambda \mathrm{E}\left( \frac{Y(t_m)(\xi Y(t_m) - t_m)^2}{(1+\sigma _\mu ^2 Y(t_m))^3}\right) \\&= \frac{\lambda \xi ^2}{\sigma _\mu ^6} - \frac{1}{2\sigma _\mu ^4} + \left( \frac{1}{\sigma _\mu ^4} - \frac{2\xi \lambda t_m}{\sigma _\mu ^4} - \frac{3\xi ^2\lambda }{\sigma _\mu ^6}\right) \mathrm{E}\left( \frac{1}{1+\sigma _\mu ^2 Y(t_m)}\right) \\&\quad + \left( \frac{\lambda t_m^2}{\sigma _\mu ^2} + \frac{4\xi \lambda t_m}{\sigma _\mu ^4} + \frac{3\xi ^2\lambda }{\sigma _\mu ^6} - \frac{1}{2\sigma _\mu ^4} \right) \mathrm{E}\left( \frac{1}{(1+\sigma _\mu ^2 Y(t_m))^2}\right) \\&\quad - \lambda \left( \frac{t_m^2}{\sigma _\mu ^2} + \frac{2\xi t_m}{\sigma _\mu ^4} + \frac{\xi ^2}{\sigma _\mu ^6} \right) \mathrm{E}\left( \frac{1}{(1+\sigma _\mu ^2 Y(t_m))^3}\right) \\&= \frac{t_m(\lambda \sigma _\mu ^2 t_m^2 + \xi \lambda t_m + 5)}{2\lambda (\xi + \sigma _\mu ^2 t_m)^3}, \\ \mathrm{E}\left( -\frac{\partial ^2\mathcal {L}(\varvec{\theta })}{\partial \sigma _\mu ^2\partial \lambda }\right)&= \frac{-1}{2}\mathrm{E}\left( \frac{(t_m - \xi Y(t_m))^2}{(1+\sigma _\mu ^2 Y(t_m))^2}\right) \\&= \frac{-\xi ^2}{2\sigma _\mu ^4} + \left( \frac{\xi t_m}{\sigma _\mu ^2} + \frac{\xi ^2}{\sigma _\mu ^4}\right) \mathrm{E}\left( \frac{1}{1+\sigma _\mu ^2 Y(t_m)}\right) \\&\quad - \frac{1}{2} \left( t_m + \frac{\xi }{\sigma _\mu ^2}\right) ^2 \mathrm{E}\left( \frac{1}{(1+\sigma _\mu ^2 Y(t_m))^2}\right) \\&= \frac{-t_m}{2\lambda (\xi + \sigma _\mu ^2 t_m)},\\ \mathrm{E}\left( -\frac{\partial ^2\mathcal {L}(\varvec{\theta })}{\partial \lambda ^2}\right)&= \frac{m}{2\lambda ^2}. \end{aligned}$$

In addition, the FIM, \(\mathcal {I}(\varvec{\theta })\), is positive definite, which can be checked by Sylvester’s criterion. Consequently, the theorem can be established. \(\square \)
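These partial-fraction computations can be verified symbolically. The sketch below uses SymPy to check two representative FIM entries: combining the moments of Theorem 6.1 must reproduce the stated closed forms (here `s2` stands for \(\sigma _\mu ^2\)).

```python
import sympy as sp

# Symbolic check of two FIM entries via the moments of Theorem 6.1.
xi, s2, lam, t = sp.symbols('xi s2 lam t_m', positive=True)
a = xi + s2 * t

M1 = xi / a                                  # E[(1+s2*Y)^-1], Theorem 6.1(i)
M2 = xi**2/a**2 + s2**2*t/(lam*a**3)         # E[(1+s2*Y)^-2], Theorem 6.1(ii)

# (xi, sigma_mu^2) entry: should equal -t_m/(xi + s2*t_m)^2.
e1 = -lam*xi/s2**2 + lam*(t/s2 + 2*xi/s2**2)*M1 - lam*(t/s2 + xi/s2**2)*M2
res1 = sp.simplify(e1 + t/a**2)

# (sigma_mu^2, lambda) entry: should equal -t_m/(2*lam*(xi + s2*t_m)).
e2 = -xi**2/(2*s2**2) + (xi*t/s2 + xi**2/s2**2)*M1 \
     - sp.Rational(1, 2)*(t + xi/s2)**2*M2
res2 = sp.simplify(e2 + t/(2*lam*a))

print(res1, res2)   # 0 0
```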

Appendix 2: Proof of Theorem 3.1

Proof

(i) The proof is straightforward. (ii) (Necessity) Taking the first derivative of \(h(t_m)\) with respect to the termination time \(t_m\), we obtain

$$\begin{aligned} \frac{\text{ d } h(t_m)}{\text{ d }t_m} = \frac{3t_m n^3(\lambda \xi \sigma _\mu ^2(m-1)t_m^2 + (\lambda \xi ^2(m-1) - 2m\sigma _\mu ^2) t_m + 2\xi m)}{4\lambda ^2(\sigma _\mu ^2 t_m + \xi )^5}. \end{aligned}$$
(18)

Setting the derivative to zero and solving the equation, we have \(t_m = (2m\sigma _\mu ^2 - \lambda \xi ^2(m-1) \pm \sqrt{K_0})/(2\sigma _\mu ^2\lambda \xi (m-1))\). Clearly, the discriminant of the quadratic equation in \(t_m\) in the numerator of (18) is \(K_0\), i.e.,

$$\begin{aligned} K_0&= (\xi \sqrt{\lambda (m-1)} - 2\sigma _\mu \sqrt{m}- \sqrt{2m}\sigma _\mu )(\xi \sqrt{\lambda (m-1)} - 2\sigma _\mu \sqrt{m}+ \sqrt{2m}\sigma _\mu ) \nonumber \\&\quad \times (\lambda \xi ^2(m-1) + 4\xi \sigma _\mu \sqrt{\lambda m(m-1)} + 2m\sigma _\mu ^2). \end{aligned}$$
(19)

By Vieta’s formula, the product of the two roots is \(2m/(\lambda \sigma _\mu ^2(m-1)) > 0\), implying that the signs of the two roots are the same. Hence, if the condition \({\xi \sqrt{\lambda }}/{\sigma _\mu } < (2\sqrt{m} - \sqrt{2m})/\sqrt{m-1}\) holds, then it implies

$$\begin{aligned}&\lambda \xi ^2(m-1) -4\xi \sigma _\mu \sqrt{\lambda m(m-1)} + 2m\sigma _\mu ^2> 0\ \text{(i.e., } K_0> 0)\quad \ \text{ and }\\&\quad 2m\sigma _\mu ^2 - \lambda \xi ^2(m-1) > 0. \end{aligned}$$

This means that the quadratic equation has two distinct positive real roots, \((2m\sigma _\mu ^2 - \lambda \xi ^2(m-1) - \sqrt{K_0})/(2\sigma _\mu ^2\lambda \xi (m-1))\) and \((2m\sigma _\mu ^2 - \lambda \xi ^2(m-1) + \sqrt{K_0})/(2\sigma _\mu ^2\lambda \xi (m-1))\), which are the local maximum and minimum points, respectively. In addition, it is easy to verify that \(\lim _{t_m \rightarrow \infty } h(t_m) = n^3(m-1)/(4\lambda \sigma _\mu ^6)\). The remaining part is straightforward and is omitted.

(Sufficiency) If the D-optimum termination time is \(t^*_m \in (0, \infty )\), then \(K_0\) must be positive, since no positive stationary point exists when \(K_0 \le 0\). For \(K_0>0\), consider the two cases \(\xi \sqrt{\lambda (m-1)}/\sigma _\mu - 2 \sqrt{m} < 0\) and \(> 0\). In the first case, \(K_0>0\) implies \(\xi \sqrt{\lambda (m-1)}/\sigma _\mu - 2 \sqrt{m} < - \sqrt{2 m}\) by (19). In the second case, \(K_0>0\) implies \(\xi \sqrt{\lambda (m-1)}/\sigma _\mu - 2 \sqrt{m} > \sqrt{2 m}\), i.e., \({2m}\sigma _\mu ^2 - \lambda \xi ^2(m-1) < 0\), which yields the contradiction \(t^*_m < 0\). Hence, \(K_0 > 0\) implies \(\xi \sqrt{\lambda (m-1)}/\sigma _\mu - 2 \sqrt{m} < 0\) and \(\xi \sqrt{\lambda (m-1)}/\sigma _\mu - 2 \sqrt{m} < -\sqrt{2 m}\), which is condition (a). In addition, \(\lim _{t_m \rightarrow \infty } h(t_m) = n^3(m-1)/(4\lambda \sigma _\mu ^6)\) and \(\text{ d }h(t_m)/\text{d }t_m > 0\) for sufficiently large \(t_m\). For \(t^*_m\) to be a maximum point, \(h(t^*_m) \ge n^3(m-1)/(4 \lambda \sigma _\mu ^6)\) must hold; otherwise \(h\) eventually exceeds \(h(t^*_m)\) as \(t_m\) grows. Therefore, \(t^*_m\) can be the unique maximum point for \(0< t_m < \infty \) even when \(h(t^*_m) = n^3(m-1)/(4\lambda \sigma _\mu ^6)\). The proof is complete. \(\square \)
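The argument can be illustrated numerically. The sketch below uses the objective implied by \(h_0(t_m, n, m)\) in Appendix 3 with \(n\) held fixed, together with hypothetical planning values satisfying condition (a); it locates the stationary points from the quadratic in (18) and compares the local maximum with the limit of \(h\).

```python
import numpy as np

# Numeric sketch of Theorem 3.1(ii); planning values are hypothetical.
xi, sig2, lam, m, n = 0.3, 1.0, 1.0, 10, 1

def h(t):
    # h_0(t_m, n, m) from Appendix 3 with n held fixed
    a = sig2 * t + xi
    return n**3 * (3*m + lam*t*(m - 1)*a) * t**2 / (4 * lam**2 * a**4)

# Stationary points: roots of the quadratic in the numerator of (18).
A = lam * xi * sig2 * (m - 1)
B = lam * xi**2 * (m - 1) - 2 * m * sig2
C = 2 * xi * m
K0 = B**2 - 4 * A * C                  # K_0, the discriminant in (19)
r1, r2 = sorted(np.roots([A, B, C]).real)   # local max at r1, local min at r2

h_inf = n**3 * (m - 1) / (4 * lam * sig2**3)   # limit of h as t_m -> infinity
print(K0 > 0)          # True: condition (a) holds for these values
print(h(r1) > h_inf)   # True: the local maximum r1 is a finite D-optimum t_m*
```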

Appendix 3: Proof of Theorem 3.2

Proof

  1. (i)

    Substituting \(n = (C_b-C_{op}t_m)/(C_{mea}m + C_{it})\) into \(h_0(t_m, n, m)\) gives the profile objective function:

    $$\begin{aligned} h_{1}(t_m, m) = \left( \frac{C_b-C_{op}t_m}{C_{mea}m + C_{it}}\right) ^3 \frac{(3m + \lambda t_m (m-1)(\sigma _\mu ^2 t_m + \xi ))t_m^2}{4\lambda ^2(\sigma _\mu ^2 t_m + \xi )^4}. \end{aligned}$$

    Taking the first derivative with respect to the number of measurements m, we have

    $$\begin{aligned}&\frac{\partial h_{1}(t_m, m)}{\partial m} \\&\quad = \frac{t^2_m (C_b-C_{op}t_m)^3 (3C_{it}+\lambda t_m(\sigma _\mu ^2 t_m + \xi )(C_{it} + 3C_{mea}) - 2C_{mea}(\lambda t_m(\sigma _\mu ^2 t_m + \xi )+3)m)}{4\lambda ^2(\sigma _\mu ^2 t_m + \xi )^4(C_{mea}m + C_{it})^4}. \end{aligned}$$

    Solving the equation \(\partial h_{1}(t_m,m)/\partial m = 0\) for m and checking the roots for feasibility, we obtain \(m(t_m)\) and \(n(t_m)\) as shown in (8). Again, substituting \(m(t_m)\) into \(h_{1}(t_m, m)\) gives the profile objective function in (7). By using Proposition 1(i) in Supplementary Section 3, the result of the D-optimum test plan follows.

  2. (ii)

    Substituting \(m = 1\) and \(n = (C_b-C_{op}t_m)/(C_{mea} + C_{it})\) into \(h_0(t_m, n, m)\) gives the objective function

    $$\begin{aligned} \tilde{h}_1(t_m) = \frac{3(C_b-C_{op}t_m)^3 t_m^2}{4\lambda ^2(\sigma _\mu ^2 t_m + \xi )^4(C_{mea} + C_{it})^3}. \end{aligned}$$

    Taking the first derivative with respect to the termination time \(t_m\), we have

    $$\begin{aligned} \frac{\text{ d } \tilde{h}_1(t_m)}{\text{ d }t_m} = -\frac{3t_m(C_b-C_{op}t_m)^2(C_{op}\sigma _\mu ^2 t_m^2 + (5C_{op}\xi + 2C_b\sigma _\mu ^2)t_m - 2C_b\xi )}{4\lambda ^2(\sigma _\mu ^2 t_m + \xi )^5(C_{mea} + C_{it})^3}. \end{aligned}$$

    Solving the equation \(\text{ d } \tilde{h}_1(t_m)/\text{d }t_m = 0\), we obtain four roots: \(t_m = 0\), \(C_b/C_{op}\), and \(\left( -5C_{op}\xi - 2C_b\sigma _\mu ^2 \pm \sqrt{4C_b^2\sigma _\mu ^4 + 28 C_bC_{op}\xi \sigma _\mu ^2 + 25C_{op}^2\xi ^2}\right) /(2C_{op}\sigma _\mu ^2)\). Checking the roots for feasibility with respect to the constraints (i.e., \(0< t_m < (C_b-C_{mea} - C_{it})/{C_{op}}\)), we obtain \(t_m^*\) and \(n^*\) as shown in (10). It is then easy to check that \(\text{ d } \tilde{h}_1(t_m)/\text{d }t_m < 0\) for \(t_m > t_m^*\) and \(\text{ d } \tilde{h}_1(t_m)/\text{d }t_m > 0\) for \( 0< t_m < t_m^*\). By Proposition 1(ii) in Supplementary Section 3, the condition in (9) is equivalent to \(t_m^* < (C_b- C_{mea} - C_{it})/C_{op}\) and \(n^{*} > 1\).

  3. (iii)

    Substituting \(n = 1\) and \(m = (C_b - C_{it} - C_{op}t_m)/C_{mea}\) into \(h_0(t_m, n, m)\) gives the objective function in (11). By using Proposition 1(iii) in Supplementary Section 3, the sufficient condition \(0< t_D < (C_b - C_{mea} - C_{it})/C_{op}\) is equivalent to \(1< m^* < (C_b-C_{it})/C_{mea}\).

The last case (iv) is trivial. The proof is complete. \(\square \)
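The profile idea in case (i) can be sketched in a few lines: the budget constraint fixes \(n\) given \((t_m, m)\), and the stationarity condition \(\partial h_1/\partial m = 0\) is linear in \(m\), so only a one-dimensional search over \(t_m\) remains. All cost and planning values below are hypothetical.

```python
# Profile sketch for Theorem 3.2(i); all cost/planning values hypothetical.
xi, sig2, lam = 1.0, 0.5, 2.0
C_b, C_op, C_mea, C_it = 100.0, 1.0, 2.0, 10.0

def m_of_t(t):
    # Root of dh_1/dm = 0; the numerator displayed above is linear in m.
    q = lam * t * (sig2 * t + xi)
    return (3*C_it + q*(C_it + 3*C_mea)) / (2*C_mea*(q + 3))

def h1(t, m):
    # h_1(t_m, m) after substituting the budget constraint for n
    n = (C_b - C_op*t) / (C_mea*m + C_it)
    a = sig2*t + xi
    return n**3 * (3*m + lam*t*(m - 1)*a) * t**2 / (4*lam**2*a**4)

t = 1.0                      # here lam*t*(sig2*t + xi) = 3, so m(t) = 78/24
m_star = m_of_t(t)
print(m_star)                                # 3.25
print(h1(t, m_star) >= h1(t, 0.99*m_star))   # True: m(t) maximizes h_1 in m
print(h1(t, m_star) >= h1(t, 1.01*m_star))   # True
```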

Appendix 4: Proof of Theorem 3.3

Proof

  1. (i)

    Taking the first derivative of \(g(t_m)\) with respect to the termination time \(t_m\) and setting it to zero, we get \(\alpha _{13} t_m^3 + \alpha _{12} t_m^2 - \alpha _{10} = 0\), where the \(\alpha _{1i}\), \(i = 0, 2, 3\), are defined in Theorem 3.3(i). It is easy to see that \(\alpha _{10} > 0\) and \(\alpha _{13} > 0\) for \(\xi > 0\). If \(\alpha _{12} > 0\), then using Lemma 1 in Supplementary Section 5.1 of Peng and Cheng (2021), the unique positive root \(t_m\) in (12) can be obtained immediately by checking feasibility. If \(\alpha _{12} < 0\), then by Descartes’ rule of signs (see Polya and Szegö 1997), the cubic equation has only one positive real root \(t_m^*\) because of the positive discriminant (i.e., \(27\alpha _{13}^2\alpha _{10} \ge 4\alpha _{12}^3\)).

  2. (ii)

    By elementary calculations, we have \(\text{ d } g(t_m)/\text{d } t_m = 0\), which is equivalent to the quartic equation defined in Theorem 3.3(ii). In addition, it can be verified that

    $$\begin{aligned} \lim _{t_m \rightarrow \infty } g(t_m) = \frac{2\lambda ^2\zeta _3(2\sigma _\mu ^2\zeta _2 + \lambda \zeta _3) + 2\lambda \sigma _\mu ^4\zeta _2^2 m + (m-1)\sigma _\mu ^2\zeta _1^2}{n\lambda (m-1)}. \end{aligned}$$

    Hence, if the conditions in Theorem 3.3(ii) are satisfied, the result holds immediately. \(\square \)
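The sign-change argument in (i) is easy to confirm numerically: with \(\alpha _{13}, \alpha _{10} > 0\), the coefficient sequence of \(\alpha _{13}t_m^3 + \alpha _{12}t_m^2 - \alpha _{10}\) has exactly one sign change whatever the sign of \(\alpha _{12}\), so Descartes' rule gives exactly one positive real root. The coefficient values below are arbitrary illustrations, not the \(\alpha _{1i}\) of Theorem 3.3.

```python
import numpy as np

# Count positive real roots of a13*t^3 + a12*t^2 - a10 for a13, a10 > 0.
def n_positive_roots(a13, a12, a10):
    roots = np.roots([a13, a12, 0.0, -a10])
    return sum(1 for r in roots if abs(r.imag) < 1e-9 and r.real > 0)

# Illustrative coefficients with a12 of either sign.
cases = [(1.0, 2.0, 3.0), (1.0, -2.0, 3.0), (2.0, -5.0, 1.0), (0.5, 4.0, 0.7)]
counts = [n_positive_roots(*c) for c in cases]
print(counts)   # [1, 1, 1, 1]
```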

Appendix 5: Proof of Theorem 3.4

Proof

  1. (i)

    Substituting \(n = (C_b-C_{op}t_m)/(C_{mea}m + C_{it})\) into \(g_0(t_m, n, m)\) gives the profile objective function:

    $$\begin{aligned} g_{1}(t_m, m)&= (C_{mea}m + C_{it}) \{(\sigma _\mu ^2 t_m + \xi )(2(\zeta _1 + \zeta _2\lambda (\sigma _\mu ^2 t_m + \xi ))^2 \\&\quad + \zeta _1^2(3 + \lambda t_m(\sigma _\mu ^2 t_m + \xi )) )m \\&\quad + \lambda t_m (4\zeta _1\zeta _3\lambda (\sigma _\mu ^2 t_m + \xi ) - \zeta _1^2(\sigma _\mu ^2 t_m + \xi )^2 + 2\zeta _3\lambda ^2(2\zeta _2(\sigma _\mu ^2 t_m + \xi )^2 \\&\quad + \zeta _3(3 + \lambda t_m(\sigma _\mu ^2 t_m + \xi ))))\}\\&\quad /\{\lambda t_m (3m + \lambda t_m (m-1)(\sigma _\mu ^2 t_m + \xi ))(C_b-C_{op}t_m)\}. \end{aligned}$$

    By straightforward calculations, we have \(\partial g_{1}(t_m,m)/\partial m = 0\), which is equivalent to the following quadratic equation

    $$\begin{aligned}&C_{mea} (\sigma _\mu ^2 t_m + \xi ) (3 + \lambda t_m (\sigma _\mu ^2 t_m + \xi )) ( 2 (\zeta _1 + \zeta _2 \lambda (\sigma _\mu ^2 t_m + \xi ))^2\\&\quad + \zeta _1^2 (3 + \lambda t_m (\sigma _\mu ^2 t_m + \xi ))) m^2 \\&\quad - 2 C_{mea} \lambda t_m (\sigma _\mu ^2 t_m + \xi )^2 ( 2 (\zeta _1 + \zeta _2 \lambda (\sigma _\mu ^2 t_m + \xi ))^2 + \zeta _1^2 (3 + \lambda t_m (\sigma _\mu ^2 t_m + \xi ))) m \\&\quad + \lambda t_m \{C_{mea} \lambda t_m (\sigma _\mu ^2 t_m + \xi )\{ (2\zeta _3 \lambda - \zeta _1 (\sigma _\mu ^2 t_m + \xi ))^2 - 2 \zeta _3 \lambda ^2 (2 \zeta _2 (\sigma _\mu ^2 t_m + \xi )^2 \\&\quad + \zeta _3 (5 + t_m \lambda (\sigma _\mu ^2 t_m + \xi )))\} \\&\quad - 2 C_{it} (3 \zeta _3 \lambda + (\sigma _\mu ^2 t_m + \xi ) (\zeta _1 + \lambda (\zeta _2 \sigma _\mu ^2 t_m + \zeta _3 t_m \lambda + \zeta _2 \xi )))^2\} = 0. \end{aligned}$$

    After some algebraic manipulations, it can be verified that the discriminant of the quadratic equation in m is given by

    $$\begin{aligned}&8 C_{mea} \lambda t_m (\sigma _\mu ^2 t_m + \xi ) (\lambda t_m (C_{mea}+ C_{it})(\sigma _\mu ^2 t_m + \xi ) + 3 C_{it}) \\&\quad \times \{2(\zeta _1 + \zeta _2 \lambda (\sigma _\mu ^2 t_m + \xi ))^2 + \zeta _1^2 (3 + \lambda t_m (\sigma _\mu ^2 t_m + \xi ))\} \\&\quad \times \{3 \zeta _3 \lambda + (\sigma _\mu ^2 t_m + \xi ) (\zeta _1 + \lambda (\zeta _3 \lambda t_m + \zeta _2 (\sigma _\mu ^2 t_m + \xi )))\}^2 > 0. \end{aligned}$$

    Thus it is easy to check the root \(m(t_m)\) as shown in (15) for feasibility. Again, substituting \(m(t_m)\) into \(g_{1}(t_m, m)\) gives the profile objective function in (14). The result of the V-optimum test plan follows using Proposition 1(i) in Supplementary Section 3.

  2. (ii)

    Substituting \(m = 1\) and \(n = (C_b-C_{op}t_m)/(C_{mea} + C_{it})\) into \(g_0(t_m, n, m)\) gives the objective function in (16). By using Proposition 1(ii) in Supplementary Section 3, the result follows directly.

  3. (iii)

    Substituting \(n = 1\) and \(m = (C_b - C_{it} - C_{op}t_m)/C_{mea}\) into \(g_0(t_m, n, m)\) gives the objective function in (17). We have the desired result by using Proposition 1(iii) in Supplementary Section 3.

The last case (iv) is trivial. This completes the proof. \(\square \)
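As a numeric sanity check of case (i), the discriminant displayed in the proof is a product of nonnegative factors and is strictly positive for positive parameter values; the sketch below evaluates it directly. The \(\zeta _i\), cost and planning values are hypothetical.

```python
# Evaluate the discriminant of the quadratic in m from Theorem 3.4(i):
# a product of three factors, each positive for positive parameter values.
def discriminant(t, xi, sig2, lam, z1, z2, z3, C_mea, C_it):
    a = sig2 * t + xi
    f1 = 8 * C_mea * lam * t * a * (lam * t * (C_mea + C_it) * a + 3 * C_it)
    f2 = 2 * (z1 + z2 * lam * a)**2 + z1**2 * (3 + lam * t * a)
    f3 = (3 * z3 * lam + a * (z1 + lam * (z3 * lam * t + z2 * a)))**2
    return f1 * f2 * f3

ds = [discriminant(t, xi=1.0, sig2=0.5, lam=2.0, z1=0.3, z2=0.2, z3=0.1,
                   C_mea=2.0, C_it=10.0) for t in (0.5, 1.0, 5.0, 20.0)]
print(all(d > 0 for d in ds))   # True: two real roots for every t_m > 0
```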


Cite this article

Peng, CY., Nagatsuka, H. & Cheng, YS. Optimum test planning for heterogeneous inverse Gaussian processes. Lifetime Data Anal 28, 401–427 (2022). https://doi.org/10.1007/s10985-022-09556-6
