
A method for selecting surrogate models in crashworthiness optimization

  • Research Paper
  • Published in Structural and Multidisciplinary Optimization

Abstract

Surrogate model or response surface based design optimization has been widely adopted as a common process in the automotive industry, because large-scale, high-fidelity models are often required. However, most surrogate models are built from a limited number of design points without considering data uncertainty. In addition, the selection of a surrogate model in the literature is often arbitrary. This paper presents a Bayesian metric to complement the root mean square error for selecting the best surrogate model among several candidates in a library under data uncertainty. A strategy for automatically selecting the best surrogate model and determining a reasonable sample size is proposed for design optimization of large-scale complex problems. Lastly, a vehicle example with full-frontal and offset-frontal impacts is presented to demonstrate the proposed methodology.


Abbreviations

DOE: Design of Experiments

SSR: Subset Selection Regression

RBF: Radial Basis Function

RBF-GP: Radial Basis Function with Gaussian basis function

RBF-MQ: Radial Basis Function with multiquadric basis function

ULHS: Uniform Latin Hypercube Sampling

RMSE: Root Mean Square Error

N: Number of design variables

X: Input vector

Y: Output vector

y: Response data

δ: Random factor

\(\boldsymbol{\lambda}\): Vector of surrogate model parameters

F, A, B: Surrogate models

b: Parameter of surrogate model B

D: Simulation data

I: Any prior information

\(prob(\cdot)\): Probability measure

m: Number of model parameters

n: Sample size

a: Parameter of surrogate model A

\(\hat{\bf a}\): Parameter space of surrogate model A

a min: Minimum value of the parameter of surrogate model A

a max: Maximum value of the parameter of surrogate model A

a opt: Optimized parameter in the space \(\hat{\bf a}\)

σ: Data uncertainty

ε: Random Gaussian noise

K: Matrix of the Bayesian metric

K i,j: Element of matrix K

lnQ: Bayesian metric

x i: ith basis function center

ψ: RBF basis function

r: Euclidean distance

λ: Regularization parameter

E: Identity matrix

α: Model parameter of the RBF basis function

References

  • Apley D, Liu J, Chen W (2006) Understanding the effects of model uncertainty in robust design with computer experiments. ASME J Mech Des 128:945–958

  • Fang H, Rais-Rohani M, Liu Z, Horstemeyer MF (2005) A comparative study of metamodeling methods for multiobjective crashworthiness optimization. Comput Struct 83(25–26):2121–2136

  • Forrester AIJ, Keane AJ (2009) Recent advances in surrogate-based optimization. Prog Aerosp Sci 45:50–79

  • Forrester AIJ, Sobester A, Keane AJ (2008) Engineering design via surrogate modelling: a practical guide. John Wiley & Sons, Chichester, pp 45–49

  • Gearhart C, Wang BP (2001) Bayesian metrics for comparing response surface models of data with uncertainty. Struct Multidisc Optim 22:198–207

  • Gu L (2001) A comparison of polynomial based regression models in vehicle safety analysis. In: ASME design engineering technical conferences. ASME paper no.: DETC/DAC-21083

  • Gu L, Yang RJ, Tho CH, Makowskit M, Faruquet Q, Li Y (2001) Optimisation and robustness for crashworthiness of side impact. Int J Veh Des 26:348–360

  • Hardy RL (1971) Multiquadric equations of topography and other irregular surfaces. J Geophys Res 76:1905–1915

  • Keane AJ, Nair PB (2005) Computational approaches to aerospace design: the pursuit of excellence. Wiley, Chichester

  • Kurtaran H, Eskandarian A, Marzougui D, Bedewi NE (2002) Crashworthiness design optimization using successive response surface approximations. Comput Mech 29(4–5):409–421

  • Levenberg K (1944) A method for the solution of certain problems in least squares. Q Appl Math 2:164–168

  • Marquardt D (1963) An algorithm for least-squares estimation of nonlinear parameters. SIAM J Appl Math 11:431–441

  • Mizuno K, Arai Y, Yamazaki K, Kubota H, Yonezawa H, Hosokawa N (2008) Effectiveness and evaluation of SEAS of SUV in frontal impact. Int J Crashworthiness 13:533–541

  • modeFRONTIER 4.2 user manual. Version 4.2, ESTECO Corporation (2010)

  • Myers RH (1990) Classical and modern regression with application. Duxbury Press, Boston

  • National Crash Analysis Center (NCAC) (2001) Public finite element model archive. www.ncac.gwu.edu/archives/model/index.html

  • Oberkampf WL, Diegert KV, Alvin KF, Rutherford BM (1998) Variability, uncertainty, and error in computational simulation. In: ASME proc. 7th AIAA/ASME joint thermophysics and heat transfer conference. ASME-HTD-Vol. 357-2

  • Pan F, Zhu P (2011) Design optimization of vehicle roof structures: benefits of using multiple surrogates. Int J Crashworthiness 16:85–95

  • Poggio T, Girosi F (1990) Regularization algorithms for learning that are equivalent to multilayer networks. Science 247(4945):978–982

  • Sivia DS, Skilling J (2006) Data analysis: a Bayesian tutorial, 2nd edn. Oxford University Press, New York

  • Sobieski J, Kodiyalam S, Yang RJ (2001) Optimization of car body under constraints of noise, vibration and harshness (NVH) and crash. Struct Multidisc Optim 22:295–306

  • Stander N, Roux W, Giger M, Redge M, Fedorova N, Haarhoff J (2004) A comparison of metamodeling techniques for crashworthiness optimization. In: Proceedings of the 10th AIAA/ISSMO multidisciplinary analysis and optimization conference. Paper no.: AIAA-2004-4489

  • Yang RJ, Akkerman A, Anderson DF, Faruque OM, Gu L (2000) Robustness optimization for vehicular crash simulation. Comput Sci Eng 2(6):8–13

  • Yang RJ, Wang N, Tho CH, Bobineau JP (2005) Metamodeling development for vehicle frontal impact simulation. ASME J Mech Des 127(5):1014–1020

  • Zhu P, Zhang Y, Chen GL (2009) Metamodel-based lightweight design of an automotive front-body structure using robust optimization. Proc Inst Mech Eng. Part D J Automob Eng 223:1133–1147

Acknowledgements

The authors acknowledge the grant support from the National Natural Science Foundation of China (grant no. 50875146). The authors also thank Drs. Guosong Li and Ping Chen of Ford Motor Company for their kind help in this study.

Author information

Correspondence to R. J. Yang.

Appendices

Appendix A: Review of SSR and RBF surrogate models

The radial basis function (RBF) was first introduced by Hardy in 1971 to fit irregular topographic contours of geographical data (Hardy 1971). Many researchers have compared RBF models to other surrogate models, such as Kriging and support vector regression. Similar to Kriging, an RBF model obtains an approximate response function as a linear combination of radially symmetric functions based on Euclidean distance. A classical radial basis function model A(x, a) fitted to noise-free data (Forrester et al. 2008) is expressed as:

$$ A({{\bf x,a}})={\bf a}^T{{\boldsymbol \uppsi }}=\sum\limits_{i=1}^n {a_i } \psi \left(\left\| {{{\bf x}}-{{\bf x}}_i } \right\|\right) $$
(13)

where x i denotes the ith of the n basis function centers and \(\boldsymbol{\uppsi} \) is the n-vector containing the values of the basis functions ψ, evaluated at the Euclidean distance between the prediction site x and the center x i of each basis function. Different types of basis function are discussed in Forrester et al. (2008). In this paper, the study is restricted to the following two basis functions:

$$ \mbox{Gaussian}:\psi (r)=\exp (-r^2/2\alpha^2) $$
(14)
$$ \mbox{Multiquadric}:\psi (r)=(r^2+\alpha^2)^{1/2} $$
(15)

where α is the model parameter and \(r=\left\| {{{\bf x}}-{{\bf x}}_i } \right\|\) is the Euclidean distance. Details on the algorithm for solving (13) can be found in Forrester et al. (2008). If the response data are corrupted by noise, Poggio and Girosi (1990) introduced a regularization parameter λ, which is added to the main diagonal of the matrix ψ. As a result, the approximation no longer passes through the training points and a becomes the least-squares solution of (16).

$$ {\bf a}=({{\boldsymbol{\uppsi} }}+\lambda {{\bf E}})^{-1}{{\bf y}} $$
(16)

where E is an n × n identity matrix and y is the vector of response data. Keane and Nair (2005) gave a detailed discussion on solving for the parameter λ.
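To make (13)–(16) concrete, the following Python sketch fits a regularized RBF surrogate with either the Gaussian or the multiquadric basis. It is a minimal illustration of the equations above, not the authors' implementation; the synthetic data, the fixed shape parameter α, and the regularization value λ are assumptions chosen purely for demonstration.

```python
import numpy as np

def gaussian(r, alpha):
    """Gaussian basis function, Eq. (14): psi(r) = exp(-r^2 / (2 alpha^2))."""
    return np.exp(-r**2 / (2.0 * alpha**2))

def multiquadric(r, alpha):
    """Multiquadric basis function, Eq. (15): psi(r) = (r^2 + alpha^2)^(1/2)."""
    return np.sqrt(r**2 + alpha**2)

def fit_rbf(X, y, basis, alpha, lam=0.0):
    """Solve (Psi + lam * E) a = y for the RBF weights a, cf. Eqs. (13) and (16).

    X   : (n, N) array of training sites, used as the basis function centers x_i
    y   : (n,) array of response data
    lam : regularization parameter lambda; lam = 0 gives exact interpolation
    """
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    Psi = basis(r, alpha)
    return np.linalg.solve(Psi + lam * np.eye(len(y)), y)

def predict_rbf(X_new, X, a, basis, alpha):
    """Evaluate A(x, a) = sum_i a_i psi(||x - x_i||) of Eq. (13) at new sites."""
    r = np.linalg.norm(X_new[:, None, :] - X[None, :, :], axis=-1)
    return basis(r, alpha) @ a

# Assumed demonstration data: 20 samples of a noisy one-dimensional response.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(20, 1))
y = np.sin(6.0 * X[:, 0]) + 0.05 * rng.standard_normal(20)
a = fit_rbf(X, y, gaussian, alpha=0.2, lam=1e-3)       # regularized Gaussian RBF
y_hat = predict_rbf(X, X, a, gaussian, alpha=0.2)      # predictions at the training sites
```

With λ = 0 the weights interpolate the training data exactly; a positive λ reproduces the regularized behaviour of (16), in which the surrogate no longer passes through the training points.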

Subset selection regression (SSR) is useful for two reasons: variance reduction and simplicity (Gu 2001). Various procedures have been used to find the best subset of a series of predictor terms. Gu (2001) proposed a sequential replacement algorithm in conjunction with a residual sum of squares (RSS) checking criterion to obtain the best-fitting regression model. The basic idea is that, once two or more terms have been selected, it is checked whether any of those terms can be replaced with another term that gives a smaller RSS (Myers 1990). The procedure must converge because each replacement reduces the RSS, which is bounded below. The sequential replacement algorithm is normally used in conjunction with stepwise selection: it is obtained by taking stepwise selection and applying a replacement procedure after each new term is added. The theory and application of the SSR method are summarized in Gu (2001).
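The sketch below illustrates the sequential replacement idea in Python. It is a generic reimplementation under assumed inputs (a candidate design matrix X, a response vector y, and a subset size k), not Gu's (2001) code; each accepted swap strictly lowers the RSS, which is what guarantees termination.

```python
import numpy as np

def rss(X, y, cols):
    """Residual sum of squares of a least-squares fit using the given columns."""
    coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    resid = y - X[:, cols] @ coef
    return float(resid @ resid)

def sequential_replacement(X, y, k):
    """Choose k columns of X by stepwise addition plus sequential replacement."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        # Stepwise addition: bring in the term that gives the smallest RSS.
        best = min(remaining, key=lambda j: rss(X, y, selected + [j]))
        selected.append(best)
        remaining.remove(best)
        # Replacement sweep: swap a selected term for an unselected one
        # whenever the swap reduces the RSS, and repeat until no swap helps.
        improved = True
        while improved:
            improved = False
            for i, old in enumerate(selected):
                for new in remaining:
                    trial = selected[:i] + [new] + selected[i + 1:]
                    if rss(X, y, trial) + 1e-12 < rss(X, y, selected):
                        selected = trial
                        remaining.remove(new)
                        remaining.append(old)
                        improved = True
                        break
                if improved:
                    break
    return selected

# Hypothetical example: recover the 3 active terms out of 8 candidates.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 8))
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.5 * X[:, 6] + 0.1 * rng.standard_normal(50)
print(sequential_replacement(X, y, k=3))   # typically returns columns 0, 3 and 6
```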

Appendix B: Derivation of matrix K for RBF

According to (8), the matrix K is composed of two terms, as follows:

$$ \begin{array}{rll} {{\bf K}}_1 &=&\sum\limits_{k=1}^n {\left[ {\frac{\partial A(x_k ,{\bf a})}{\partial a_i }\frac{\partial A(x_k ,{\bf a})}{\partial a_j }} \right]}\\ &=&\sum\limits_{k=1}^n {\left[ {{\begin{array}{*{20}c} {\frac{\partial A(x_k ,{\bf a})}{\partial a_1 }} \hfill \\ \vdots \hfill \\ {\frac{\partial A(x_k ,{\bf a})}{\partial a_n }} \hfill \\ {\frac{\partial A(x_k ,{\bf a})}{\partial \alpha }} \hfill \\ \end{array} }} \right]}\\ &&\times{\left[ {{\begin{array}{*{20}c} {\frac{\partial A(x_k ,{\bf a})}{\partial a_1 }} \hfill & \cdots \hfill & {\frac{\partial A(x_k ,{\bf a})}{\partial a_n }} \hfill &{\frac{\partial A(x_k ,{\bf a})}{\partial \alpha }} \hfill \end{array} }} \right]} \end{array} $$
(17)
$$ \begin{array}{rll} {{\bf K}}_2 &=&\sum\limits_{k=1}^n {\left[ {(y_k -A(x_k ,{\bf a}))\frac{\partial^2A(x_k ,{\bf a})}{\partial a_i \partial a_j }} \right]}\\ &=&\sum\limits_{k=1}^n {(y_k -A(x_k ,{\bf a}))\left[ {\frac{\partial^2A(x_k ,{\bf a})}{\partial a_i \partial a_j }} \right]} \\ &=&\sum\limits_{k=1}^n {(y_k -A(x_k ,{\bf a}))}\\ &&\times {\left[ {{\begin{array}{*{20}c} 0 \hfill & \cdots \hfill & 0 \hfill & {\frac{\partial^2A(x_k ,{\bf a})}{\partial a_1 \partial \alpha }} \hfill \\ \hfill & \ddots \hfill & \vdots \hfill & \vdots \hfill \\ \hfill & \hfill & 0 \hfill & {\frac{\partial^2A(x_k ,{\bf a})}{\partial a_N \partial \alpha }} \hfill \\ {\it Symmetry} \hfill & \hfill & \hfill & {\frac{\partial^2A(x_k ,{\bf a})}{\partial \alpha^2}} \hfill \end{array} }} \right]} \\ &=&{{\bf O}} \end{array} $$
(18)

Thus K = [K i,j ] = K 1 − K 2 = K 1 − O = K 1. To compute the matrix K, the derivatives \(\frac{\partial A(x_k ,{\bf a})}{\partial a_i }\) and \(\frac{\partial A(x_k ,{\bf a})}{\partial \alpha }\) in (17) for the Gaussian and multiquadric RBF basis functions are given as follows:

For the Gaussian basis function:

$$ \left\{ {{\begin{array}{l} \dfrac{\partial A(x_k ,{\bf a})}{\partial a_i }=\exp \left( {-\left\| {x-x_i } \right\|^2/2\alpha^2} \right) \\\\ \dfrac{\partial A(x_k ,{\bf a})}{\partial \alpha }=\dfrac{1}{\alpha ^3}\sum\limits_{i=1}^n {a_i (x-x_i )^2} \\\\ \,\,\,\quad\qquad\qquad\times\,\exp \left( {-\left\| {x-x_i } \right\|^2/2\alpha^2} \right) \end{array} }} \right.\;i=1,2,\cdots n. $$
(19)

For the multiquadric basis function:

$$ \left\{ {{\begin{array}{l} {\dfrac{\partial A(x_k ,{\bf a})}{\partial a_i }=\left( {\left\| {x-x_i } \right\|^2+\alpha^2} \right)^{1/2}} \\\\ {\dfrac{\partial A(x_k ,{\bf a})}{\partial \alpha }=\sum\limits_{i=1}^n {a_i \dfrac{\alpha }{\left( {\left\| {x-x_i } \right\|^2+\alpha^2} \right)^{1/2}}} } \end{array} }} \right.\;i=1,2,\cdots n. $$
(20)
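For illustration, the following Python sketch assembles K = K 1 of (17) for the Gaussian basis using the analytic derivatives in (19). The training sites, the weights a, and the shape parameter α are assumed demonstration values, and the parameter vector is ordered as (a 1, …, a n, α) to match (17); this is a sketch rather than the authors' implementation.

```python
import numpy as np

def k_matrix_gaussian(X, a, alpha):
    """Assemble K = K1 of Eq. (17) for the Gaussian RBF via the derivatives in Eq. (19).

    X : (n, N) training sites, also used as the basis centers x_i
    a : (n,) RBF weights
    The parameter vector is (a_1, ..., a_n, alpha), so K is (n+1) x (n+1).
    """
    # Squared distances ||x_k - x_i||^2 between all pairs of training points.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    psi = np.exp(-d2 / (2.0 * alpha**2))        # dA(x_k)/da_i, first line of Eq. (19)
    dA_dalpha = (psi * d2) @ a / alpha**3       # dA(x_k)/dalpha, second line of Eq. (19)
    # Gradient of A(x_k, a) w.r.t. (a_1, ..., a_n, alpha); one row per training point x_k.
    grad = np.hstack([psi, dA_dalpha[:, None]])
    # K1 = sum over k of the outer products of these gradients, Eq. (17).
    return grad.T @ grad

# Assumed demonstration values.
rng = np.random.default_rng(2)
X = rng.uniform(size=(15, 2))
a = rng.standard_normal(15)
K = k_matrix_gaussian(X, a, alpha=0.3)
print(K.shape)   # (16, 16)
```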

Cite this article

Shi, L., Yang, R.J. & Zhu, P. A method for selecting surrogate models in crashworthiness optimization. Struct Multidisc Optim 46, 159–170 (2012). https://doi.org/10.1007/s00158-012-0760-1
