Engineering Optimization, 2014
Vol. 46, No. 5, 669–686, http://dx.doi.org/10.1080/0305215X.2013.795558

Identification of vehicle suspension parameters by design optimization
J.Y. Tey (a), R. Ramli (a)*, C.W. Kheng (b), S.Y. Chong (c) and M.A.Z. Abidin (d)

(a) Advanced Computational and Applied Mechanics Research Group, Department of Mechanical Engineering,
Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia; (b) Department of Computer Science,
Faculty of Information and Communication Technology, Universiti Tunku Abdul Rahman, Jalan Universiti,
Perak, Malaysia; (c) School of Computer Science, The University of Nottingham Malaysia Campus, Malaysia;
(d) Proton Professor Office, Proton Holdings Bhd., Malaysia

(Received 7 June 2012; final version received 25 March 2013)

The design of a vehicle suspension system through simulation requires accurate representation of the
design parameters. These parameters are usually difficult to measure or sometimes unavailable. This arti-
cle proposes an efficient approach to identify the unknown parameters through optimization based on
experimental results, where the covariance matrix adaptation evolutionary strategy (CMA-es) is utilized
to improve the correlation between simulation and experimental results in the kinematic and compliance tests. This speeds
up the design and development cycle by recovering all the unknown data with respect to a set of kine-
matic measurements through a single optimization process. A case study employing a McPherson strut
suspension system is modelled in a multi-body dynamic system. Three kinematic and compliance tests
are examined, namely, vertical parallel wheel travel, opposite wheel travel and single wheel travel. The
problem is formulated as a multi-objective optimization problem with 40 objectives and 49 design param-
eters. A hierarchical clustering method based on global sensitivity analysis is used to reduce the number
of objectives to 30 by grouping correlated objectives together. Then, a dynamic summation of rank values
is used as a pseudo-objective function to reformulate the multi-objective optimization as a single-objective
optimization problem. The optimized results show a significant improvement in the correlation between
the simulated model and the experimental model. Once accurate representation of the vehicle suspension
model is achieved, further analysis, such as ride and handling performances, can be implemented for further
optimization.

Keywords: hierarchical clustering; global sensitivity analysis; design of experiments; kinematic and
compliance analysis

*Corresponding author. Email: rahizar@um.edu.my

© 2013 Taylor & Francis

1. Introduction

In the vehicle development process, experimenting with physical prototypes remains an important
task (Rauh 2003). Often, these experiments are conducted by designers to obtain parameters for
simulation and validation. However, the process of achieving accurate correlations between sim-
ulations and experiments is often difficult. According to Blundell (1997), this requires expensive
set-ups of instruments and testing facilities. It is a time-consuming task that can take up to 21
person-days per axle to complete a vehicle characterization. Once the design parameters have been
obtained, a fine-tuning process is necessary to simplify the model. This involves improving the
correlation between the simulated model and the experimental model repeatedly. Conventionally,
final tuning is performed by experienced engineers through trial-and-error experimentation based
on human perception (Kuo et al. 2008). In addition, the nature of the vehicle dynamic problem is
rather complex and highly nonlinear, so the process is time consuming.
In the past, design sensitivity analysis was introduced to analyse a mechanical system for
redesign and modification of the vehicle's passive suspension setting (Nalecz and Wicher 1988).
However, design sensitivity analysis has its limitations. It identifies the key parameters only but
does not provide the optimal solution. Other researchers have implemented design sensitivity anal-
ysis to identify the key parameters and combined them with gradient-based optimization to solve
suspension design problems (Lee, Won, and Kim 2009). However, this method has a drawback in
the context of kinematics and compliance. It often demands a set of design parameters to fulfil mul-
tiple objectives such as camber change, caster change and toe change, among others. In order to
perform a gradient-based optimization for the above setting, the multi-objective optimization
problem is usually formulated to a single-objective optimization problem through a weighted
sum model. However, the weighting required for each objective is not known beforehand and
it is difficult to determine a proper value for each objective that can produce the best global
minimum solution.
Furthermore, the gradient-based approach has drawbacks in optimizing nonlinear problems
(Datoussaid, Verlinden, and Conti 2002). Although the approach has gained a reputation in solving
various types of passive suspension system, it often requires auxiliary equations (derivation of the
kinematic equations of the suspension motion), which are usually difficult to derive and implement
owing to the complexity of the vehicle dynamic system. Rocca and Russo (2002) employed special
coding to formulate the suspension kinematic equations as part of the identification process so
that a gradient-based algorithm was able to optimize a limited number of design variables. In
general, gradient-based optimization approaches are more efficient than evolutionary algorithms,
but they could get trapped in local optima instead of the global optimum design. From this
perspective, the gradient-based approaches are less robust and lack explorative features compared
to evolutionary algorithms when employed to search for the optimal design in a large-scale
optimization problem. Furthermore, evolutionary algorithms such as evolution strategies have
advantages over classical gradient approaches in that analysis of the problem characteristic is
not required prior to the optimization process (Datoussaid, Verlinden, and Conti 2002). They
are capable of avoiding derivatives, which allows simpler implementation of the optimization
process for high-complexity vehicle models and efficiency in handling discontinuities in the design
space. This allows the optimization algorithm to be directly coupled with the existing multi-body
dynamic system in a form of generate-and-test framework, where the evolutionary algorithm
generates a new design and the objective value is evaluated through a software simulation such as
ADAMS. This process repeats until the termination condition is met, e.g. the correlation between
the simulated model and the experimental model is above a certain threshold.
In this article, a methodology is proposed that uses an evolutionary algorithm, the covariance
matrix adaptation evolutionary strategy (CMA-es), as the optimization tool. This tool was shown
to be robust in solving 25 black-box benchmark problems, as demonstrated at the Congress on Evo-
lutionary Computation (CEC) in 2005 (Auger and Hansen 2005; Hansen 2006a, 2010; García et al.
2009). In addition, it does not require any tedious parameter tuning (parameters of the algorithm
are derived statistically), except for population size, which has to be set by the user. This makes
the algorithm more efficient and suitable for engineering applications compared with the gradient-
based approach. The optimization algorithm incorporates global sensitivity analysis (GSA) and
hierarchical clustering to reduce the number of objective functions and dimensionality of the
problem by grouping similar affecting design parameters. The proposed methodology not only
allows for the recovery of the unknown design parameters but also rebuilds the physical equivalent
model in the multi-body dynamic simulation software (MSC.ADAMS) through correlation with
the experimental results.

2. Background

2.1. Global sensitivity analysis

GSA is a statistical method of evaluating the correlation between the design parameters and each
objective function (Mastinu, Gobbi, and Miano 2006). It is different from conventional sensitivity
analysis, in which the analysis of the objective function centres on small variations in the system
parameters. GSA describes the behaviour of the system when parameters are varied across broad
ranges of the entire feasible design domain. It provides dimensionless measurements on how
strongly each design parameter correlates with the corresponding objective function. GSA can be
employed for a generic objective function with a generic design parameter, as well as between two
generic objectives in order to discover the relationship between the objective functions (Mastinu,
Gobbi, and Miano 2006). In this article, GSA (Spearman's rank correlation) is employed in the
early stage to identify the relationship between design parameters and each measured objective to
give a clear picture of how each design parameter contributes to the respective measured objective.
This is performed through evaluation of a set of different design parameter combinations generated
by the Latin hypercube sampling method.
Latin hypercube is a sampling method that subdivides the design parameter domain along each
dimension into n subintervals and ensures that one sample lies in each subinterval. This method
is a more efficient approach than pseudo-random sampling and is capable of generating disperse
and uniform samples across each dimension space.
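As a rough illustration of this sampling step, the sketch below draws a Latin hypercube design for the 49 suspension parameters using SciPy's quasi-Monte Carlo module; the bounds are placeholders standing in for the ranges of Appendix 1 (Table A3), not the actual values.

import numpy as np
from scipy.stats import qmc

n_samples, n_params = 500, 49                     # sample size and parameter count used in Section 3.2
sampler = qmc.LatinHypercube(d=n_params, seed=1)
unit_samples = sampler.random(n=n_samples)        # one point per subinterval along every dimension

lower = np.full(n_params, -1.0)                   # placeholder bounds; the real ranges come from Table A3
upper = np.full(n_params, 1.0)
designs = qmc.scale(unit_samples, lower, upper)   # 500 x 49 design matrix to be evaluated in simulation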
Spearman's rank correlation is a quantitative GSA method based on rank regression analysis.
It provides robust estimation of global sensitivity as it can measure correlations of nonlinear
relationships between two sets of variables. For a sample of size n of two variables x and y, the
Spearman's rank correlation coefficient is calculated as follows:

\[
\rho = \frac{\sum_{i=1}^{n} (R_{x_i} - \bar{R}_x)(R_{y_i} - \bar{R}_y)}
{\sqrt{\sum_{i=1}^{n} (R_{x_i} - \bar{R}_x)^2 \; \sum_{i=1}^{n} (R_{y_i} - \bar{R}_y)^2}} \qquad (1)
\]

where $R_{x_i}$ and $R_{y_i}$ are the ranks and $\bar{R}_x$ and $\bar{R}_y$ are the mean values of the ranks of x and y.
The value of $\rho$ can vary between +1 and −1. Values close to +1 indicate a strong correlation
between x and y, values close to −1 indicate a strong inverse correlation, and values close to 0
indicate the absence of correlation or the presence of a non-monotonic correlation.
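A minimal sketch of Equation (1), assuming x is one sampled design parameter and y one measured objective; the variable names and synthetic data are illustrative only.

import numpy as np
from scipy.stats import rankdata, spearmanr

def spearman_rho(x, y):
    # Equation (1): correlation of the ranks of x and y
    rx, ry = rankdata(x), rankdata(y)
    rx_c, ry_c = rx - rx.mean(), ry - ry.mean()
    return np.sum(rx_c * ry_c) / np.sqrt(np.sum(rx_c**2) * np.sum(ry_c**2))

x = np.random.rand(500)
y = x**3 + 0.01 * np.random.rand(500)          # nonlinear but monotonic relationship
print(spearman_rho(x, y), spearmanr(x, y)[0])  # both values should be close to +1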

2.2. Hierarchical clustering

Cluster analysis is an important statistical method used in a variety of fields. It helps to group
either the data unit or the variables into clusters such that elements within a cluster have a high
degree of natural association among themselves while the clusters are relatively distinct from
one another (Anderberg 1973). There are several clustering methods, such as self-organizing
map (SOM), k-means, fuzzy c-mean and hierarchical clustering. However, there is no single best
clustering method that suits various types of problem (Milligan 1980; Mangiameli, Chen, and
West 1996). In this article, the idea of employing a clustering method is to group objectives that
are affected by similar design parameters. It aims to reduce redundant objectives and
simulation tests. This will also simplify the problem as the number of
objectives used in the optimization can be reduced significantly. Hierarchical clustering is chosen
because prior information found from GSA can be employed to form a cluster tree. This provides
a clear interpretation on how each objective is related to each other, which is important in the final
stage of fine-tuning the vehicle suspension.

The hierarchical clustering method groups the objectives with similar affecting design param-
eters (calculated based on Spearman's coefficient) into a cluster by creating a cluster tree or
dendrogram. In this article, an agglomerative hierarchical cluster strategy is employed. The
algorithm first calculates the distance matrix between each objective. Then, the cluster tree is
formed based on the linkage criteria (average distance, centroid distance, weighted distance,
median distance, etc.). Visualization of the cluster tree formation can be plotted using a den-
drogram. Finally, the number of clusters can be decided by the user or by using the maximum
threshold limit of linkage distance criteria to form a cluster.

2.3. Optimization and the covariance matrix adaptation evolutionary strategy

Optimization is a process of locating the design parameter x, such that the objective function
f produces minimum output. Optimization that requires maximum output can be formulated
by changing the sign of the function output (Fletcher 1987). The optimization space is usually
bounded by a hypercube:

\[
\min_{x} f(x), \qquad f: \mathbb{R}^d \rightarrow \mathbb{R},
\qquad \text{subject to} \quad x_{\mathrm{lower\,bound}} \le x \le x_{\mathrm{upper\,bound}} \qquad (2)
\]

Modern optimization assumes the underlying problem characteristics to be a black box, where
the problem is hard to solve with mathematical analysis or has no analytical solution, for example
optimization involving simulation. In evolutionary computation, this can be resolved by adap-
tive sampling of the space according to its functional response. This process is outlined as a
generate-and-test framework, where the algorithm keeps a set of current best solutions (referred
to as population) and generates a new set of solutions randomly (commonly known as variation
or mutation) according to certain probability density functions based on the current best solu-
tions. This process is repeated for k generations (loops) until a termination condition is satisfied
(e.g. the allocated computational budget is exhausted). Out of the many evolutionary algorithms, e.g. evolutionary
programming (Xin, Yong, and Guangming 1999), genetic algorithm (Goldberg 1989) and evo-
lution strategies (Rechenberg 1971), CMA-es is selected because it uses statistical information
to define the search path towards the global minimum/maximum. CMA-es has been shown to
outperform other algorithms, for example in the CEC 2005 Session on Real-Parameter Optimiza-
tion. This method generated a good average score for solving 25 black-box benchmark problems
(Hansen 2006b). In addition, CMA-es has been successfully implemented to solve actual prob-
lems in various engineering applications (Salehi, Young, and Mousavi 2008; Colutto et al. 2010;
Gregory, Bayraktar, and Werner 2011). CMA-es is a nonlinear optimization method that uses
statistical techniques to optimize the objective function through an iterative process. Initially, it
generates a set of candidate samples from a predefined multivariate normal distribution. In the
following iteration, the mean vector and the covariance matrix of the multivariate normal distribu-
tion are updated using the best candidate solution found from the previous iteration. The updated
multivariate normal distribution is then used to sample a new population of candidates and the
iteration process continues until it converges (Hansen 2006a) (Figure 1).
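The generate-and-test loop described above can be sketched with the open-source `cma` Python package (not necessarily the implementation used by the authors); the quadratic toy objective and the option values here are placeholders.

import cma

def objective(x):                      # stand-in for a simulation-based fitness evaluation
    return sum(xi**2 for xi in x)

es = cma.CMAEvolutionStrategy(x0=[0.5] * 10, sigma0=0.3, inopts={'popsize': 42})
while not es.stop():
    candidates = es.ask()              # sample new candidates from N(m, sigma^2 C)
    fitnesses = [objective(x) for x in candidates]
    es.tell(candidates, fitnesses)     # update mean, covariance matrix, step size and evolution paths
print(es.result.xbest)                 # best solution found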

3. Identification process

The proposed identification process begins with preliminary vehicle data to construct the suspen-
sion subsystem in MSC.ADAMS/CAR. Design parameters and objective measurements required
for the optimization are identified. This is followed by Latin hypercube sampling and GSA to anal-
yse the relationship between the design parameters and the measured kinematic objectives.

Set λ (population size)

Initialize state variables (m, σ, C = I, pσ = 0, pc = 0)
While termination condition not met
    For i = 1 : λ    // Sample λ new candidates and evaluate them
        xi = sample multivariate normal distribution
             (mean = m, covariance matrix σ²C)
        fi = fitness(xi)
    end
    Sort fitness of the λ new samples, and calculate the new mean.
    m    // Update the mean point and move the mean towards better solutions
    pσ   // Update isotropic evolution path
    pc   // Update anisotropic evolution path
    C    // Update covariance matrix
    σ    // Update sigma, the step size, using the isotropic path length
return m

Figure 1. Pseudo-code of CMA-es.

Through this analysis, the information on the relationship between the design parameters and the
objectives can be further examined through a clustering method to cluster those objectives that
are affected by similar design parameters. This helps to reduce the number of objectives to be
optimized while enhancing the efficiency of the algorithms since a smaller number of objectives
is required during optimization. Finally, CMA-es is employed to optimize the design parameters
so that the measured objectives correlate with the experimental measurements. The workflow in
improving the correlation between simulation and experimental results is shown in Figure 2. The
interaction between each process will be further discussed and explained in the following sections.

3.1. Modelling and simulation

A McPherson strut suspension model is developed in the MSC.ADAMS/CAR environment. The
model is constructed using rigid body components connected with bushings and joints. The hard
point locations are based on an initial computer-aided design (CAD) drawing or are determined
through estimation. In addition, bushing profiles in all six degrees of freedom, i.e. translational along x, y and z
and rotational about x, y and z, are modelled as linear stiffness functions to simplify
the model.
In this study, the model consists of 21 degrees of freedom. The model topology is shown in
Figure 3 and general information about the vehicle set-up is given in Appendix 1 (Table A1).
An identification process is required to recover a total of 49 design parameters consisting of hard
points, bushing stiffness and anti-roll bar stiffness. The wheel is modelled as a simple linear spring
element. The wheel centre is fixed throughout the optimization process to maintain the track width
of the vehicle. The range of each selected parameter to be used during the optimization process
must be well defined. A large parameter range will cause the optimization search process to take
longer to converge; however, a small design range may not cover the location of the best solution
that compromises all measured objectives. In the case study, the design space of each design
variable is shown in Appendix 1 (Table A3) and Figure 3.

Figure 2. Workflow of design optimization.

Figure 3. Front suspension in MSC.ADAMS/CAR.



Figure 4. Gradient information at static position measured for kinematic profiles curve.

In this model, three test cases are chosen based on the experimental kinematic and compliance
tests, namely, vertical parallel wheel travel, opposite wheel travel, and single wheel vertical travel
(Appendix 1, TableA2). Each of the test cases has its own measured objectives, as shown in Table 2.
All the measured objectives are in the form of response curves. It is necessary to translate each
curve into a single value representing a unique characteristic before conducting GSA. Therefore,
the objectives have to be measured in the form of gradients at the static position (Figure 4).
Gradient information has its own advantages over other measurement approaches such as the
absolute maximum, absolute minimum and root mean square values, as it inherits the shape and
characteristic of the curve in a single value. However, gradient information is not able to inherit
the offset information of the curve. Therefore, the values that measure the offset of the curve with
respect to the changes in design variables in the static position are also employed in optimization.
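A small sketch of how one response curve could be collapsed into the two scalars used as objectives, the gradient and the offset at the static position; the synthetic toe-change curve below is purely illustrative.

import numpy as np

travel = np.linspace(-40.0, 40.0, 31)             # wheel travel (mm), 30 steps as in Table A2
toe = 0.02 * travel - 1e-4 * travel**2 + 0.1      # placeholder toe-change response (deg)

i = np.argmin(np.abs(travel))                     # sample closest to the static position
gradient_at_static = np.gradient(toe, travel)[i]  # slope of the curve at the static position
offset_at_static = toe[i]                         # "pos" objective: curve value at the static position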

3.2. Latin hypercube sampling and global sensitivity analysis

The Latin hypercube sampling method is utilized to generate 500 samples of the input parameters
within a selected range given in Appendix 1 (Table A3). In order to maintain the feasibility of
various combinations or the shape of the suspension locations, the drop links to the strut, spring
upper mount point and lower mount point are varied with respect to the strut top vector and strut
slider axis point rather than having their own design spaces. This is to prevent illogical positioning
of those components where the spring position may yield large offset from the damper location
and form an independent component floating in space.
The relative movement of the location is calculated as follows:

\[
P_{sup} = ST + a \, d_{su} \qquad (3)
\]
\[
P_{slp} = ST + a \, d_{sl} + V_{sl} \qquad (4)
\]
\[
P_{dts} = ST + a \, d_{dts} + V_{dts} \qquad (5)
\]

where a is a unit vector between the strut top and the strut slider axis point, $d_{su}$, $d_{sl}$ and $d_{dts}$ are the
distances between the strut top and the location points (Figure 4), $V_{sl}$ and $V_{dts}$ are the perpendicular
distances between the coordinate and the vector between the strut top and strut slider axis point,
$P_{sup}$ is the vector of the spring upper mounting point, $ST$ is the vector of the strut top point,
$P_{slp}$ is the vector of the spring lower mounting point, and $P_{dts}$ is the vector of the drop link to the
mounting strut point (Figure 5).
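A minimal numeric sketch of Equations (3)-(5); the strut coordinates, distances and perpendicular offsets below are assumed values for illustration, not the vehicle's actual geometry.

import numpy as np

strut_top = np.array([39.5, 536.9, 558.9])        # ST (illustrative coordinates, mm)
strut_axis_point = np.array([1.6, 581.8, 217.2])  # strut slider axis point (illustrative)

a = strut_axis_point - strut_top
a = a / np.linalg.norm(a)                         # unit vector along the strut axis

d_su, d_sl, d_dts = 120.0, 320.0, 260.0           # distances from the strut top (assumed)
v_sl = np.array([0.0, 15.0, 0.0])                 # perpendicular offsets (assumed)
v_dts = np.array([25.0, 0.0, 0.0])

p_spring_upper = strut_top + a * d_su                  # Equation (3)
p_spring_lower = strut_top + a * d_sl + v_sl           # Equation (4)
p_droplink_to_strut = strut_top + a * d_dts + v_dts    # Equation (5)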

Figure 5. Schematic diagram of the strut linkage.

The entire sample set is evaluated with the model developed in MSC.ADAMS/CAR. Spear-
man's rank correlation is applied to examine the correlation between the design variables and
the objectives. A hypothesis test of no correlation against the alternative that there is a nonzero
correlation is conducted. A p value of more than 0.05 indicates that the correlation is not significantly
different from zero, so the corresponding pair is treated as uncorrelated. This helps to filter out the non-correlated
design variables and measured objectives, hence simplifying the process of clustering.
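A hedged sketch of this screening step; the design and objective matrices are random placeholders standing in for the 500 evaluated Latin hypercube samples.

import numpy as np
from scipy.stats import spearmanr

designs = np.random.rand(500, 49)        # placeholder for the sampled design parameters
objectives = np.random.rand(500, 40)     # placeholder for the measured objectives

rho = np.zeros((designs.shape[1], objectives.shape[1]))
for i in range(designs.shape[1]):
    for j in range(objectives.shape[1]):
        r, p = spearmanr(designs[:, i], objectives[:, j])
        rho[i, j] = r if p <= 0.05 else 0.0   # discard pairs whose correlation is not significant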

3.3. Clustering

In hierarchical clustering, each observation is one objective function, represented by its vector of
Spearman correlation coefficients with respect to the design parameters. The
Euclidean distance is used to calculate the distance matrix between the observed pairs. For two vectors
$(x_1, x_2, \ldots, x_n)$ and $(y_1, y_2, \ldots, y_n)$, the Euclidean distance is calculated as follows:

\[
d_{euc} = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2} \qquad (6)
\]

A linkage function uses linkage criteria to define the linkages between observed pairs to form
the hierarchical cluster tree. Many methods of linkage criteria can be utilized, such as average
distance, centroid distance and median distance. The selection of a proper linkage criterion is
important to form an accurate tree. Therefore, the linkage function is selected based on the
cophenetic correlation coefficient (CPCC) (Sokal and Rohlf 1962). The CPCC is a measure
of how faithfully the tree represents the dissimilarities among observations. In this article, the
hierarchical tree is formed using the average distance linkage function as the linkage criterion.
It is found that the function gives the highest score of 0.957 in the CPCC. The closer the value
of the CPCC is to 1, the more accurately the tree will be represented compared to other linkage
criteria (complete linkage = 0.925, single linkage = 0.949, weighted linkage = 0.956).
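The linkage selection could be reproduced along the following lines with SciPy's hierarchical clustering tools; the 40 x 49 matrix of Spearman coefficients is replaced here by random placeholder data.

import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

profiles = np.random.rand(40, 49)             # placeholder: one row of Spearman coefficients per objective
dists = pdist(profiles, metric='euclidean')   # Equation (6) applied to every objective pair

for method in ('average', 'complete', 'single', 'weighted'):
    tree = linkage(dists, method=method)
    cpcc, _ = cophenet(tree, dists)           # cophenetic correlation coefficient for this linkage
    print(method, round(cpcc, 3))             # the paper reports 0.957 for the average linkage

best_tree = linkage(dists, method='average')  # tree used for the dendrogram of Figure 6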
In the dendrogram plot (Figure 6), linkage distance on the x-axis represents the similarity
between two objectives. Objectives with the closest affecting factor will have a shorter linkage
distance, as shown in Figure 6. By visualization, it is hard to determine the number of clusters as
they grow larger. Therefore, a new method of selecting the number of clusters is proposed using
the silhouette index and rank method.

Figure 6. Dendrogram of the hierarchical cluster tree and cluster formation. VP = vertical parallel; VO = vertical
opposite; SW = single wheel; Pos = position; unless otherwise mentioned, all objectives are measured in gradients.

The silhouette index measures the similarity of data within
its cluster by comparing it to data of other clusters. The silhouette index is defined by:

\[
s(i) = \frac{\min_{k}(b_{i,k}) - a_i}{\max\left(a_i, \, \min_{k}(b_{i,k})\right)} \qquad (7)
\]

where $a_i$ is the mean distance between the datum i and the individual data in its cluster, and $\min_k(b_{i,k})$ is
the minimum average distance between i and the individual data of another cluster, k (Lamrous
and Taileb 2006).
This measurement is in a continuous range of [−1, 1]. Values closer to +1 indicate that the
object is distinctive from other clusters, showing that a good cluster is formed. In contrast, if the
value is closer to −1, the object is badly grouped in the cluster. For values closer to zero, the object
is not distinctive from one cluster to another.
The number of clusters formed increases with increasing silhouette index. However, the interest
in this study is to reduce the number of objectives or clusters while maintaining the highest possible
silhouette index. This can be treated as a conflicting objective to be optimized. Therefore, a range
of maximum number of clusters is employed. The respective formation of the cluster group is
measured with its mean values of silhouette index. The set of the number of clusters formed and
silhouette index is ranked into a range of [0, 1]. The number of clusters formed is normalized with
the minimum set to 0 (Equation 9), whereas for the silhouette index, the maximum value is set
to 0 (Equation 8). The corresponding minimum of the summation of both values gives the optimal
number of clusters formed.

Figure 7. Plot of mean silhouette index versus number of clusters.


\[
RS_i = \frac{\max(s) - s(i)}{\max(s) - \min(s)} \qquad (8)
\]
\[
RC_i = \frac{C(i) - \min(C)}{\max(C) - \min(C)} \qquad (9)
\]

where $RS_i$ is the rank value of the silhouette index, $RC_i$ is the rank value for the number of clusters formed,
and i refers to the set of combinations of silhouette index and number of clusters formed.
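Continuing the previous sketch, the cluster-count selection of Equations (8)-(9) could look as follows; silhouette_score from scikit-learn is used as the silhouette index, and `profiles` and `best_tree` are the placeholders defined above.

import numpy as np
from scipy.cluster.hierarchy import fcluster
from sklearn.metrics import silhouette_score

counts = np.arange(2, 40)                        # candidate numbers of clusters
sil = np.array([silhouette_score(profiles, fcluster(best_tree, k, criterion='maxclust'))
                for k in counts])                # mean silhouette index for each cluster count

rs = (sil.max() - sil) / (sil.max() - sil.min())              # Equation (8): best silhouette maps to 0
rc = (counts - counts.min()) / (counts.max() - counts.min())  # Equation (9): fewest clusters maps to 0
best_k = counts[np.argmin(rs + rc)]              # minimum summed rank; the paper obtains 30 clusters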
A plot of mean silhouette index versus the number of clusters formed is shown in Figure 7. The
plot shows the reliability of this silhouette index with the rank method. The plot indicates that
in general, increasing the number of clusters will result in the mean silhouette index increasing.
However, the increasing function is not linearly dependent. Increasing the number of clusters
formed will not necessarily result in an increase in better cluster formation as the mean silhouette
index may drop owing to overclustering of data. Through the method of calculation using the
silhouette index with the rank values, the result shows an optimal number of 30 clusters, which
corresponds to a mean silhouette index of 0.97. The silhouette plot of the 30 clusters is shown
in Figure 8. This suggests that each cluster is significantly distinctive from one another as the
silhouette index is close to 1. Figure 7 also shows that there is a significant increment in mean
silhouette below 30 clusters. After the optimal cluster formation, the increment of the silhouette
index becomes marginal, resulting in insignificant improvement as the number of clusters formed
increases.
Through this method, the number of objectives required to be optimized is reduced from 40 to 30
(Figure 6: one objective is chosen per cluster).

3.4. Optimization (CMA-es)

The exploitation and exploration of optimization are limited to the design variable bounds speci-
fied in Appendix 1 (Table A3). The goal of this optimization process is to minimize the difference
between simulated and experimental results.

Figure 8. Silhouette plot of the 30 clusters formed.

Based on previous hierarchical clustering, the objec-
tives required to be minimized are grouped, and one objective is selected from each cluster to
form a vector of objective functions. Through this simplification process, the required objectives
to be optimized are reduced. This enhances the efficiency of the optimization algorithm as smaller
numbers of objectives are required to be evaluated during the test. However, the objectives can
be further simplified by introducing the dynamic summation of rank method. This reduces the
complexity of the multi-objective function into a single objective, which then can be coupled with
CMA-es. Through this method, a pseudo-objective replaces the vector of objective functions. This
method ranks each objective dimension over the current generation's population, with the rank
values falling between 0 and 10.

\[
R_{ij} = 10 \times \frac{\mathrm{Obj}_{ij} - \min(\mathrm{Obj}_{j})}{\max(\mathrm{Obj}_{j}) - \min(\mathrm{Obj}_{j})} \qquad (10)
\]
\[
\mathrm{Fitness}_i = \sum_{j} R_{ij} \qquad (11)
\]

where n is the total number of samples in the population, over which the minimum and maximum of each
objective j in Equation (10) are taken, i represents individual samples, and j indexes the objectives. The
summation of the rank values over all objectives represents the fitness value of each sample.
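A minimal sketch of the pseudo-objective in Equations (10) and (11): each objective's error is rescaled to [0, 10] over the current population and summed into one fitness value per candidate (function and variable names are illustrative).

import numpy as np

def rank_fitness(error_matrix):
    # error_matrix: (population size, number of objectives) array of |simulated - target| values
    lo = error_matrix.min(axis=0)
    hi = error_matrix.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against objectives that are constant in this generation
    r = 10.0 * (error_matrix - lo) / span    # Equation (10): values between 0 and 10
    return r.sum(axis=1)                     # Equation (11): one fitness value per sample

errors = np.abs(np.random.rand(42, 30))      # placeholder: population of 42, 30 objectives
fitness = rank_fitness(errors)               # lower summed rank value is better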
This method is introduced to prevent the optimization algorithm from creating a wrong selection
pressure towards a region of dominated solutions. This is because the scale of each objective
value differs from one to another. For the same percentage difference between each objective, the
summation of objectives with large-scale values will dominate the contribution in fitness values
over the objectives with smaller scale values. For example, if Euclidean distance measurement
were employed, it would mislead the search direction of the optimization algorithm towards the
most dominant solution that only fulfils the dominant objectives (Figures 9 and 10).

Figure 9. Comparison between Euclidean distance measurement and rank function measurement for a population size
of 100 samples.

Figure 10. Histogram of absolute values of percentage difference against objective score.

An empirical study was conducted between the dynamic summation of rank method and the
Euclidean distance method. A random generation consisting of 100 sets of design parameters was
evaluated using both methods. In Figure 9, the results show that both measurement methods give
a different set of design parameters that produces the best minimum results (highlighted with
circles in Figure 9). The absolute value of the percentage difference between the target values and
simulated values was calculated for minimum points of both methods. Figure 10 indicates that the
dynamic summation of rank method obtained a significantly smaller percentage difference than
the Euclidean distance measurement. Therefore, by employing the dynamic summation of rank
method, the optimization search direction will move towards a non-dominated solution,
which is a compromise of all the targeted objectives. In addition, a penalty method is introduced
to penalize those infeasible design variables that lead to kinematic motion lock-up. Suspension
linkage motion lock-up may occur owing to the poor design of suspension linkage connection
points, violating the four-bar linkage motion constraint. In this case, infeasible design variables
will be eliminated before the sorting process and thus avoid the search path moving towards
infeasible design parameters.
There is no single robust termination criterion that can be used to determine the conver-
gence of the optimization problems. The default termination condition employed by CMA-es
is limited to 1000N² function calls, where N is the number of design variables. With N = 49, this
amounts to 2.401 × 10⁶ function calls, making the solving time to terminate around 79.4
weeks (calculated based on 20 seconds measured per function call to complete the three tests in
MSC.ADAMS/CAR). Owing to the long termination time, the termination condition is reduced
to 2250 function calls, which reduces the solving time to approximately 5 days. The population
size used in the optimization process is 42. An archive stores each generation's solutions. After
the termination condition has been reached, the archive data are used to recalculate the best
solutions across all generations from the pseudo-objective fitness values using Equations (10)
and (11). Since the dynamic summation of rank method ranks solutions within the current generation only,
the best solution in each generation is not directly comparable to the previous
generation's best solution. Based on the fitness values, the best 20 solutions are selected and the
corresponding standard deviation and mean are calculated. Through this method, the best opti-
mized design variables and objectives values can be defined statistically and the robustness of the
solution can be examined.

Figure 11. Optimization process between CMA-es and MSC.ADAMS/CAR.

The overall flow of the optimization process is shown in Figure 11. The software-in-the-loop
optimization approach is adopted in this optimization process. New samples generated from the
multivariate normal distribution will be employed to generate suspension assembly files and
bushing component files required by MSC.ADAMS/CAR to model the suspension subsystems.
Kinematic tests are performed in MSC.ADAMS/CAR to evaluate the new design points. The
resulting files are read by the algorithm to compute the gradient information of the kinematic
profiles; then, the dynamic rank method is used to compute the fitness values. The optimization
process will continue to iterate until the termination condition is met.
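The software-in-the-loop workflow of Figure 11 could be organized roughly as below, using the `cma` package's ask/tell interface; run_adams_kinematic_tests, the target vector, the penalty value and rank_fitness (repeated from the earlier sketch) are placeholders, not part of any real MSC.ADAMS or cma API.

import numpy as np
import cma

TARGETS = np.zeros(30)                    # placeholder for the experimental gradients/offsets
PENALTY = 1.0e3                           # added to designs whose suspension linkage locks up

def rank_fitness(err):                    # Equations (10)-(11), as in the earlier sketch
    lo, hi = err.min(axis=0), err.max(axis=0)
    return (10.0 * (err - lo) / np.where(hi > lo, hi - lo, 1.0)).sum(axis=1)

def run_adams_kinematic_tests(x):
    # Placeholder: would write the subsystem/bushing files, run the three
    # kinematic tests in MSC.ADAMS/CAR and return (objective values, feasible flag).
    return np.zeros_like(TARGETS), True

es = cma.CMAEvolutionStrategy(x0=[0.0] * 49, sigma0=0.3,
                              inopts={'popsize': 42, 'maxfevals': 2250})
archive = []
while not es.stop():
    designs = es.ask()
    errors, feasible = [], []
    for x in designs:
        obj, ok = run_adams_kinematic_tests(x)
        errors.append(np.abs(obj - TARGETS))
        feasible.append(ok)
    fitness = rank_fitness(np.array(errors))
    fitness = [f + (0.0 if ok else PENALTY) for f, ok in zip(fitness, feasible)]
    es.tell(designs, fitness)
    archive.append((designs, errors))     # archive kept for the post-run statistics of Section 4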

4. Results and discussion

4.1. Empirical results

The best solution is defined by the mean and standard deviation of the 20 optimized solutions. The
corresponding design variables and objective values are shown in Tables 1 and 2. The standard deviations for the 20

Table 1. Optimized design variables.

Hard point Optimized mean coordinates (mm) Standard deviation

Lower ball joint (15.4, 688.6, 39.1) (0.0433, 0.0434, 0.1664)
Lower control arm (FR) (2.1, 383.2, 47.1) (0.3081, 0.0264, 0.1585)
Lower control arm (RR) (309.3, 346.7, 29.1) (0.1991, 0.1176, 0.1499)
Strut axis point (1.6, 581.8, 217.2) (0.2519, 0.0087, 0.0669)
Strut top (39.5, 536.9, 558.9) (0.0707, 0.0176, 0.0793)
Trackrod inner (163.0, 317.6, 85.9) (1.0177, 0.1718, 0.3296)
Trackrod outer (136.7, 642.7, 105.3) (0.423, 0.1429, 0.3747)
Stabilizer bar to droplink (65.1, 533.7, 70.6) (0.1064, 0.1126, 1.6639)
Bushing component    Optimized mean bushing stiffness (K: N/mm, R: N/deg)    Standard deviation
Bushing FR (Kx,Ky,Rx,Ry,Rz) (9483.4, 8811.48, (0.7932, 2.3855, 0.3812,
32268.7, 760.3, 6019.3) 0.4409, 0.464)
Bushing RR (Kx,Ky,Rx,Ry,Rz) (3721.1, 21425.1, 5496.9, (1.9401, 1.9732, 3.991,
18353, 4399.7) 1.1673, 0.481)
Bushing ARB (Kx, Ky, Kz, Rx, Ry, Rz) (36581.3, 2625.1, (1.4675, 3.5607, 0.3798,
39097.8, 24386, 10380, 1.7948, 0.8275, 0.2876)
1160.5)
Bushing at strut top (Kx,Ky,Kz,Rx,Ry,Rz) (5401.2, 33553.1, 2463.9, (0.3919, 0.7944, 0.411,
16494, 23121, 3396.7) 0.3721, 0.3027, 1.1289)
Torsional stiffness (N/deg) 930.6 0.0345
Camber angle (deg) 0.52 0.0248
Toe angle (deg) 0.0032 0.0243

Note: K = translational stiffness; R = rotational stiffness.



Table 2. Optimized objective values.

Vertical parallel test   Improvement gains (%)   Vertical opposite test   Improvement gains (%)   Single wheel travel   Improvement gains (%)

Suspension ride rate 23.15 Toe change 124.08 Suspension ride rate 23.81
Suspension ride rate 2.70 Toe change (pos) 198.48 Suspension ride rate 3.08
(pos) (pos)
Toe change 137.59 Camber change 10.06 Toe change 1.24
Toe change (pos) 377.06 Camber change (pos) 64.40 Toe change (pos) 409.66
Camber change 13.91 Roll caster 43.93 Camber change 11.71
Camber change (pos) 68.04 Roll toe change 122.60 Camber change (pos) 69.53
Lateral displacement 81.83 Roll toe change (pos) 184.83 Lateral displacement 82.47
hub hub
Suspension lateral 249.00 Roll camber 0.08 Suspension lateral 451.89
displacement TCP displacement TCP
Suspension fore-aft 96.36 Roll camber (pos) 8.45 Suspension fore-aft 64.28
displacement hub displacement hub
Caster change 33.17 Roll force 20.33 Caster Change 44.12
Ride rate 20.51 Roll force (pos) 2.91 Ride rate 21.84
Lateral displacement 244.31 Roll rate 19.51 Ride rate (pos) 3.08
TCP
Fore-aft displacement 42.36 Roll rate (pos) 2.91 Lateral displacement 383.14
TCP TCP
Fore-aft displacement 117.93
TCP

Note: pos = values at static position; unless otherwise mentioned, all objectives are measured in gradients; TCP = tyre contact patch.

optimized solutions are small, indicating that the optimization converges near the best solution. In
Table 2, the corresponding optimal design parameters are able to fulfil most of the objectives, i.e.
by comparing the gradient of the simulated results with those from the experiments. Most of the
objectives show significant improvement. Only average experimental results at static positions
are taken as targeted values for optimization. It should be noted that the simulated results are
unable to fit into the experimental curve with hysteresis behaviour. The simulation environment
has perfect symmetry of the suspension system and linear bushing characteristics compared to the
experimental vehicle suspension, which has manufacturing tolerances and hysteresis of rubber
bushings. As such, it is hard to satisfy all objectives. Compromises of the results are found to
fit all objectives at their respective mean values. The phenomenon previously described can be
visualized through a comparison plot of experimental versus an optimized model and a non-
optimized model in Figure 12 (four objectives are selected out of 40 objectives to demonstrate
the phenomena described).
In the plot, only a single curve is plotted to represent the simulated result. Owing to the
symmetrical geometry topology of the suspension model, identical results will be generated for
both left and right wheels. From the plot, it is also demonstrated that gradient information as an
objective measurement can be used to represent the characteristic of the curve, i.e. the simulated
curve closely emulates the experimental results. In addition, the result of the optimized solution
suggests that the pseudo-objective (Equations 10 and 11) is able to generate the correct selection
pressure for CMA-es to search for the best solution that fits all objectives.
Using this methodology, the optimized design parameters of the vehicle suspension model
are reasonably well represented, i.e. they closely reproduce the physical suspension system. It
is also shown that the optimized model can recover the unknown bushing stiffness and design
hard points. It is crucial to have a well-correlated suspension model at the subsystem level in
the early design stage. For instance, the optimized solution can be further examined by robustness
analysis as the solutions are statistically determined. By analysing the standard deviation of the
design parameters, it is possible to identify the robustness of the suspension design against its
kinematic performance. The model can be further assembled into a full vehicle model by coupling
it with other subsystems in MSC.ADAMS/CAR for complete systems analysis of the vehicle.

Figure 12. Comparison plot of measured objectives.

5. Conclusion

The proposed methodology has been demonstrated to correlate the simulation model with the
physical suspension system through an evolutionary optimization method based on experimental
results. This method is capable of correlating large design parameters and objectives in a single-
run optimization process. A systematic workflow of the methodology is presented with the initial
model built in MSC.ADAMS/CAR. This is followed by executing the design of experiments
to explore the design space and analysing the correlation between the design parameters and
objectives measured. Then, using the results from GSA, hierarchical clustering is employed to
form a hierarchical cluster tree. This reduces the number of objectives from 40 to 30 objectives.
CMA-es is coupled with a pseudo-objective to optimize the design parameters. The optimized
results indicate that the optimized design parameters are able to accurately predict the kinematic
characteristics of the physical suspension system measured in the experiments. This shows that
the simulated model is accurate and can be coupled with full vehicle assembly for further analysis
on characteristics of the vehicle model at the systems level (i.e. based on ride and handling
performance). This will help to improve the results of full vehicle simulation as the subsystem
suspension model has been validated with the experimental results. This methodology will reduce
the development process in vehicle suspension design.

Acknowledgements
The authors wish to express their gratitude to the Ministry of Science and Technology (MOSTI) of Malaysia for the
financial support extended to this research project (TF0608C073), entitled Computationally Optimized Fuel-Efficient
Concept (COFEC) Car. The authors would also like to convey their appreciation to PROTON BHD, Universiti Malaya
(UM), Universiti Kebangsaan Malaysia (UKM), Universiti Teknologi Malaysia (UTM), Universiti Putra Malaysia (UPM),
UniKL and MIMOS BHD for collaborative work.

References

Anderberg, M. R. 1973. Cluster Analysis for Applications. New York: Academic Press.
Auger, A., and N. Hansen. 2005. A Restart CMA Evolution Strategy with Increasing Population Size. In IEEE Congress
on Evolutionary Computation (CEC 2005), Edinburgh, UK, September 2–5, 2: 1769–1776. Zurich, Switzerland: IEEE.
Blundell, M. V. 1997. Influence of Rubber Bush Compliance on Vehicle Suspension Movement. Materials & Design
19 (1): 29–37.
Colutto, S., F. Fruhauf, M. Fuchs, and O. Scherzer. 2010. The CMA-ES on Riemannian Manifolds to Reconstruct Shapes
in 3-D Voxel Images. IEEE Transactions on Evolutionary Computation 14 (2): 227–245.
Datoussaid, S., O. Verlinden, and C. Conti. 2002. Application of Evolutionary Strategies to Optimal Design of Multibody
Systems. Multibody System Dynamics 8 (4): 393–408.
Fletcher, R. 1987. Practical Methods of Optimization: Vol. 2. Constrained Optimization. Chichester: John Wiley & Sons.
García, S., D. Molina, M. Lozano, and F. Herrera. 2009. A Study on the Use of Non-parametric Tests for Analyz-
ing the Evolutionary Algorithms' Behaviour: A Case Study on the CEC'2005 Special Session on Real Parameter
Optimization. Journal of Heuristics 15 (6): 617–644.
Goldberg, D. E. 1989. Genetic Algorithms in Search, Optimization & Machine Learning. Reading, MA: Addison-Wesley.
Gregory, M. D., Z. Bayraktar, and D. H. Werner. 2011. Fast Optimization of Electromagnetic Design Problems Using
the Covariance Matrix Adaptation Evolutionary Strategy. IEEE Transactions on Antennas and Propagation 59 (4):
1275–1285.
Hansen, N. 2006a. The CMA Evolution Strategy: A Comparing Review. Dordrecht: Springer.
Hansen, N. 2006b. Compilation of Results on the 2005 CEC Benchmark Function Set. ETH Zurich, Switzerland: Institute
of Computational Science.
Hansen, N. 2010. The CMA Evolution Strategy: A Tutorial [online]. Available from http://www.bionik.tu-berlin.de/
user/niko/cmatutorial.pdf
Kuo, Y. P., N. S. Pai, J. S. Lin, and C. Y. Yang. 2008. Passive Vehicle Suspension System Design Using Evolutionary
Algorithm. In IEEE International Symposium on Knowledge Acquisition and Modeling Workshop Proceedings,
Wuhan, China, December 21–22, 292–295. Piscataway, NJ: IEEE Press.
Lamrous, S., and M. Taileb. 2006. Divisive Hierarchical K-Means. Article presented at Computational Intelligence for
Modelling, Control and Automation, 2006 and International Conference on Intelligent Agents, Web Technologies
and Internet Commerce, International Conference, Sydney, Australia, November 28–December 1, 1–8. Piscataway,
NJ: IEEE Press.
Lee, H. G., C. J. Won, and J. W. Kim. 2009. Design Sensitivity Analysis and Optimization of McPherson Suspension
Systems. In Proceedings of the World Congress on Engineering, 2, London, UK, July 1–3.
Mangiameli, P., S. K. Chen, and D. West. 1996. A Comparison of SOM Neural Network and Hierarchical Clustering
Methods. European Journal of Operational Research 93 (2): 402–417.
Mastinu, G., M. Gobbi, and C. Miano. 2006. Optimal Design of Complex Mechanical Systems with Applications to Vehicle
Engineering. Berlin: Springer.
Milligan, G. W. 1980. An Examination of the Effect of Six Types of Error Perturbation on Fifteen Clustering Algorithms.
Psychometrika 43 (5): 325–342.
Nalecz, A. G., and J. Wicher. 1988. Design Sensitivity Analysis of Mechanical Systems in Frequency Domain. Journal
of Sound and Vibration 120 (3): 517–526.
Rauh, J. 2003. Virtual Development of Ride and Handling Characteristics for Advanced Passenger Cars. Vehicle System
Dynamics 40: 135–155.
Rechenberg, I. 1971. Evolutionsstrategie – Optimierung technischer Systeme nach Prinzipien der biologischen Evolution.
Stuttgart-Bad Cannstatt, Germany: Frommann-Holzboog.
Rocca, E., and R. Russo. 2002. A Feasibility Study on Elastokinematic Parameter Identification for a Multilink Suspen-
sion. Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 216 (2):
153–160.
Salehi, M., P. G. Young, and P. Mousavi. 2008. Reverse Engineering of the Transcriptional Subnetwork in the Yeast Cell
Cycle Pathway Using Dynamic Bayesian Networks and Evolutionary Search. In IEEE Symposium on Computational
Intelligence in Bioinformatics and Computational Biology, 2008 (CIBCB '08), Sun Valley, Idaho, USA, September
15–17, 106–111. Piscataway, NJ: IEEE Press.
Sokal, R. R., and F. J. Rohlf. 1962. The Comparison of Dendrograms by Objective Methods. Taxon 11 (2): 33–40.
Xin, Y., L. Yong, and L. Guangming. 1999. Evolutionary Programming Made Faster. IEEE Transactions on Evolutionary
Computation 3 (2): 82–102.

Appendix 1

Table A1. Vehicle set-up.

Vehicle data Value

Tyre stiffness (N/mm) 252.18
Wheel radius (mm) 277
Total sprung mass (kg) 1175.6
Wheel base (mm) 2605
Wheel mass (kg) 27.5

Table A2. Test case set-up.

Test case Test case condition

Vertical parallel/opposite/single wheel test Bump distance 40 mm
Rebound distance 40 mm
Number of steps 30

Table A3. Variables of input parameters and the design range employed during optimization.

Design range (mm)
Point    x    y    z

Lower ball joint [15, 5] [710, 690] [50, 30]
Lower control arm (FR) [0, 20] [380, 360] [50, 30]
Lower control arm (RR) [310, 330] [370, 350] [40, 20]
Strut axis point [5, 15] [610, 590] [125, 145]
Strut top [20, 40] [560, 540] [560, 580]
Trackrod inner [150, 170] [320, 300] [85, 105]
Trackrod outer [120, 140] [660, 640] [100, 120]
Stabilizer bar to droplink [45, 65] [550, 530] [50, 70]
Wheel centre Fix
Spring lower mount      Varies relative to the line between strut top and strut axis point
Spring upper mount      Varies relative to the line between strut top and strut axis point
Droplink to strut       Varies relative to the line between strut top and strut axis point
Bushing component    Estimated values (K: N/mm, R: N/deg)    Design range (K: N/mm, R: N/deg)
Bushing FR (Kx,Ky,Rx,Ry,Rz) 500 [0, 50000]
Bushing RR (Kx,Ky,Rx,Ry,Rz)
Bushing ARB (Kx, Ky, Kz, Rx, Ry, Rz)
Bushing at strut top (Kx,Ky,Kz,Rx,Ry,Rz)
Torsional stiffness 300 [30, 3000]
Characteristics Estimated values (deg) Design range (deg)
Camber angle 0 [1, 1]
Toe angle 0
