SUBSTRUCTURING TECHNIQUES - STATUS AND PROJECTIONS
AHMED K. NOOR, HUSSEIN A. KAMEL and ROBERT E. FULTON
particularly well suited for this evolving computer environment. This paper summarizes the status and recent developments in static substructuring techniques and their application to structural analysis and design. Since the subject is very broad, discussion focuses herein on a number of aspects which are of interest to the authors and which have not been fully covered in the literature. These aspects include: multilevel substructuring algorithms, use of hypermatrix and other sparse matrix schemes, substructuring techniques for automated design systems, and substructuring techniques on new computing systems such as the CDC STAR-100 and minicomputer systems.

SUMMARY OF SUBSTRUCTURING METHODS IN STATIC STRUCTURAL ANALYSIS

The method of substructuring for static structural analysis is based on subdividing the large structure into smaller parts which are analyzed separately to obtain relationships between forces and displacements at the part interfaces. These interface variables are then determined and the results used to obtain the unknowns within each substructure. Whilst substructuring techniques can be applied with the force or mixed methods of structural analysis (see, e.g. Refs. [10, 11]), the discussion herein is limited to the displacement method. An historical account of the development of substructuring techniques is given in Ref. [12], where it is shown that these techniques are closely related to matrix partitioning [13] and that Kron's method of tearing [14] represents the first computational procedure of modern substructuring analysis. This section gives a brief review of the mathematical theory of substructuring for static structural analysis together with some recent developments relative to sparse matrices, hypermatrices and introduction of constraints.

Review of static substructuring theory

For a structure composed of m substructures (Fig. 1), the well-known force equilibrium equations for the jth substructure (j = 1 to m), partitioned into internal (subscript i) and interface (subscript b) degrees of freedom, are:

    [K_ii  K_ib] [U_i]   [P_i]
    [K_bi  K_bb] [U_b] = [P_b]                                   (1)

The internal degrees of freedom [U_i^(j)] are expressed in terms of interface degrees of freedom [U_b^(j)] and are eliminated from eqn (1), giving (superscript j is omitted for convenience):

    [U_i] = -[K_ii]^-1 [K_ib][U_b] + [K_ii]^-1 [P_i]             (2)

and

    [K̄_bb][U_b] = [P̄_b]                                         (3)

where [K̄_bb] is the effective substructure stiffness matrix,

    [K̄_bb] = [K_bb] - [K_bi][K_ii]^-1 [K_ib]                    (4)

and [P̄_b] is the effective substructure load matrix,

    [P̄_b] = [P_b] - [K_bi][K_ii]^-1 [P_i]                       (5)

The matrix inversion in eqns (4) and (5) is not normally done explicitly; rather, the products [K_ii]^-1 [K_ib] = [M] and [K_ii]^-1 [P_i] = [Q] are obtained by solving

    [K_ii][M | Q] = [K_ib | P_i]                                 (6)

via decomposition, forward reduction and back substitution.

Matrix [K̄_bb] can also be obtained by reducing the stiffness matrix of eqn (1) to an upper triangular form using Gaussian elimination, but terminating the elimination when the final row of the rectangular matrix [K_ii  K_ib] has been reduced. The matrix [K̄_bb] then occupies the location of the original matrix [K_bb]. This method is discussed in Ref. [10] and is referred to as partial triangulation.

The governing equations for the whole structure are obtained by assembling the stiffness and load matrices [K̄_bb] and [P̄_b] of the m substructures. The equations can be written in the following form:

    [K̄][Ū] = [P̄]                                               (7)
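The condensation of eqns (2)-(6) can be sketched in a few lines of numpy. This is an illustrative sketch, not the paper's program: the function names and the random test partitioning are assumptions, but the algebra follows eqns (4)-(6), with one factorization of [K_ii] serving both right-hand sides [K_ib | P_i] rather than an explicit inverse.

```python
import numpy as np

def condense(K_ii, K_ib, K_bi, K_bb, P_i, P_b):
    """Effective interface stiffness and load, eqns (4)-(5), via eqn (6)."""
    # Solve K_ii [M | Q] = [K_ib | P_i] with a single factorization.
    MQ = np.linalg.solve(K_ii, np.column_stack([K_ib, P_i]))
    M, Q = MQ[:, :-1], MQ[:, -1]
    K_eff = K_bb - K_bi @ M          # eqn (4)
    P_eff = P_b - K_bi @ Q           # eqn (5)
    return K_eff, P_eff, M, Q

def recover_internal(M, Q, U_b):
    """Back-substitute for the internal displacements, eqn (2)."""
    return -M @ U_b + Q
```

The partial triangulation variant described above produces the same [K̄_bb]; it simply leaves the Schur complement in place of [K_bb] after a truncated Gaussian elimination.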
ing) can be obtained by combining (assembling) groups of substructures and eliminating the internal unknowns within them. This process amounts to applying eqns (1)-(10) in a recursive manner. When the lowest (first) level substructure consists of only a few elements, arranged in a standard pattern, it is usually denoted a super element, i.e. a substructure may be thought of as consisting of a number of super elements. The term multilevel super element is sometimes used to refer to multilevel substructuring [3].

Relationship between substructuring and sparse matrix techniques

Recent work has shown that the banded, profile and wave front schemes are not the most efficient techniques for solving a system of algebraic equations [15, 16], where efficiency is measured in terms of the number of arithmetic operations and fill-in (the number of zero terms in the original equations which become nonzero in the solution process). References [15] and [16] conclude that more efficient solution techniques are based on an ordering strategy of the fundamental unknowns called nested dissection, which results in a highly sparse nonbanded matrix of the equations.

The basic idea of nested dissection is to divide the nodes of the complete structure into sets of internal and interface nodes. Each set of internal nodes is further subdivided into internal and interface nodes of the second level. The process is continued until it is no longer possible to divide the internal node sets. The numbering of the nodes starts with the node sets of the highest level. The internal nodes of any level are numbered before the interface nodes of that level. The zero elements in the resulting algebraic equations are neither stored nor operated upon in the process of solution.

A comparison between sparse matrix schemes and substructuring techniques is given in [17]. In this section the equivalence of multilevel substructuring and nested dissection schemes is discussed, and in succeeding sections numerical examples are presented to demonstrate the reduction in arithmetic operations and disk storage requirements obtained by using multilevel substructuring techniques.

Assume the structure is broken into m substructures. If all the equilibrium equations corresponding to the m substructures are grouped together, one obtains:

    [K_ii^(1)                             K_ib^(1)] [U_i^(1)]   [P_i^(1)]
    [          K_ii^(2)                   K_ib^(2)] [U_i^(2)]   [P_i^(2)]
    [                    ...                 ...  ] [  ...  ] = [  ...  ]    (11)
    [                         K_ii^(m)    K_ib^(m)] [U_i^(m)]   [P_i^(m)]
    [K_bi^(1)  K_bi^(2)  ...  K_bi^(m)       K̄   ] [   Ū   ]   [   P̄   ]

where eqn (8) has already been used to express the interface unknowns for the individual substructures [U_b^(j)] in terms of the structure unknowns (matrix of displacements at all interfaces) [Ū]. The same set of eqns (11) results from the nested dissection approach [15] and numbering the internal nodes for all of the m substructures first, followed by the interface nodes. If eqns (11) are applied in a recursive manner, the equivalence between multilevel substructuring and nested dissection can be established. The equivalence between nested dissection and multilevel substructuring suggests the following consequences:

1. The rules for numbering the nodes to reduce the number of arithmetic operations and the fill-in using the nested dissection scheme [15, 16] can be used in dividing the structure into substructures and in numbering the nodes of individual substructures. This latter aspect is particularly important since poor numbering of the nodes of individual substructures can offset the computational advantages of substructuring techniques.

2. If all substructures are different, then the number of arithmetic operations involved in the nested dissection scheme is identical with those involved in the substructuring technique, provided of course that the internal nodes are numbered in the same way. On the other hand, if a number of substructures are identical, then the substructuring technique involves significantly fewer arithmetic operations than the nested dissection scheme. This situation is true even when the substructures are differently oriented with respect to the global (structure) axes, in which case the effective substructure and load matrices have to be transformed to the global axes. These transformations usually involve considerably less arithmetic than that involved in the generation of the effective substructure matrices.

Hypermatrix storage schemes

The hypermatrix (or block matrix) scheme, based on partitioning the structural matrices in both the row and column directions [18], has proved to be a versatile and effective technique for handling the large systems of equations resulting from the finite element analysis of large and complex structures. The location of the nonzero submatrices in the hypermatrix is identified by an address or pointer matrix (see Fig. 2). A zero entry in the address matrix denotes a zero submatrix which is neither stored nor operated upon. Among the advantages of the hypermatrix scheme are the independence of the central memory requirements from the problem type and size, a controllable ratio between CPU and I/O times, and a high modularity.

The use of hypermatrix storage schemes in conjunction with substructuring techniques has the additional advantage of allowing exploitation of the sparsity of the substructure matrices in the analysis, an important feature for complicated substructures. Moreover, the use of the hypermatrix storage scheme appears to hold promise for use on fourth-generation computers (e.g. CDC STAR-100), where a large central memory is available for the analysis. The identification of the substructure shapes
Fig. 2. Hypermatrix and address (pointer) matrix.

Fig. 3. Representative constrained substructure model.
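The address-matrix idea of Fig. 2 can be illustrated with a minimal sketch. This is an assumed toy implementation, not the paper's code: an integer address matrix points into a list of stored submatrices, a zero entry denotes a zero block, and the block matrix-vector product skips unstored blocks exactly as the text describes.

```python
import numpy as np

class Hypermatrix:
    """Block matrix with an address (pointer) matrix for nonzero blocks."""
    def __init__(self, n_blocks, block_size):
        self.nb, self.bs = n_blocks, block_size
        self.address = np.zeros((n_blocks, n_blocks), dtype=int)  # 0 = zero block
        self.blocks = [None]                                      # addresses are 1-based

    def set_block(self, I, J, sub):
        self.address[I, J] = len(self.blocks)   # pointer to the stored submatrix
        self.blocks.append(np.asarray(sub, dtype=float))

    def matvec(self, x):
        y = np.zeros(self.nb * self.bs)
        for I in range(self.nb):
            for J in range(self.nb):
                p = self.address[I, J]
                if p:                            # zero blocks: neither stored nor operated upon
                    y[I*self.bs:(I+1)*self.bs] += self.blocks[p] @ x[J*self.bs:(J+1)*self.bs]
        return y
```

Central memory holds only the address matrix and the blocks in use, which is the source of the scheme's problem-size independence noted above.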
and levels can be accomplished by proper numbering of the nodes. The governing equations for individual substructures are written (in the format of eqns 11) on backing storage before starting the reduction process. The use of substructuring techniques on the STAR-100 computer is discussed in succeeding sections.

Table 1, and Ref. [22] describes a computer program to identify the number and size of substructures that results in the least cost in a given computing system.

model assembly and solution to proceed in a certain order (see Fig. 5 for the sequence of operations in an example two-level substructuring problem).
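The cost estimation underlying a program such as that of Ref. [22] can be sketched with a simple operation-count model. The counting formulas below are standard estimates for dense symmetric blocks and are an illustrative assumption, not the counts used in the paper: each substructure pays for factoring its internal block, forming the condensation products of eqns (4)-(6), and the assembled interface system pays one final factorization.

```python
def chol_mults(n):
    """Approximate multiplications to factor an n x n symmetric matrix."""
    return n**3 // 6

def solve_mults(n, m):
    """Forward/back substitution with m right-hand sides."""
    return n**2 * m

def condense_mults(n_i, n_b):
    """Cost of eqns (4)-(6) for one substructure: factor K_ii,
    solve for M = K_ii^-1 K_ib, then form K_bi @ M."""
    return chol_mults(n_i) + solve_mults(n_i, n_b) + n_b * n_i * n_b

def total_mults(substructures, n_interface):
    """substructures: list of (n_internal, n_boundary) pairs."""
    return (sum(condense_mults(ni, nb) for ni, nb in substructures)
            + chol_mults(n_interface))
```

Sweeping candidate subdivisions through `total_mults` and picking the minimum is the essence of such a least-cost search.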
The data management aspects in a general purpose finite element program with substructuring capability are more important than in direct analysis. Data associated with the problem is usually stored in a data base composed of many distinct and often complex files. For example, a substructure is a finite element model of a structure and requires the complete set of data files necessary for model description. The substructure, however, also requires storage of the files associated with its formation (such as the condensed stiffness, load and mass matrix files and intermediate files used during the process of static condensation) and alphanumeric identifiers to allow the program, during the main analysis, to access the substructure files for automatic assembly of substructure matrices. Furthermore, the constrained substructure approach also requires storage of additional quantities such as kinematic constraints and connectivities. In multilevel substructuring, cross references must also be provided between a primary level structure and subsequent lower level substructures to permit

Simulator generated numerical examples

To assess the computational advantages of substructuring techniques, a computer program was developed to simulate the substructuring analysis process. No actual floating-point computations were carried out, but measurements were made of computing resource use. In particular, the following information was determined:

1. Number of floating-point arithmetic operations (multiplications, additions and divisions),
2. Number of I/O transfers to and from central memory and the number of words transferred, and
3. Disk storage use, which is measured by the space required for the substructure and structure stiffness matrices (equal to the space required by the decomposed matrix on the left-hand side of eqns (11)).

[Figure: multiplication ratio; Nd = number of multiplications in direct analysis with one node per block, Sd = disk storage requirements for direct analysis with one node per block.]

A large number of problems were solved using both direct analysis (zero-level substructuring) and various levels of substructures. In all cases the hypermatrix storage scheme was used to exploit the sparsity pattern of the substructure and structure matrices. Each of the matrices was partitioned in the row and column directions (see Fig. 2). Zero blocks were neither formed nor
the equations before and after decomposition for an 8 x 8 grid. The two cases of direct analysis and two-level substructuring are shown. In the latter case the equilibrium equations for the individual substructures are grouped together in a manner similar to that of eqns (11). The reduction in the fill-in obtained by using substructuring techniques is noted in the table in Fig. 8. These computational gains obtained by substructuring are similar to those reported in Refs. [23, 24] for more complex structures.

The computational advantages of substructuring techniques are strongly dependent on the selection of the size of individual substructures and the numbering of their nodes. A number of attempts have been made to rationalize the selection of substructures (see, e.g. Refs. [25-27], which only treat simple structures). For guidelines regarding the numbering of the nodes in simple structures, reference can be made to nested dissection schemes [15, 16]. A heuristic approach for the selection of substructures for complex structures in order to minimize the cost of the analysis is presented in Ref. [27].

Numerous applications have been made of nonlinear mathematical programming techniques to the optimum design of small and medium-sized structures. For large structural systems, however, excessive computer requirements limit the usefulness of such techniques. A number of techniques have been proposed in recent years to reduce computational effort. Among the more promising approaches which use substructuring concepts are: (a) use of multilevel optimization techniques, and (b) use of approximate reanalysis techniques with substructuring.

The first approach is based on decomposing the structure into substructures, each with its own objective function and constraints (see, e.g. [28, 29]); the vector of design variables is partitioned into substructure and interaction variables. The optimization problems for the individual substructures are first solved independently and then the interaction solutions are obtained iteratively. Different optimization techniques can be used for individual substructures and the interaction problem. Success with this method has been limited and further work is needed to facilitate its usefulness.

The second approach, which can be used in conjunction with, or independent of, the first approach, is to use reanalysis techniques to generate the solutions for the modified structure (corresponding to a modified set of design variables) in significantly less computer time than that required to solve the full structure equations. Two efficient reanalysis procedures which have potential in substructuring are the Taylor series expansion and reduced basis techniques; a discussion of each follows.

Taylor series expansion. In this technique the new displacement vectors {U_i*} and {U_b*} for each substructure corresponding to a modified set of design variables d_l* (l = 1 to s) are approximated by a truncated Taylor's series

    {U_i*} = {U_i} + Σ_{l=1 to s} {∂U_i/∂d_l} (d_l* - d_l)

    {U_b*} = {U_b} + Σ_{l=1 to s} {∂U_b/∂d_l} (d_l* - d_l)

where {∂U_i/∂d_l} and {∂U_b/∂d_l} are the first-order sensitivity analysis vectors, measuring the rates of change of the response variables with respect to the design variables; d_l are the original design variables (l = 1 to s) and an asterisk refers to a modified quantity. The sensitivity analysis vectors can be obtained by first differentiating the substructure equations, eqns (1), with respect to the design variables, eliminating the internal unknowns, and assembling the different substructures in exactly the same manner as in the original substructure analysis (see Ref. [30]). The computational cost of obtaining the sensitivity analysis vectors is only a fraction of the cost of the full analysis.

A large number of numerical experiments on statically indeterminate trusses and wing-box finite element models have shown that the Taylor series expansions provide highly accurate approximations for {U_i*} and {U_b*} for moderate changes in the design variables, provided the design variables are chosen as reciprocals of the sizing variables. For large modifications in the design variables the reduced basis technique described subsequently was found to be more appropriate.

Reduced basis technique. In this technique each of the displacement vectors {U_i*} and {U_b*} is approximated by a linear combination of s linearly independent sensitivity analysis vectors normalized to reduce numerical roundoff error, in which s is the number of design variables and is much smaller than the dimensions of the substructure stiffness matrices. This process is expressed by the following equations:

    {U_i*} = [Γ̄_i]{c},    {U_b*} = [Γ̄_b]{c}                       (19)

where

    [Γ̄_i] = [{∂U_i/∂d_1}, {∂U_i/∂d_2}, ..., {∂U_i/∂d_s}]          (20)

    [Γ̄_b] = [{∂U_b/∂d_1}, {∂U_b/∂d_2}, ..., {∂U_b/∂d_s}]          (21)

and {c} is an s x 1 vector of undetermined participation coefficients and a bar denotes a normalized vector.

The governing equations for {c} may be obtained by applying the stationary potential energy principle after expressing the total potential energy in terms of the trial displacements of eqns (19), in which

    {P̄} = [Γ̄_i]^T {P_i} + [Γ̄_b]^T {P_b}                          (26)

and m is the number of substructures.
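A numeric sketch of the reduced basis reanalysis of eqns (19)-(21), assuming for illustration that the modified stiffness matrix and load vector are available; the function name and the Galerkin-style projection written out here are a standard realization of the stationary potential energy condition, not code from the paper.

```python
import numpy as np

def reduced_basis_solution(K_mod, P_mod, sensitivity_vectors):
    """Approximate the modified displacements in the span of s sensitivity vectors."""
    # Normalize each basis vector to reduce roundoff (the paper's overbar).
    Gamma = np.column_stack([v / np.linalg.norm(v) for v in sensitivity_vectors])
    # Stationarity of the potential energy over trial fields U = Gamma c
    # gives the small s x s system (Gamma^T K* Gamma) c = Gamma^T P*.
    c = np.linalg.solve(Gamma.T @ K_mod @ Gamma, Gamma.T @ P_mod)
    return Gamma @ c
```

Only an s x s system is solved, which is the source of savings (a)-(c) discussed below: the full modified matrices are never decomposed.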
The major savings obtained by using the reduced basis technique are: (a) avoiding the decomposition of the modified matrices [K_ii*]; (b) reducing the assembly of the [K̄*] and {P̄*} matrices to straightforward matrix addition; and (c) significantly reducing the size of the structure matrix [K̄_bb*]. The reduced basis technique appears to be well suited for problems where the number of design variables has been reduced by design variable linking, a process by which some design variables are expressed in terms of others (see Ref. [31]). The accuracy of the two reanalysis techniques for a transmission tower (Fig. 9) is given in Fig. 10. For example, a reduction of 50% in the cross sectional areas of the vertical members of substructure IV results in less than 35% change in the maximum displacement of the tower. The Taylor series expansion and reduced basis techniques calculate this displacement to within 0.7% and 0.2%, respectively.

elastic substructures are reduced only once prior to the start of the nonlinear analysis. On the other hand, the plastic substructures are modified during load incrementation and equilibrium iterations. Such an approach can be mathematically described by the following set of incremental equilibrium equations (see Refs. [32, 33]):

    [K_ee  K_ep] [ΔU_e]   [ΔP_e]
    [K_pe  K_pp] [ΔU_p] = [ΔP_p]                                 (27)

where the K's are incremental stiffness matrices, the ΔU's and ΔP's are incremental displacements and external forces, and subscripts e and p refer to the elastic and plastic substructures, respectively. If ΔU_e are eliminated from eqns (27) one obtains:

    [K_pp - K_pe K_ee^-1 K_ep][ΔU_p] = [ΔP_p] - [K_pe][K_ee]^-1 [ΔP_e]    (28)
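The elastic/plastic split of eqns (27) can be sketched as follows. This is an assumed illustration, not the paper's program: the elastic block is reduced once before the load stepping, and only the plastic block is updated and re-reduced at each increment.

```python
import numpy as np

def incremental_response(K_ee, K_ep, K_pe, K_pp_per_step, dP_e, dP_p):
    """March through load increments, reusing the one-time elastic reduction."""
    W = np.linalg.solve(K_ee, K_ep)     # K_ee^-1 K_ep, computed once
    g = np.linalg.solve(K_ee, dP_e)     # K_ee^-1 dP_e, computed once
    schur_const = K_pe @ W              # constant part of the reduced matrix
    rhs = dP_p - K_pe @ g               # reduced incremental load, eqn (28)
    results = []
    for K_pp in K_pp_per_step:          # only the plastic block changes per step
        dU_p = np.linalg.solve(K_pp - schur_const, rhs)
        dU_e = g - W @ dU_p             # back-substitution for the elastic part
        results.append((dU_e, dU_p))
    return results
```

Since `W`, `g` and `schur_const` do not change between increments, the per-step cost is governed by the (usually much smaller) plastic block alone.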
MINICOMPUTER-AIDED STRUCTURAL ANALYSIS

The recent advances in minicomputers provide a new dimension to cost effective computer-aided structural analysis and design. Such minicomputers may be purchased for less than $100,000, operate in a normal office environment, and carry out selected structural analysis and design tasks for only a fraction of the cost of the same task done on a large-scale computer (see, e.g. Ref. [36]). However, minicomputers do not, in general, replace large-scale computers, but provide a less powerful, but inexpensive, adjunct to such capability.

Fig. 9. Substructuring analysis of a transmission tower (219 bars).

Fig. 10. Accuracy of maximum displacement obtained by the reanalysis techniques (percent modification of A5 in substructure IV).

Reference [36] shows substantial benefit in the use of minicomputers for such tasks as data management, interactive graphics support and structural analysis of moderate-sized problems. Furthermore, some organizations having purchased a minicomputer will likely seek to stretch its capabilities to very demanding activities and/or classes of problems.

This section discusses some potential minicomputer applications for substructuring, both as an independent computer and as an adjunct to a large computer. To identify the cost-effective features of the mini, Table 3 gives comparative costs of conducting basic arithmetic tasks on three computers: a minicomputer without floating point hardware (PDP 15 at the University of Arizona-Tucson), a minicomputer with floating point hardware (PDP 15), and a large-scale computer (CDC 6400 at the University of Arizona). While the large computer results are dependent on specific installation and user demands at the time, the results, nevertheless, indicate that for many activities the minicomputer, both with and without floating point hardware, can be a cost-effective tool for many computational tasks. Although the mini is slower, the fact that it may operate in a dedicated mode means that the turn-around time may be shorter than with a heavily loaded large computer. The minicomputer may be used for moderately large structural analysis problems through use of substructuring. The following discusses a specific example.

Consider the uniformly thick plane stress problem shown in Fig. 11, which represents a metal plate perforated at equal intervals and subjected to in-plane loading. Two static load cases are considered: (1) inertial loading in the horizontal direction; and (2) inertial loading in the vertical direction. The structure is separated into 16 substructures as shown, with each substructure having 200 elements with two in-plane degrees of freedom at each node. Because of the repetitiveness, only one constrained substructure was formed and its stiffness reduced to eight main analysis node points around its boundary. Normal displacements on a boundary between two external substructure nodes are constrained to a cubic displacement function whereas tangential displacements are assumed to vary linearly. Loads were applied to the substructure nodes and automatically condensed and assembled for the system analysis. The overall system structural analysis was carried out, followed by a detailed analysis of a representative substructure at the bottom right corner. Figure 12 shows overall deflections for the second load case, as well as deflection and stress results for the representative substructure.

Fig. 11. Substructure model of a perforated plate.

Fig. 12. Stress and deflection results for perforated plate model.

Minicomputer times required for the three-step process of (1) substructure decomposition, (2) main system analysis and (3) detailed analysis of the specific substructure are shown in Table 4. These results indicate that it is practical to conduct substructure analysis for such moderate-sized structures using a minicomputer, and the total time of about 1 hr and 11 min is not unacceptably large. Furthermore, a floating-point processor is available on the market which would significantly reduce this time.

The computer times in Table 4 indicate that almost 50 and 19 min, respectively, were required for steps (1) and (2), while only about 3 min were required for step (3). The results also show that the major time consumers (denoted with an asterisk) are reduction activities for the substructure and decomposition for the main system analysis. These tasks are essentially independent and can be separated in a modularized structural analysis such as done here. This situation suggests that future structural analysis programs should be organized so that large, calculation-oriented tasks are carried out on a large-scale computer well suited for number crunching, and assembly processes, which are primarily data management and could be thought of as pre- or post-processing, are carried out on a minicomputer. Furthermore, the calculation phases, such as those noted with the asterisk, are primarily vector-oriented and may be well suited for a future distributed computing environment where the large computer is a vector-processing computer such as the STAR-100 [37].

Such distributed computing concepts which would facilitate independent calculations on different machines
are still evolving, and software needs to be developed to support routine data transfers among minicomputers, large computers, and vector processors. Nevertheless, these results suggest that future finite element work should be oriented toward development of finite element substructure analysis methods in highly modular form, with standardized procedures for module input/output, so that specific tasks, such as those noted in Table 4, can be assigned to the most appropriate computer hardware. It also suggests that the concept of pre- and post-processing should be redefined to encompass a vast array of specialized data management oriented tasks at the beginning or end of each modular analysis, and that such tasks be assigned where possible to a minicomputer as the most cost-effective, timely way of conducting these tasks.

SUBSTRUCTURING ON CDC STAR-100 COMPUTER

The fourth generation CDC STAR-100 computer has many new features which distinguish it from current third-generation hardware. Features which appear to influence most significantly finite element analysis in general, and substructuring techniques in particular, are the vector and sparse vector pipeline processing capability and the virtual memory. These concepts have been discussed in Refs. [37, 38] and are summarized in Appendix A. Some numerical results based on the STAR-100 computer configuration at the NASA-Langley Research Center are given in this section.

To assess the effect of using multilevel substructuring on the CPU time and storage requirements on the STAR, a computer program was developed to simulate the substructuring analysis process and estimate the CPU time and storage requirements (see Table 5 for the scalar and vector timings on the STAR). The three square grids (8 x 8, 16 x 16 and 32 x 32) considered in the preceding sections were analyzed. In each case the hypermatrix storage scheme was used to exploit the sparsity pattern of the substructure and structure matrices. The operations on submatrices were vectorized in the manner described in Ref. [38]. Zero blocks were neither formed nor operated upon. However, the zeros within each block were not exploited. To exploit these zeros the sparse vector capability of the STAR can be used, but since this capability is still evolving, no timing information is available for sparse vector operations; therefore, to simplify the comparison, sparse vectors were not considered.

Typical results are shown in Figs. 13(a) and (b). Figure 13(a) shows the ratio of the estimated CPU time in the substructuring analysis to that in the direct analysis. For small problems, the reduction in the CPU time obtained by substructuring is small due to the use of short vectors. As the problem size increases, the reduction in CPU time becomes more pronounced, up to a point after which very small gain in CPU time is obtained. Figure 13(b) shows the ratio of storage requirements in substructuring analysis to that in the direct analysis. As is clear from
[Fig. 13. Ratios of CPU time and storage requirements for substructuring analysis vs. number of levels of substructuring.]

means of exploiting some of the features of new computer hardware. For example:

(a) Minicomputers are eminently suited for carrying out the pre/post processing and data management tasks in the substructuring analysis.

(b) The virtual memory of the CDC STAR-100 computer is particularly effective for use with substructuring. However, for large problems, data management costs are expected to lead to fewer levels of substructuring on a STAR-100 than on third-generation computers.

4. Reanalysis procedures used in conjunction with substructuring can substantially reduce computational requirements for automated (optimum) design of large structures.