INTRODUCTION
In advanced applications of ADAMS’ mechanical system simulation technology, it often occurs that
a user already has a sizeable investment in another computer code that solves some part of the
problem, for example to compute electromagnetic or aerodynamic loads, control variables or sub-
system motion. This may be a commercially available program or a specially developed, highly
proprietary in-house product. This other code will generally have a solution methodology very dif-
ferent from ADAMS’, and may be running on different hardware, under a different operating system
or even at a different site.
[Figure: externally computed loads coupled to the ADAMS/Solver mechanical system model, giving a fully coupled solution]
One example of a problem that might take advantage of this methodology is aeroservoelastic response, where the aerodynamic loads are computed in a finite difference CFD code and the servo feedback is determined in a separate controls code. Another example could be interactive simulation.
In such cases, it is usually not possible to convert the other code to run directly as a standard
ADAMS/Solver user-written subroutine. However, we still need to be able to connect the other
code to ADAMS/Solver in such a way that the two programs can communicate with each other during the system simulation to get a fully coupled response. Further, because this kind of problem tends to be large and complex, the connection needs to be made in an efficient manner that allows both programs to run at the best possible speed; few projects enjoy unlimited time and hardware.
HISTORICAL REVIEW
In previous work, we showed, using a very simple demonstration problem, many of the things
that can go wrong with a poorly arranged co-simulation. Then, using the superior interpola-
tion/extrapolation approach, we showed how we can greatly improve the results.
There are a variety of things that can go wrong with a co-simulation. The most common is that
there will be a communications bottleneck between the codes and the combined solution will sim-
ply run very slowly. This kind of slowdown can also be caused by difficulties that one code has in
“digesting” the data provided by the other code. Although the interpolation/extrapolation approach usually fixes these problems, slowness is not the difficulty of real interest.
Various problems, however, can lead to the co-simulation producing incorrect answers. These in-
clude:
• simulation synchronization failure between the two codes
• aliasing due to inappropriate sampling interval
• numerical “pinging” in ADAMS caused by discrete inputs
• artificial instability caused by incompatible error control
The following plot compares the displacements of the mass with true continuous forcing to those with co-simulated discrete forcing under these setups:
Since this is not a very good result, we looked first at decreasing the Solver step size. Decreasing
the Solver step size to .02 or even .01 seconds did not improve the response. In fact, it had very little effect on the co-simulated response except for increasing the run time by 10 to 20%. However, going to a .005 second or smaller step size for Solver allows the 100 Hz mechanical system to be numerically excited by the discrete forcing, as shown in the following plot.
Increasing the sampling rate in the forcing code even more produced an even stronger instability, with the numerics getting so bad that the response coupled into the supposedly good side of the model and drove it unstable as well.
There is fortunately a fairly straightforward solution to the problem. This is to place an interpolat-
ing/extrapolating interface between ADAMS/Solver and the other code. When done properly, this
can be very efficient and as a by-product adds the capability to run the two primary codes on sepa-
rate computers.
First, let’s remember that all digital computer solutions to these kinds of problems are actually discretely computed approximations to continuous physics. The differences between the various kinds of solution tools are mainly in the order of the functions used to approximate the true solution between the discrete points where it is computed.
A finite difference code or a digital controller simulation may make no attempt at all to interpolate
between solution points. On the other hand, ADAMS/Solver uses polynomials of up to 6th order in its predictor/corrector solution, both to help the integrator advance and to interpolate the response between solution points. We will use this same approach for our co-simulation “glue”.
Consider the case of the other code being a digital controller. Note that this is demonstrative, not restrictive; the discussion applies generally to all coupled co-simulation codes (CCC).
Also, the ADAMS/Solver solution needs to be able to interrogate the other code for data at any
specific time, not only at some fixed interval, and not even always stepping forward in time. But,
the CCC typically has only discrete outputs, so it can only respond with those. We could try to make the other code take such tiny steps that it is very close to continuous, but as we saw earlier, this will not always produce a good coupled solution.
Further, as Solver advances, its predictor needs the CCC response at future times. Of course the
other code can not provide those. What we need then is a way to extend the response of the
CCC into the future – an extrapolator.
So one solution to the co-simulation problem is to create a highly-efficient “glue” routine which
connects the two codes during a co-simulation, and can do both interpolation and extrapolation on
the data that passes between the codes. In practice, this has been shown to work extremely well
and is described in the following section.
The required functionality from our “glue” routine is shown in the following diagram.
ADAMS/Solver ⇔ Interface ⇔ Other Code
1. ADAMS/Solver updates its side of the interface with mechanical response data at each
successful integration step. Because of the way that Solver is programmed, this requires
some inventive use of the Solver utility subroutines.
2. The CCC gets an interpolated ADAMS response from the interface whenever it wants to
sample.
3. The CCC advances until it is within one time step of the Solver simulation time. At each
step, it updates its side of the interface with its computed response.
4. Then A/Solver advances to the next time, extracting continuous extrapolated CCC re-
sponse data from interface to build the solution matrices.
Because the other code never quite catches up to Solver, this is sometimes called a “half-step
lead” method. The glue code takes care of all the synchronization, so that minimal modifications
have to be made to either Solver or the CCC.
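As a concrete (and heavily simplified) sketch, the four steps above can be arranged as a single loop. All names here are illustrative assumptions, the interface uses a linear stand-in for the toolkit's quadratic interpolation/extrapolation, and a toy oscillator and discrete proportional controller stand in for ADAMS and the CCC:

```python
from collections import deque

class Glue:
    """Toy interface buffer: keeps recent (time, value) samples from
    each side and evaluates them at arbitrary times by linear
    interpolation or extrapolation (the real toolkit uses quadratics)."""
    def __init__(self):
        self.adams = deque(maxlen=3)   # samples posted by Solver
        self.ccc = deque(maxlen=3)     # samples posted by the CCC

    @staticmethod
    def _eval(samples, t):
        (t0, y0), (t1, y1) = samples[-2], samples[-1]
        return y0 + (y1 - y0) * (t - t0) / (t1 - t0)

    def interpolate_adams(self, t):
        return self._eval(self.adams, t)

    def extrapolate_ccc(self, t):
        return self._eval(self.ccc, t)

def cosimulate(t_end=1.0, h=0.01, dt=0.05):
    """Half-step-lead loop: a 1-DOF oscillator stands in for the ADAMS
    model, a discrete proportional controller stands in for the CCC."""
    glue = Glue()
    x, v, t = 1.0, 0.0, 0.0                 # "ADAMS" oscillator state
    tc = 0.0                                # "CCC" time
    glue.adams.extend([(-h, x), (0.0, x)])  # prime the buffers
    glue.ccc.extend([(-dt, 0.0), (0.0, 0.0)])
    while t < t_end:
        # Steps 2-3: the CCC advances until it is within one step of
        # Solver time, sampling interpolated ADAMS data as it goes and
        # posting its own response at every step.
        while tc < t + dt:
            u = glue.interpolate_adams(tc)
            tc += dt
            glue.ccc.append((tc, -2.0 * u))  # discrete P-control force
        # Step 4: "Solver" advances, pulling extrapolated CCC forcing
        # at its trial time.
        f = glue.extrapolate_ccc(t + h)
        a = -100.0 * x - 0.5 * v + f         # k=100, c=0.5, m=1
        v += a * h
        x += v * h
        t += h
        glue.adams.append((t, x))            # Step 1: post the response
    return x, v
```

Because the CCC always runs ahead to within one of its own steps of Solver time, the "extrapolation" Solver asks for is actually a short interpolation into already-posted CCC data in this toy case.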
Note that if the other code is actually simulating a true discrete process, you would not use the ex-
trapolator part of the interface. This is shown as the “bypass” in the above diagram. Similarly, if
the CCC’s response were not dependent on any ADAMS system states, but only on time, there
would be no need to use the interpolator part.
The interpolator and extrapolator both use quadratic functions to avoid the “spline buckling” prob-
lem that can occur with higher order polynomials. These have been coded carefully in the interpo-
lator/extrapolator to get the best possible response from the interface.
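For reference, a quadratic through the last three exchanged samples can be evaluated in the standard Lagrange form. This sketch is only illustrative of the idea (the toolkit's careful coding is not reproduced here); the same function serves as interpolator for times inside the sample window and as extrapolator beyond it:

```python
def quad_eval(samples, t):
    """Evaluate the quadratic through the last three (time, value)
    samples at time t. Inside the sample window this interpolates;
    beyond the newest sample it extrapolates."""
    (t0, y0), (t1, y1), (t2, y2) = samples[-3:]
    return (y0 * (t - t1) * (t - t2) / ((t0 - t1) * (t0 - t2))
            + y1 * (t - t0) * (t - t2) / ((t1 - t0) * (t1 - t2))
            + y2 * (t - t0) * (t - t1) / ((t2 - t0) * (t2 - t1)))
```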
RESULTS
This is one of those few times where you can have it faster, cheaper and better! In our tests, the
interpolated co-simulation has always run faster than the non-interpolated one. For example, here
are run-time results for the 4-second example simulation using 50 steps/sec in Solver and 1000 Hz
sampling in the forcing code, on a 400 MHz NT machine:
The interpolated co-simulation also converges nicely to the continuous solution as the computa-
tional step size (or sampling interval) is decreased in the other code. This is shown in the following
plot, where the curve for sampling at .001 seconds with interpolation is not visible because it directly overlays the true continuous solution. The improvement is especially noticeable in the 1st and 2nd derivatives.
Recent work with customers using the co-simulation toolkit has resulted in significant additional capabilities being added, while some existing capabilities have been fine-tuned for better performance. These are covered below:
• MI/MO (multiple input / multiple output) abilities have been added, with large numbers of passed variables.
• Force/force coupling has been proven to work in addition to the originally demonstrated motion/force and motion/control set-ups. Attempts to use the CCC to drive ADAMS motions and return forces (reversed roles) have been partially successful.
• BSD sockets have been implemented as a communications protocol for both UNIX and Windows, allowing for non-Windows networked co-simulations, as well as co-simulations between dissimilar computing platforms.
• The code has been cleaned up and reorganized, making the toolkit easier to use.
MI/MO
The co-simulation toolkit has been standardized so that all data passed between Solver and the
glue code is now done using ADAMS/Solver VARIABLE elements, both for inputs to ADAMS and
for outputs from ADAMS. This simplifies the internal programming on the Solver side and the
VARIABLEs can be referenced freely in any Solver function expression. In an approach similar to
the way that ADAMS/Controls uses the PINPUT and POUTPUT statements, the ADAMS identifiers
of the VARIABLEs are listed in two general-purpose ARRAYs with fixed numbering.
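A hypothetical dataset fragment may make the arrangement concrete. The identifier numbering, the choice of U and Y arrays, and the measured quantities here are illustrative assumptions, not requirements of the toolkit:

```
! Inputs to ADAMS: placeholder VARIABLEs whose values the glue code
! updates at each data exchange
VARIABLE/101, FUNCTION = 0
VARIABLE/102, FUNCTION = 0
ARRAY/1, U, VARIABLES = 101, 102
!
! Outputs from ADAMS: responses sampled by the glue code for the CCC
VARIABLE/201, FUNCTION = DZ(30, 10)
VARIABLE/202, FUNCTION = VZ(30, 10)
ARRAY/2, Y, VARIABLES = 201, 202
```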
Note that aside from the normal considerations of ADAMS modeling, there are no special restrictions on the use of these VARIABLEs (see below). The code is currently set up for up to 61 passed variables in each direction, but this could easily be expanded if needed.
In its original conception, the co-simulation toolkit was structured under the assumption that
ADAMS would be simulating the mechanical part of the coupled system, and the other code would
be either computing forces or controls to be applied to it. The wide variety of in-house and legacy
codes that could be coupled into ADAMS has shown the need to remove that assumption and al-
low for more general co-simulations. Note that in all the discussions below, “force” is used generi-
cally for any non-motion data.
FORCE/FORCE COUPLING – Many larger existing simulation programs, especially flight dynam-
ics codes, were developed on the assumption that they exclusively will be integrating the system
equations. Such codes (including ADAMS, by the way) have some difficulty accepting command
motion data from an external source, so that the original motion/force approach can not work. For-
tunately, a simple paradigm shift was all that was necessary to solve this one.
Consider this very simple conceptual example problem. Let’s begin looking at the original mo-
tion/force approach by assuming that M2, M3 and the connecting spring are modeled in ADAMS,
while M1 and K12 are modeled in some other code with which we need to co-simulate.
M1 --K12-- M2 --K23-- M3
In the motion/force case, ADAMS computes the motion of M2. M2’s motion is applied to the other code; a reaction force is computed and returned to ADAMS. If the CCC can not accept motion input, but only force input, we need only re-cast the problem slightly by including the “interface” mass M2 in both codes. This is shown here:
M1 --K12-- M2 --K23-- M3   (M2 now included in both codes)
Now ADAMS transmits the force at the left end of K23, which is applied to M2 in the other code.
The other code does its thing, computes the force at the right end of K12, and sends that to Solver
to be applied to M2 on the ADAMS side. Simple, and it works.
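A toy numeric version of this force/force split can be sketched in a few lines. The masses, stiffnesses, integrator and synchronous one-step exchange are all made-up simplifications; the point is only that each code carries its own copy of the interface mass M2 and applies the spring-end force received from the other code to it:

```python
def force_force(t_end=2.0, h=0.001):
    """Force/force co-simulation of the M1-K12-M2-K23-M3 chain.
    'ADAMS' integrates M2 and M3; the 'other code' integrates M1 and
    its own copy of M2. Each side applies the spring-end force it
    receives from the other to its copy of M2 (semi-implicit Euler)."""
    m1 = m2 = m3 = 1.0
    k12 = k23 = 100.0
    x1 = v1 = x2b = v2b = 0.0        # other code: M1 and its M2 copy
    x2a = v2a = 0.0                  # ADAMS: M2 ...
    x3, v3 = 0.1, 0.0                # ... and M3, displaced initially
    for _ in range(int(t_end / h)):
        # the exchange: each code sends its spring-end force on M2
        f_from_adams = k23 * (x3 - x2a)   # right-hand force on M2
        f_from_ccc = k12 * (x1 - x2b)     # left-hand force on M2
        # "other code" step: M1 plus its M2 copy
        a1 = k12 * (x2b - x1) / m1
        a2b = (k12 * (x1 - x2b) + f_from_adams) / m2
        v1 += a1 * h;   x1 += v1 * h
        v2b += a2b * h; x2b += v2b * h
        # "ADAMS" step: M2 plus M3
        a2a = (k23 * (x3 - x2a) + f_from_ccc) / m2
        a3 = k23 * (x2a - x3) / m3
        v2a += a2a * h; x2a += v2a * h
        v3 += a3 * h;   x3 += v3 * h
    return x2a, x2b
```

In this idealized synchronous case the two copies of M2 track each other essentially exactly; in a real co-simulation the interpolating interface keeps them close despite the differing step sizes.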
For example, let’s look at the coupled simulation of an aircraft landing gear touching down, where the landing gear mechanism, structure and contact forces are modeled in ADAMS, but the flight dynamics of the plane to which the gear is attached are modeled in the CCC. This problem has the characteristic that the position (especially vertical) and velocity of the gear must be known very accurately in both codes to get good results.
[Figure: ADAMS model]
Because ADAMS MOTION elements are constraints, it is not correct to make them a function of
anything but time. So we can not simply take the motion output from the flight simulator and use it
directly in a MOTION statement. In addition, the position output from the flight simulator, even us-
ing the extrapolator, is not 2nd-order continuous, which would lead to discontinuous accelerations in
ADAMS and likely integration failures.
The solution to this problem is very closely related to the force/force coupling problem, and takes
advantage of the “action-only” force capability in ADAMS. (Action-only forces are reacted against
the inertial reference frame, ground.)
We first need to include the entire mass of the plane and gear in both codes. This allows the CCC to correctly predict the overall flight dynamics. It also gives us an “interface” part for ADAMS, as in the force/force approach.
Next, we create an autopilot-like action-only feedback GFORCE on the airframe part in ADAMS.
The inputs to this force are the position, velocity and acceleration data from the flight simulator
code. Note that depending on how these data are computed and output in the CCC, it may be
necessary to do additional smoothing on part of them.
Of course, the key to this approach is finding gain matrices (K1, K2, K3) high enough for realistically tight coupling to occur, but not so high that the simulation goes unstable. We have had the best success using the actual inertias in the K3 matrix, and using lower gains on velocity and position to trim the response. It still requires a significant amount of “tweaking”, but can be made to work.
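For one channel, the feedback force has a familiar proportional form. The sketch below is purely illustrative; the function name and gain values are assumptions, with k3 standing in for the actual airframe inertia per the remark above:

```python
def autopilot_force(z, vz, az, z_ref, vz_ref, az_ref,
                    k1=50.0, k2=10.0, k3=1000.0):
    """One channel of the action-only feedback GFORCE: z, vz, az are
    the ADAMS airframe position, velocity and acceleration; the *_ref
    values are the flight simulator's (extrapolated) outputs. k3 is
    set to the (assumed) airframe mass; k1 and k2 trim the response."""
    return (k1 * (z_ref - z)
            + k2 * (vz_ref - vz)
            + k3 * (az_ref - az))
```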
Originally, the co-simulation code details were developed on the Windows platform using pipes for communications between the glue code, the CCC and Solver. Pipes are well-named, and can be conveniently thought of as a conduit that takes data in on one end and lets it out on the other. Under Windows, pipes are implemented as shared memory, are very efficient and can also automatically connect multiple computers in a workgroup across a local area network (LAN).
Under UNIX, however, pipes are implemented as a special file type and are slower than on Windows, even when created in a file structure local to the computer. Further, UNIX pipes are generally restricted to use on a single machine. Finally, it seems that UNIX pipes are reliable only when used in one-way mode, so a pair is needed to replace a single, two-way Windows pipe.
Sockets are another inter-process communication protocol; they run over the TCP/IP base layer common to almost all modern networks and are available on nearly all UNIX and Windows machines. The programming details of sockets are a bit different from pipes, but not too much so.
Sockets are very similar to one-way pipes and are usually created in pairs for two-way communications. Unlike pipes, which are a peer-to-peer protocol, sockets work on a client-server model familiar to most network applications. When run on a single UNIX machine, sockets and properly created pipes have very similar run-time performance.
The great advantage of sockets, however, is that you can use them to connect multiple and dis-
similar platforms. That is, you can run ADAMS on one machine and your other code on a different
machine on the LAN. You can even run ADAMS, for example, on a UNIX box and the other code
on a Windows or Linux platform, or vice versa. In one case, we have even connected a hardware
controller with Ethernet capability to an ADAMS model.
In summary, both pipes and sockets connections are now available for co-simulation on both Win-
dows and UNIX platforms and can be mixed fairly freely. That is, in addition to running the co-
simulation on a single box, or using multiple boxes with the same operating system, it is now pos-
sible, for example, to run the glue code and ADAMS on a Windows computer and the CCC on a
UNIX computer, with the glue code talking to ADAMS via pipes, and to the CCC via sockets.
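The socket exchange itself is small. The sketch below uses a local socketpair() in place of a real connect()/accept() pair across the LAN, and packs the passed variables as doubles in network byte order; the function names are illustrative, not the toolkit's:

```python
import socket
import struct

def send_doubles(sock, values):
    """Send a list of doubles in network (big-endian) byte order so
    that dissimilar platforms agree on the representation."""
    sock.sendall(struct.pack("!%dd" % len(values), *values))

def recv_doubles(sock, n):
    """Read exactly n doubles; TCP may deliver partial buffers."""
    buf = b""
    while len(buf) < 8 * n:
        chunk = sock.recv(8 * n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the socket")
        buf += chunk
    return list(struct.unpack("!%dd" % n, buf))

# stand-in for the glue-code and CCC ends of a real TCP connection
glue_end, ccc_end = socket.socketpair()
send_doubles(glue_end, [0.25, -1.5])   # glue posts extrapolated response
state = recv_doubles(ccc_end, 2)       # CCC samples it
```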
Note that you must be careful about the machine representation of numbers when mixing operating
systems, since some are “little-endian” and others are “big-endian”. Conversion subroutines are
available, and are usually added into the glue code to avoid having to change the ADAMS subrou-
tine or the CCC executive.
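Such a conversion is a one-liner in most languages; here is a sketch, assuming the wire format is raw 8-byte IEEE doubles:

```python
import struct

def swap_doubles(buf):
    """Byte-swap a buffer of 8-byte doubles, e.g. in the glue code on a
    little-endian machine receiving raw data from a big-endian host."""
    n = len(buf) // 8
    return struct.pack("<%dd" % n, *struct.unpack(">%dd" % n, buf))
```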
• ADAMS/Solver solves the system equations in continuous time and produces a continuous result. Most ADAMS integrators use a variable time step. This is true even though we may choose to request output from Solver only at fixed time increments. Further, the solution methodology in ADAMS/Solver is arranged in such a way that internal simulation time may actually go backwards when the corrector is having trouble converging.
• The other code you want to connect to Solver will often solve its part of the problem in discrete time, and results are not available between time steps. This can be true if the other code is using a fixed-step integrator or emulating a truly discrete process such as a digital controller.

CONCLUSIONS
1. Co-simulation is a widely useful and extremely powerful technique for joining nearly any type of
existing external computations with an ADAMS model.
2. The key technique is to use a two-way quadratic interpolation/extrapolation scheme in the co-
simulation interface to greatly improve both simulation fidelity and computational speed.
3. Recent changes allow for any number of variables to be passed between the co-simulating
codes in both directions. These can be force, motion or controls data.
4. The current implementation allows for co-simulation across the Ethernet, between different
hardware platforms and across operating systems.