
SYSTEM MODELING AND SIMULATION FOR

PREDICTIVE MAINTENANCE

EDWARD SZCZERBICKI
WARREN WHITE
Department of Mechanical Engineering, The University
of Newcastle, Callaghan, New South Wales, Australia

Condition monitoring, the process of data collection for the evaluation of
machinery performance and reliability, is an essential part of today's on-line
predictive maintenance for complex manufacturing systems. Together with
the increasing complexity of machinery systems, the number of assets that
need to be diagnosed is also rapidly increasing. This is why the management
of condition monitoring has become a complex cooperation and resource
allocation problem. This paper describes an implementation of computer
simulation as a modeling and decision support tool for the management of a
condition-monitoring service group. The implementation was developed
using the SLAMSYSTEM modeling environment. It provides an easily
adaptable management-style program that may be applied to a company
with a monitoring system that varies in versatility, expertise, and availability.

Condition monitoring is the process of systematically collecting data for
the evaluation of asset performance, reliability, and/or maintenance
needs for the planning of maintenance actions (Beebe, 1988). The

The authors are indebted to the members of Pacific Power International Technical
Service Group. Without their kind assistance this simulation project would not have
started.
Address correspondence to Edward Szczerbicki, Department of Mechanical Engineering,
The University of Newcastle, Newcastle, NSW 2308, Australia. E-mail:
mees@cc.newcastle.edu.au

Cybernetics and Systems: An International Journal, 29:481–498, 1998
Copyright © 1998 Taylor & Francis
0196-9722/98 $12.00 + .00

following types of condition monitoring can be listed:

Inspection analysis: A process involving the use of senses such as sight,
smell, touch, and hearing. With the use of these aids, the condition
of the machine and its performance capabilities are estimated.
Vibration analysis: This process requires the gathering of vibration data
and the interpretation of these data. Diagnosing a problem through
vibration analysis requires a great deal of skill and experience, but
if used correctly it is one of the most effective methods available.
Oil analysis: Sampling the oil from a gearbox or sump on a machine and
having the sample chemically tested gives important information
about the type of defect or the specific parts causing the problem.
Wear debris analysis: Another method for finding information on the
type of defect by microscopically looking at the shape and form of
the wear particles.
Motor current spectral analysis: A current transformer is placed on a
single supply lead or a secondary lead (an ammeter lead, for
example) and is connected through a conditioning card to a data
collector. This allows an analysis of the condition of the motor to
be performed.
Thermography: This method uses a special camera that indicates tem-
perature contours for the purpose of identifying so-called hot spots,
or areas of intense heat.
Laser alignment corrections: This involves using laser equipment to align
couplings and shafts on the machine accurately to alleviate the
problems caused by misalignment or eccentricity.
Nondestructive testing: A method that uses techniques such as ultrason-
ics or X-rays to find possible cracks or flaws in machine parts.

In this paper a simulation model is developed that mimics the
operations of a condition-monitoring service group that performs in-
spection, vibration, oil, and wear debris analysis. The model is then used
to simulate the monitoring service and optimize the manning scale and
instrument resources for this service. It provides an adaptable manage-
ment framework that may be applied to any group or company with a
condition-monitoring workforce that varies in versatility, expertise, and
availability.
There has been a resurgence of interest in maintenance practice
over the past 10 years (Moubray, 1991). The increasing complexity of
condition-monitoring and maintenance problems in today's machinery

systems calls for new, sophisticated decision support and problem-solving
tools. Expert systems, decision trees, neural networks, and transputer
networks are all used for machine fault diagnosis and predictive
maintenance (Chen & Li, 1996; Huang & Wang, 1996; Lin & Wang,
1996; Patel & Kamrani, 1996; Shyur et al., 1996). Also, the more general
problem of developing maintenance strategies that can contribute to
higher quality products manufactured at lower cost is addressed in the
literature (Pujadas & Chen, 1996). In this paper, the SLAMSYSTEM
modeling environment (Pritsker, 1995) is used to examine the utilization
and staffing levels for the section supplying condition monitoring ser-
vices for rotating machinery, excepting motor monitoring. The SLAM II
simulation language is used as a modeling tool.

SYSTEM DESCRIPTION AND MODELING ASSUMPTIONS


The real-life system that is modeled involves a condition-monitoring
service group consisting of 18 staff members. The group is divided into
five area locations (locations A, B, C, D, and E), and some staff are
utilized in more than one area. The locations are geographically sepa-
rated, and this creates time differentials when complete job duration
times are calculated. For this reason the model simulates only the
actual tasks at hand, without the extra variables that have no direct
impact on the results. There are some other model confinements that
have to be taken into account, and these are explained in more detail in
the assumptions below.
Apart from working at different locations, the staff are skilled at
different levels pertaining to qualification and staffing requirements.
Levels include a technical level (Lvl 1, seven staff members) at which
staff are required only to collect data or samples. The next staffing level
(Lvl 2, four staff members) comprises the staff that have the knowledge
to analyze the data, a task that absorbs most of the time allocated to
this type of service. At the third level (Lvl 3, seven staff members), the
ability to combine all the results into a monthly report is required.
These staff also communicate with the clients and discuss the effectiveness
of the service, improvements, limitations, and changes that may be
applicable to any process within the service. The skill levels are
distributed among all staff members, some overlapping to perform
duties on both the vibration and oil processes. Table 1 shows skill level
requirements

Table 1. Skill level needs and service requirements at different locations

Location A: 4 staff at Lvl 1, 2 staff at Lvl 3; 470 machines per 3 months
Location B: 2 staff at Lvl 1, 1 staff at Lvl 2, 3 staff at Lvl 3; 278 machines per 3 months
Location C: 2 staff at Lvl 1, 1 staff at Lvl 2, 2 staff at Lvl 3; 211 machines per 3 months
Location D: 4 staff at Lvl 1, 4 staff at Lvl 2, 3 staff at Lvl 3; 691 machines per 3 months
Location E: 1 staff at Lvl 1, 2 staff at Lvl 3; 230 machines per 3 months

across locations together with service requirements (number of ma-
chines serviced).
The model of the monitoring service with the needs and require-
ments as specified in Table 1 was developed under the following
assumptions. Assumptions regarding staff members:

The simulation model looks only at the durations when a staff resource
is being utilized performing condition-monitoring duties (i.e., it
disregards meal breaks, staff meetings, other duties, etc.).
Staff are available when requested (e.g., not on leave).
Staff numbers are not changed during the simulation apart from the
alterations in the program.
Ratios of staff members at the various locations do not change during
the simulation.
Staff skill levels do not change during the simulation.
Staff are safety aware and have no accidents during the simulation.
For staff at the same skill level, no priority is given for any machine(s).
Any staff member capable of performing a given task will have no
priority regardless of skill level.
The percentage of workload per staff member for the tasks is constant
throughout the simulation.
All staff at a given skill level have obtained the necessary training and
qualification(s) required to perform their duties.

Assumptions regarding machines:

All machines, or the number of machines required for each 3-monthly
period, are available (in-service operational).

All machines are in good working order and safe from causing injury to
personnel.
All machines have designated and labeled targets for vibration and
some form of oil access (no setting up required).
All proximity switches on machines are in working order.
All machines consist of (on average) one motor, one gearbox, and a
drive of some sort.
Each machine (on average) has six reading points, as shown in Figure 1,
in accordance with Standards Association of Australia AS
2625-1983 (AS 2625-1983, 1983).
Each point, where applicable, has horizontal, vertical, and axial direc-
tions for the vibration readings in accordance with AS 2625-1983
(Figure 2).

Assumptions concerning travel and time duration:

It is assumed that if a resource is required and available, then that staff
member is on site already.
Similarly, if equipment such as data collectors and oil sample pumps is
required, then, if available, it is assumed that the equipment is on
the site where it is required.
The duration times for the various tasks are set at the beginning of the
program and are described later in this article.

Figure 1. The average number of sampling positions and their location.



Figure 2. A bearing showing the three directions for vibration measurement.

The model uses only the duration times for tasks performed in pro-
cesses such as data collecting, oil sampling, analyses, filtergram
preparation, discussions with plant owners, and reporting (i.e., no
delays, no breaks, no miscellaneous functions that require time).
There are no times for the chemical analysis; it is assumed that the
results have returned from the laboratory before a final analysis is
performed.
The chemical analysis is assumed to be correct, and any question about
its validity means that a new sample must be sent to the laboratory
for a new result.

Assumptions concerning instrumentation and computers:

All instrumentation and computers are in good working order through-
out the simulation (including proximity switches, accelerometers,
data collectors, oil sample pumps, computers and their respective
software packages, microscopes, etc.).

If instruments such as data collectors or oil sample pumps are re-
quested and are available, it is assumed that they will be utilized
immediately.

For various real-life systems the preceding assumptions may vary.
The model, however, represents a platform that is easily adaptable to
suit any process or group of processes for which multiple skill levels and
a variety of tasks are needed.

MODEL DESCRIPTION
The model was developed in the SLAM II simulation language and
implemented in the SLAMSYSTEM modeling environment (Pritsker,
1995). It creates single entities at set intervals; each entity is then split
into five parts, one for each of the locations. From there they are again
split into two parts, one for vibration analysis and the other for oil
analysis including wear debris. This enables the model to keep track of
entity locations and the type of analysis being conducted.
From here the model tests various resources (staffing levels, data
collectors, and oil sample pumps) for availability and utilizes them if
possible; otherwise the entities wait in queues. Once an entity has
acquired a resource, it is put through various activities corresponding to
the flowchart diagrams, which have duration times and conditions
applied. At the end, various relevant forms of data are collected and the
results displayed in a summary report, another aspect compiled
automatically by SLAM II.
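In outline, this create/await/activity/collect logic can be sketched with an ordinary event loop. The fragment below is a deliberately simplified Python stand-in (fixed interarrival and service times, a single staff pool, hypothetical names), not the SLAM II network itself:

```python
import heapq

def simulate(n_entities, interarrival, service, capacity):
    """Toy single-resource sketch of the network logic: entities are
    created at a fixed interval (CREATE), wait for one of `capacity`
    staff units (AWAIT), are held for `service` minutes (ACTIVITY),
    and release the unit (FREE).  Returns each entity's time in system."""
    free_at = [0.0] * capacity            # when each staff unit next becomes free
    heapq.heapify(free_at)
    times_in_system = []
    for i in range(n_entities):
        arrive = i * interarrival
        start = max(arrive, heapq.heappop(free_at))  # queue if all staff busy
        finish = start + service
        heapq.heappush(free_at, finish)
        times_in_system.append(finish - arrive)      # COLCT-style observation
    return times_in_system

# Two staff units, one entity every 10 min, 25 min of service each:
print(simulate(4, interarrival=10.0, service=25.0, capacity=2))
```

With two staff units, the first two entities start immediately while later ones queue, so time in system grows as the pool saturates; this is the same congestion effect that the full model exposes at the oil-analysis stage.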

Attributes
The attributes allow the individual entities to be traced during the
simulation model execution. The attributes used are specified in Table
2.

Files
These are the queues and await nodes that indicate where particular
entities are being stored while requesting some action or resource. The
files used in the model are listed in Table 3.

Table 2. Attribute description

Attribute number  Definition  Values  Description

1   Marking time             Various                           Creation time
2   Entity type              1 or 2                            Vibration or oil
3   Entity location          1, 2, 3, 4, 5                     Physical locations
4   Staff resource (vib.)    2, 4, 5, 6, 7, 8, 10, 11, 13, 14  Staffing group being utilized for vib.
5   Data collector resource  15, 16, 17, 18                    Data collector being utilized for vib.
6   Staff resource (oil)     1, 2, 3, 5, 6, 8, 9, 12, 14       Staffing group being utilized for oil
7   Redo machine (vib.)      0, 1, 2, ...                      Number of failed data audits on machines (counter)
8   Invalid data (vib.)      0, 1, 2, ...                      Number of failed data audits after download (counter)
9   Extra tests (vib.)       0, 1, 2, ...                      Number of further tests required for a diagnosis (counter)
10  Extra staff required     0, 1, 2, ...                      Number of assisting staff for staff11 and staff5 (counter)

Duration Times
Duration times are measurements of how much time is allocated for
each task or delay, and some include probability functions used to
determine the number of entities or the type of entity that moves
through that activity. Table 4 specifies the duration times used in the
model. Global variables XX are listed in Table 7.

Resources
The resources are the staffing groups available and the equipment
needed to simulate the system. Each group has separate tasks or
interactions with various other groups. Table 5 describes the resources.
Resource allocation shows where the various staffing groups and
equipment are allocated to perform tasks or be utilized in the system
(Table 6). It is easy to note some of the overlaps and common
interactions.

Table 3. File description

File number  Label(s)  Type  Description

1 OS1D AWAIT Waiting for STAFF1
2 VS1A, OS1A AWAIT Waiting for STAFF2
3 VS1B, VS5A AWAIT Waiting for STAFF4
3 VS1B, VS5A AWAIT Waiting for STAFF4
4 VS2A, OS2A AWAIT Waiting for STAFF6
5 VS2B AWAIT Waiting for STAFF10
6 VS3A AWAIT Waiting for STAFF13
7 VS3B, OS2B, OS4B AWAIT Waiting for STAFF8
8 VS3C, OS4A AWAIT Waiting for STAFF14
9 VS4A, VS5C AWAIT Waiting for STAFF11
10 VS4B, OS1C, OS5, OS3B, OS3D AWAIT Waiting for STAFF5
11 DCT1 AWAIT Waiting for DATALOG1
12 DCT3 AWAIT Waiting for DATALOG2
13 DCT2 AWAIT Waiting for DATALOG3
14 DCT4 AWAIT Waiting for DATALOG4
15 VS5B AWAIT Waiting for STAFF7
16 CMO1 QUEUE Queuing for activity 19
17 OS1B, OS3A AWAIT Waiting for STAFF3
18 OS3 AWAIT Waiting for STAFF12
19 OSP2 AWAIT Waiting for OILPUMP2
20 OSP1 AWAIT Waiting for OILPUMP1
21 OS3C AWAIT Waiting for STAFF9
22 VFQA QUEUE Queue for matching RPTA
23 OFQA QUEUE Queue for matching RPTA
24 VFQB QUEUE Queue for matching RPTB
25 OFQB QUEUE Queue for matching RPTB
26 VFQC QUEUE Queue for matching RPTC
27 OFQC QUEUE Queue for matching RPTC
28 VFQD QUEUE Queue for matching RPTD
29 OFQD QUEUE Queue for matching RPTD
30 VFQE QUEUE Queue for matching RPTE
31 OFQE QUEUE Queue for matching RPTE

Global Variables
These variables are used to apply probability distributions to activity
duration times and also to determine the probability of certain events
happening, such as a data audit failure. They keep the same values
anywhere in the program at any one instant in time (Table 7).

Table 4. Description of duration times

Activity number  Description  Duration (min)  Probability

1               Entity creation                             0                1
2               Number of machines for location A           0                --
3               Number of machines for location B           0                --
4               Number of machines for location C           0                --
5               Number of machines for location D           0                --
6               Number of machines for location E           0                --
7               Taking vibration reading                    XX(3)            1
8               Downloading to computer (vib.)              2                1-XX(2)
9, 11, 12, 13   Check data validity and audit (vib.)        XX(4)            1
10              Recheck of equipment and method (vib.)      RNORM(3, 0.35)   1
14              Analyze vibration results                   XX(5)            1-XX(1)
15              Check results and compare data history      4                0.99
16              Update database (machine "ok") (vib.)       2                0.08
17              Discussions with plant owner (vib.)         XX(6)            1
18              Further testing and inspection required     XX(7)            0.2
19              Blank/dummy                                 0                1
20-29           Sample preparation                          XX(8)            1
30, 32          Sample collection                           XX(3)            1
31, 33, 34, 35  Filtergram preparation                      XX(9)            1
36              Preview results (oil)                       XX(10)           1
37              Update database (oil)                       2                1-XX(1)
38              Check results (oil)                         3                XX(1)
39              Update database (oil)                       2                XX(1)
40              Compare history in database (oil)           3                1-XX(1)
41              Discussions with plant owner (oil)          XX(9)            1
42              Inspection and/or additional testing (oil)  XX(9)            1
43              Complete testing required (oil)             XX(7)            0.05
44              No further testing required (oil)           0                0.95
45              Nonurgent update of database (oil)          XX(11)           0.99
46              Specialist advice required (oil)            XX(12)           0.01
47              Duration time for additional staff (11)     480              1
48              Duration time for additional staff (5)      960              1

Table 5. Resource description

Resource number  Resource label  Capacity  File number

1   STAFF1    10  1
2   STAFF2    2   2
3   STAFF3    1   17
4   STAFF4    1   3
5   STAFF5    1   10
6   STAFF6    1   4
7   STAFF7    1   15
8   STAFF8    1   7
9   STAFF9    1   21
10  STAFF10   1   5
11  STAFF11   1   9
12  STAFF12   1   18
13  STAFF13   2   6
14  STAFF14   4   8
15  DATALOG1  2   11
16  DATALOG2  3   12
17  DATALOG3  1   13
18  DATALOG4  2   14
19  OILPUMP1  1   20
20  OILPUMP2  1   19

Data Collection
These are collection nodes in the simulation model and are used to
collect information for the results printed in the summary report,
automatically produced by the SLAMSYSTEM package (Pritsker, 1995).
The model uses them to collect various duration times for entities in the
system and times between entities leaving the system, to count the
number of extra staff required for staff5, and to count the number of
repeat loops caused by invalid or inadequate data (Table 8). Another
aspect of the collection nodes is that they are able to produce histograms
of the data being collected, allowing the results to be displayed
graphically as well as tabulated.
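The job such a collection node does can be mimicked in a few lines; the helper below is a hypothetical Python stand-in (not SLAMSYSTEM output) that produces the same kind of summary, namely a mean, a standard deviation, and a fixed-cell histogram, for a list of observed times:

```python
from statistics import mean, stdev

def colct(values, cells, lower, upper):
    """Summarize observations the way a SLAM II COLCT node does:
    mean, standard deviation, and a histogram of `cells` equal-width
    cells spanning [lower, upper]; out-of-range values are clamped
    into the edge cells."""
    width = (upper - lower) / cells
    hist = [0] * cells
    for v in values:
        idx = min(max(int((v - lower) / width), 0), cells - 1)
        hist[idx] += 1
    return {"mean": mean(values), "sd": stdev(values), "hist": hist}

# Six illustrative time-in-system observations (minutes):
print(colct([38.0, 41.5, 39.9, 37.2, 43.8, 40.1], cells=5, lower=30.0, upper=55.0))
```

A plotting library could then render the returned cell counts graphically, just as the summary report tabulates and charts them.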

Table 6. Resource allocation numbers

            Vibration (staff)               Oil (staff)

Location    Data collection   Analysis      Oil sampling      Analysis

A           1 and 4           4             1, 2, 3, and 5    3
B           6                 7             6, 5, and 8       5 and 9
C           10                11            5 and 12          5 and 9
D           8, 13, and 14     8 and 14      5, 8, and 14      5 and 9
E           11 and 5          11            5                 5

Table 7. Global variable description

Variable  Value  Description(a)

XX(1)   TRIAG(0, 0.08, 1.0)      Probability that downloaded data are not acceptable for vibration analysis; probability of problem(s) in the oil wear debris analysis
XX(2)   TRIAG(0, 0.06, 1.0)      Probability of extra tests; vibration data are inadequate for the machine
XX(3)   RNORM(9.5, 1.5)          Duration to take a vibration reading; duration to take an oil sample
XX(4)   UNFRM(2.0, 5.0)          Duration to download vibration data into computer
XX(5)   RNORM(4.50, 0.75)        Duration to scan the results for potential problems, especially those listed in the exception report (vib.)
XX(6)   TRIAG(10.0, 15.0, 16.5)  Duration for discussions with plant owner (vib.)
XX(7)   RNORM(20.0, 1.2)         Duration of further testing (vib.); duration of complete testing (oil)
XX(8)   TRIAG(3.5, 5.0, 6.2)     Duration to prepare oil samples
XX(9)   UNFRM(8.0, 11.0)         Duration for filtergram preparation; duration for discussions with plant owner (oil); duration for additional testing (oil)
XX(10)  RNORM(7.0, 0.75)         Duration to preview oil results
XX(11)  RNORM(8.0, 1.5)          Duration to update database with nonurgent data
XX(12)  RNORM(180.0, 40.0)       Duration for specialist advice
XX(13)  ATRIB(7)                 Number of times a machine is repeated on a job
XX(14)  ATRIB(8)                 Number of times data were invalid
XX(15)  ATRIB(9)                 Number of times extra tests were required

(a) All duration times for the oil process refer to the preanalysis or the wear debris
analysis.
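For readers reproducing the model outside SLAMSYSTEM, the sampling functions in Table 7 have close counterparts in Python's standard random module. The mapping below reflects our reading of the TRIAG/RNORM/UNFRM semantics (minimum/mode/maximum, mean/standard deviation, and lower/upper bound, respectively); the seed is only for reproducibility:

```python
import random

rng = random.Random(42)  # seeded so the sketch is repeatable

# Assumed SLAM II -> Python correspondences:
#   TRIAG(min, mode, max) -> rng.triangular(low, high, mode)
#   RNORM(mean, sd)       -> rng.normalvariate(mean, sd)
#   UNFRM(low, high)      -> rng.uniform(low, high)
xx3 = rng.normalvariate(9.5, 1.5)       # XX(3): vibration-reading duration
xx4 = rng.uniform(2.0, 5.0)             # XX(4): data-download duration
xx6 = rng.triangular(10.0, 16.5, 15.0)  # XX(6): plant-owner discussion (vib.)
print(round(xx3, 2), round(xx4, 2), round(xx6, 2))
```

Each call returns one sampled duration in minutes, so repeated calls inside a simulation loop reproduce the stochastic activity times of Table 4.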

Model Verification and Validation


After the program was developed and implemented, it was tested for
correctness and accuracy. A well-established verification scheme was
used (Sargent, 1988). All simulation functions that were included were
tested using simple entity flow tests to determine whether they work
properly. Then the simulation model was executed under different
straightforward conditions to determine whether the computer program
and its implementations are correct. A bottom-up dynamic testing
strategy was used. First, the five program modules representing the
different locations were tested. Then the overall model was run with a number of

Table 8. Data collection description

Label  Values  Histogram  Description/Name

CEX1   INT(10)  None               Extra staff
VCNT   INT(1)   75 cells, 0-225    Time in system for all vibration
OCNT   INT(1)   12 cells, 0-180    Time in system for all oil
RJC1   XX(13)   4 cells, 0-4       Invalid machine data, redo machine (vib.)
RJC2   XX(14)   4 cells, 0-4       Invalid downloaded data, redo machine (vib.)
RJC3   XX(15)   4 cells, 0-4       Inadequate results, further testing (vib.)
WOC    INT(1)   10 cells, 10-110   Time in system for location A (oil)
MMOC   INT(1)   10 cells, 20-170   Time in system for location B (oil)
MOC    INT(1)   10 cells, 15-165   Time in system for location C (oil)
CCOC   INT(1)   10 cells, 20-120   Time in system for location D (oil)
HOC    INT(1)   10 cells, 20-170   Time in system for location E (oil)
C1A    INT(1)   10 cells, 20-120   Machine's data time in system for location A
C2A    BET      11 cells, 0-132    Time between machine completions, location A
C1B    INT(1)   10 cells, 15-165   Machine's data time in system for location B
C2B    BET      11 cells, 0-110    Time between machine completions, location B
C1C    INT(1)   8 cells, 50-170    Machine's data time in system for location C
C2C    BET      11 cells, 0-110    Time between machine completions, location C
C1D    INT(1)   11 cells, 10-175   Machine's data time in system for location D
C2D    BET      12 cells, 0-120    Time between machine completions, location D
C1E    INT(1)   10 cells, 30-150   Machine's data time in system for location E
C2E    BET      11 cells, 0-110    Time between machine completions, location E

TRACE options to include in the testing process the values obtained
during program execution. The model was also validated, and its
satisfactory accuracy with respect to the study objectives was determined.
The techniques presented in Sargent (1988) (i.e., event validity, face
validity, and historical data validation) were used for validation of all
modules as well as of the overall model.

DISCUSSION OF TIME RESULTS


The total time for the system to run was 24,450 time units, where the
time units are minutes. This equates to 11 weeks, 22 hours, and 30
minutes based on a 35-hour working week, which is well within the limit
for a 13-week (3-month) turnaround.
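The conversion from simulated minutes to 35-hour working weeks can be checked directly:

```python
total_min = 24_450            # simulated run length in minutes
week_min = 35 * 60            # one 35-hour working week, in minutes
weeks, rem = divmod(total_min, week_min)
hours, minutes = divmod(rem, 60)
print(f"{weeks} weeks, {hours} hours, {minutes} minutes")  # 11 weeks, 22 hours, 30 minutes
```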
For the system to finish with this final time, there were stages at
which a staff member, such as staff5, was receiving assistance from
other members. This occurred three times for staff5, meaning that

without this aid the overall duration would have been much closer to,
or possibly over, the limit. Extra staff members were used for a 16-hour
time period. This reflects real-life situations, as with this condition
monitoring service there are other members of staff who may not
necessarily work in these processes but possess the skills required and
help when needed.
The program is designed with assign nodes at the start to make it easy
to alter any of the duration times or percentage rejections. On this basis
the adaptability of the program is very useful, as it has already proved
itself in arriving at the values implemented and the results produced.
Table 9 gives a summary of all times for each location.

Location A
The times in the system for location A are 37.9 minutes per machine for
the oil process and 39.4 minutes per machine for the vibration process,
giving a total time per machine of 38.0 minutes. The geographic
separation of the staff is one reason for the fast completion times, as
these staff do not share their workload with another location. An
exception is staff5, who performs a minimal amount of work and is
based at another location. Another reason is that the collection stage
predominantly takes longer to complete, and having sufficient staff not
involved in the analysis allows the machine completion times to be
reduced. Times for the oil side also had the aid of greasers, who
supplied the labor for 20% of the collection, thus contributing to the
short time periods. The time between completions was found to be 39.9
minutes, approximately the same as the creation time, indicating that
the flow through the system was constant.

Table 9. Summary of location times

Location  Time in system (vib.) (min)  Time in system (oil) (min)  Total time in system (min)  Completion time (min)

A         39.4                         37.9                        38.0                        39.9
B         24.9                         85.8                        40.0                        39.8
C         43.0                         88.2                        61.4                        40.1
D         23.5                         81.6                        31.1                        40.0
E         44.4                         106                         61.5                        39.6

Location B
Times for the system resulted in 85.8 minutes per machine for the oil
process and 24.9 minutes per machine for the vibration process, giving
an overall time per machine of 40.0 minutes. Staff6 collects oil and
vibration data, and staff7 performs vibration analysis for this location
only, which, as in location A, helps to reduce the times. The oil
analysis staff members, staff5 and staff9, are requested at several
locations, hence the higher times for the oil side even though the
number of oil machines is one-fifth that of the vibration machines. Also,
it generally takes longer to analyze on the oil side of the system
because of the types of procedures involved. Hence, when these staff
members are in more demand, the times in the system increase
considerably. The flow through the system was 39.8 minutes between
completions, again showing good consistency with the system time
results.

Location C
Times for the system resulted in 88.2 minutes per machine for the oil
process and 43.0 minutes per machine for the vibration process, giving
an overall time per machine of 61.4 minutes. Here one staff member is
dedicated to vibration data collection and one to oil sample collection,
benefiting the system times. For the analysis stage the times reflect the
availability of the staff members, hence the longer times for the oil and
the slight increase in times for the vibration. Oil analysis utilizes two
shared staff members, staff5 and staff9, whereas the vibration utilizes
only staff11, who is also shared. Another point to note is that the
number of oil machines is 41% of the total number of vibration
machines, again affecting the times in the system. The time between
completions is 40.1 minutes, again indicating flow consistency through
the system.

Location D
For this location the times in the system were 81.6 minutes per
machine for the oil process and 23.5 minutes per machine for vibration,
giving a total time in the system of 31.1 minutes per machine. The times
for the vibration were reasonably short because of the number of staff
members available for both collection and analysis. These staff members
were also available for the oil sample collection, which helped, but
only two shared staff members (staff5 and staff9) were available for the
analysis, hence the longer times. If staff14 had fewer members, longer
times would be observed, because the collection times have a great
bearing on the total times. As in location C, there were 25% fewer
machines, but fewer staff members to collect the samples. Thus the
results having similar outcomes is an indication of the effect of staffing
levels at the analysis stage. This means that the number of collectors
had little bearing on the result; the driving force behind the longer
times stems from the analysis staffing levels. The time between comple-
tions was 40.0 minutes, again indicating the consistency of the flow
through the system.

Location E
The location times in the system were 106 minutes per machine
for the oil process and 44.4 minutes per machine for vibration, giving a
total time in the system of 61.5 minutes per machine. These were the
longest times in the system of any of the five locations, and looking at
the staffing numbers for the whole process provides the justification.
The connection between the staffing numbers and the times in the
system, shown here to full effect, is something that the other
locations exhibited to a smaller degree. Although there are fewer machines
than in most locations, the numbers are comparable with those for
location C, but the times were influenced by the staffing numbers and
their shared workloads. Again, this is most evident with the oil sampling
and analysis, which take longer to perform than the vibration tasks,
especially in the analysis stage. The longer times had no effect on the
flow through the system, with a time between completions of 39.6 minutes.

CONCLUSIONS
Several points arose from the program results, ranging from the quality
of the program itself to the possible implementation of system alter-
ations. For the system modeled, simulation demanded some intrinsic
logical loops that proved successful. Although the staffing distribution
had complications, SLAM II handled the logic of the system with simple
execution. In the early stages of programming some problems evolved
related to the logic required. This logic was simplified, and the program
achieved the same tasks, only in a more efficient way, an indication
from a technical viewpoint of the versatility of the model, something
that was also evident in the relationship between the real system and
the model.
System changes were frequent, so the program had several updates
in order to keep a reasonable comparison between system and model.
The changes did not pose a problem, as the resources and variables were
set at the beginning of the program and the conditional branching was
easy to follow. This provided the adaptability required for a model of
this nature and moreover assisted in the optimization of resources and
the "what if" scenarios that were executed.
In optimizing the resources, the results showed that instrumentation
was sufficient and that the only difficulties with resources were the
staffing levels of the oil process. Staff5's average wait times and total
times were too high and needed to be addressed by implementing
assistance on three occasions for 16 hours. On the vibration side of the
system, the staffing levels were above requirements. When a different
scenario was implemented, the times for the oil process increased
because the staff involved spent more time on the vibration process.
Thus, the staffing numbers for the oil process are the deciding
component in scheduling and staff allocations.
The model developed satisfies its purpose and in doing so identifies
problem areas that need to be considered in the staffing arrangements.
In addition, it shows some of the capabilities that SLAM II has as a
simulation language and its ability to model the management process
for condition monitoring.

REFERENCES
AS 2625-1983. 1983. Rotating and reciprocating machinery: Mechanical vibrations,
Parts 1, 2, 3, and 4. North Sydney: Standards Association of Australia.
Beebe, R. 1988. Machine condition monitoring. Victoria: Engineering Publications.
Chen, Y., and X. Li. 1996. Integrated diagnosis using information-gain-weighted
radial basis function neural networks. Comput. Ind. Eng. 30:243–255.
Huang, H.-H., and B. Wang. 1996. Machine fault diagnostic using a transputer
network. Comput. Ind. Eng. 30:269–281.
Lin, D., and B. Wang. 1996. Performance analysis of rotating machinery using
enhanced cerebellar model articulation controller (E-CMAC) neural networks.
Comput. Ind. Eng. 30:227–242.
Moubray, J. 1991. Reliability-centered maintenance. Oxford: Butterworth-Heinemann.
Patel, S. A., and A. K. Kamrani. 1996. Intelligent decision support system for
diagnosis and maintenance of automated systems. Comput. Ind. Eng.
30:297–319.
Pritsker, A. A. B. 1995. Introduction to simulation and SLAM II, 4th ed. West
Lafayette, IN: Systems Publishing.
Pujadas, W., and F. Chen. 1996. A reliability centered maintenance strategy for
a discrete part manufacturing facility. Comput. Ind. Eng. 31:241–244.
Sargent, R. G. 1988. A tutorial on validation and verification of simulation
models. Proceedings of the 1988 Winter Simulation Conference, 33–39.
Englewood Cliffs, NJ: IEEE Press.
Shyur, H.-J., J. T. Luxhoj, and T. P. Williams. 1996. Comput. Ind. Eng.
30:257–267.
