

Proceedings of the 2008 IEMS Conference

International Conference on Industry,


Engineering, and Management Systems
March 10 - 12, 2008

Dear Conference Participants:

It is with pleasure that we present to you the Proceedings of the 2008 International
Conference on Industry, Engineering and Management Systems (IEMS). The
papers presented this year were of consistently high quality in their respective
fields. The papers cover a wide range of topics in the business and engineering
disciplines, integrating concepts that further the mission of the IEMS Conference.

We present these Proceedings to you in the spirit of continuous improvement. Your
comments and suggestions regarding the continued improvement of the
Proceedings are always welcome.

These proceedings would not have been possible without the valuable
contributions of our Track Chairs, who spent considerable time and effort reviewing
all the papers, and of our Administrative Coordinator, Elki Issa, whose work behind
the scenes helps make our Conference a success.

We look forward to seeing you at IEMS 2009!

Warm Regards,

Nabeel Yousef, Ph.D.


IEMS Publications Editor

2008 IEMS Officers

Nael Aly, Conference Co-Chair, California State University, Stanislaus
Ahmad Elshennawy, Conference Co-Chair, University of Central Florida
Adel Ali, Program Coordinator, University of Southern Mississippi
Nabeel Yousef, Publications Editor, University of Central Florida
Alfred Petrosky, Program Chair, California State University, Stanislaus

TRACK CHAIRS

Accounting/Finance: LuAnn Bean, Florida Institute of Technology
Abstract Concepts in Computer Science: Dia Ali, University of Southern Mississippi
Automation/Intelligent Computing: Andrzej Gapinski, Penn State University
Construction Management: Mostafa Khattab, Colorado State University
Decision Making in Mgmt & Engineering: E. Ertugrul Karsak, Galatasaray University
Decision Support Systems: Dia Ali, University of Southern Mississippi
Education and Training: Ralph Janaro, Clarkson University
Entrepreneurship: Colin Benjamin, Florida A&M University
Global Applications: Djehane Hosni, University of Central Florida
Human Computer Interaction: Mohammad Khasawneh, SUNY, Binghamton
Human Engineering: Deborah Carstens, Florida Institute of Tech.
Industry and Academia Collaboration: Alexandra Schönning, Univ. of North Florida
Lean Enterprise: Isabelina Nahmens, Univ. of Central Florida
Lean Six Sigma: Sandra Furterer, Univ. of Central Florida
Management Information Systems: John Wang, Montclair State University
Management & Organizational Behavior: Ed Hernandez, Cal State Univ., Stanislaus
Management of Technology: Gordon Arbogast, Jacksonville University
Marketing: Kaylene Williams, CSU, Stanislaus
Operations Management: J.S. Sutterfield, Florida A&M University
Project Management: Stephen Allen, Truman State University
Quality Management: Hesham Mahgoub, South Dakota State Univ.
Simulation and Modeling: Kevin O'Neill, Plattsburgh State University
Stat. Quality Improvement & Control: Gamal Weheba, Wichita State University
Supply Chain Management: Ken Morrison, Kettering University
Technology Commercialization: Tiki Suarez, Florida A&M University

TABLE OF CONTENTS

Shalini Jajpuria and Suraj M. Alexander 1


EVALUATING THE SECURITY VULNERABILITIES IN MILK
COLLECTION AND TRANSPORT

Praveen Janjirala and Suraj M. Alexander 10


IMPROVING THE LOGISTICS OF MILK COLLECTION AND TRANSPORT

Stephen Allen 20
A LOOK AT CLOSING THE LOOP IN AN UNDERGRADUATE
PROJECT MANAGEMENT CLASS

Shahram Amiri 25
INFORMATION AND TELECOMMUNICATION TECHNOLOGY: BLUEPRINT FOR
SOCIAL-ECONOMIC GROWTH IN DEVELOPING NATIONS

Gordon Arbogast 31
DEDICATED ACCOUNT TEAMS: WORTH THE COST?

Michael Bell and Vincent Omachonu 37


PROCESS FOCUS AND COMMITMENT: THE KEY TO ISO 9001 IMPLEMENTATION

Donald Brandon, Brian Bourgeois, Ashley Morris 41


HYDROGRAPHIC MISSION PLANNING PROTOTYPE COMBINING
GIS AND AUTOSURVEY™

John J. Burbridge, Jr. and Coleman Rich 47


SUPPLY CHAIN MANAGEMENT: PAST, PRESENT, AND FUTURE

Deborah S. Carstens and Stephanie M. Rockfield 54


CHANGING THE HUMAN DIMENSION IN INFORMATION SECURITY

Thien-My Dao, Barthelemy H.Ateme-Nguema, and Victor Songmene 59


THE NEW APPROACH BASED ON RSM AND ANTS COLONY SYSTEM FOR
MULTIOBJECTIVE OPTIMIZATION PROBLEM: A CASE STUDY

Y. Desai, E. Park, P. Demattos, J. Park 61


PROCESS IMPROVEMENT IN THE EMERGENCY DEPARTMENT AT MOSES CONE HOSPITAL

Daniel J. Fonseca, Terry Brumback, Christopher M. Greene, Matthew E. Elam 65


DEVELOPMENT OF A WEB-BASED MANUFACTURING EDUCATION TOOL

Nabeel Yousef and Tarig Ali 70


MINIMIZING LINE OF DUTY DEATHS (LODD) FOR FIREFIGHTERS THROUGH
PROPOSING A TRACKING SYSTEM TO TRACK FIRE FIGHTERS AT FIRE SCENES

Wei Zhan, Jacob Schulz, John Crenshaw and Tim Hurst 75


ADDRESSING THE ENGINEERING NEEDS OF THE NUCLEAR POWER INDUSTRY

Andrzej Gapinski 80
PARTIAL DISCHARGE (PD): PSPICE SIMULATIONS

H. M. Hassan and Saleh M. Al-Saleem 85


ON LEARNING PERFORMANCE ANALOGY BETWEEN SOME PSYCHO-LEARNING
EXPERIMENTAL WORK AND ANT COLONY SYSTEM OPTIMIZATION

Xiaochun Jiang, BaaSheba Rice, Gerald Watson, and Eui Park 90


A MULTIVARIATE STATISTICAL APPROACH TO ANALYZE NURSING ERRORS

Jerry W. Koehler and Thomas W. Philippe 95


IMPLEMENTING INNOVATIVE PERFORMANCE IMPROVEMENT PRACTICES:
EFFECTS OF WORKPLACE DECEPTION

Cynthia Brown-LaVeist, S.K. Hargrove, J. Doswell, Deborah Ihezie 99


DEVELOPING AN INFORMATION STRATEGY FOR CIVIL AVIATION
COMPETITIVENESS: THE JOINT TECHNICAL DATA INTEGRATION (JTDI)
PROJECT AT MORGAN STATE UNIVERSITY

Ashraf H. Galal and Tamer A. Mohamed 104


DIMENSIONS OF SERVICE QUALITY FOR TV SATELLITE CHANNEL
PROGRAMS-A FRAMEWORK FOR QUALITY IMPROVEMENT

Robert L. Mullen 111


AN INTELLIGENT EXPERT SYSTEM FOR THOSE WHO WISH TO BECOME BILLIONAIRES

Robert L. Mullen 114


AN INTELLIGENT EXPERT SYSTEM FOR THOSE ENTERING THE INTERNET/WEB INDUSTRY

Ali Ahmad and Isabelina Nahmens 117


ASSESSING SIX SIGMA PROJECT IMPROVEMENTS- A STATISTICAL PERSPECTIVE

Imshaan Somani, Ha Van Vo and R. Radharamanan 125


A NOVEL METHOD TO INVESTIGATE THE POSSIBLE LOCATIONS OF
HUMAN BODY INJURIES DURING A VERTICAL FALL

Angela P. Ansuj and R. Radharamanan 131


PRODUCT AND PROCESS IMPROVEMENT IN ELECTRONIC MANUFACTURING

Jonathan B. Ksor, Ha Van Vo and R. Radharamanan 137


DETERMINING THE MOST EFFECTIVE MIDSOLE MATERIAL
THICKNESS IN PREVENTING CALCANEAL FRACTURES

Federica Robinson, Mario Marin, Serge Sala-Diakanda,


Jose Sepulveda, Luis Rabelo, Kent Williams 142
DEVELOPING A DATABASE THAT EXECUTES COMPREHENSIVE,
COMPARATIVE ANALYSES AMONG UNDERGRADUATE INDUSTRIAL
ENGINEERING PROGRAMS NATIONWIDE

Alicia Combs, Mihaela Petrova-Bolli, Eric Tucker and Dana Johnston 147
APPLYING SIX SIGMA TO SERVICE ORGANIZATIONS: A CASE FOR THE
FLORIDA DEPARTMENT OF CHILDREN AND FAMILIES

Kendra Lee, LaShaveria Keeton, Carsolina Walton, Tiki L. Suarez-Brown 153


RESOURCE UTILIZATION: A STUDY ON HOW TO IMPLEMENT A
COMMUNICATION SYSTEM

Dominique Drake and J. S. Sutterfield 160


THE REVOLUTION OF SIX-SIGMA: AN ANALYSIS OF ITS THEORY AND APPLICATION

J. S. Sutterfield, Sade L. Chaney and Tiffani R. Davis 173


THE USE OF TAGUCHI METHODS TO OPTIMIZE SHAPE FACTORS FOR
MAXIMUM WATER JET STABILITY

Kaylene C. Williams, Al Petrosky, Edward H. Hernandez and Robert Page 182


IN'S AND OUT'S OF PRODUCT PLACEMENT

Banu Karsak and Elgiz Yılmaz 194


NEW MARKETING PUBLIC RELATIONS TREND: CUSTOMER AS A
BUSINESS PARTNER IN CORPORATE INNOVATION STRATEGIES

Joycelyn Finley-Hervey, Tiki L. Suarez-Brown and Forrest Thompson 204


EXPLORATION OF FACTORS AFFECTING ONLINE SOCIAL NETWORKING SERVICES

Sandra Archer, Robert L. Armacost and Julia Pet-Armacost 209


RECENT LITERATURE IN STOCHASTIC RESOURCE CONSTRAINED PROJECT SCHEDULING

Michael Ferguson, Jesse Parkhurst, Alexandra Schonning, Simeon Ochi 215


DESIGN AND MANUFACTURING OF AN ACOUSTIC PANELS TEST FRAME

LuAnn Bean and Jeff Manzi 220


A MODEL FOR AUDIT BRAINSTORMING: THE SEARCH FOR MATERIAL
MISSTATEMENTS AND FRAUD

Evaluating the Security Vulnerabilities in


Milk Collection and Transport
Shalini Jajpuria, University of Louisville
Suraj M. Alexander, University of Louisville
suraj.alexander@louisville.edu

Abstract

The current approach to milk collection and transport with manual monitoring and the proposed approach of milk collection and transport with wireless electronic monitoring are assessed with hierarchical task analysis and a software system referred to as "CARVER + Shock", developed by the Food and Drug Administration / Center for Food Safety and Applied Nutrition (FDA/CFSAN). The latter is a target prioritization tool adapted from the military version "CARVER" that assesses the criticality, accessibility, vulnerability, etc. of any part of a system (target). The additional term "shock" is used to cover the health, economic, and psychological impact of an attack on the food industry. We expect this effort to define areas that still need attention in the context of security and to help us define standards for secure transport of milk from farms to processors.

1. Introduction

Current approaches to collecting milk from farms and transporting it to processing stations are manual, paper intensive and prone to errors. The process is also vulnerable to failures in monitoring and safety features that could lead to the rejection of the whole load of milk at the processing station. With the support of a homeland security grant, a wireless electronic monitoring system has been developed that reduces the chances of errors and failures in the process of milk collection and transport. This system also improves security against unauthorized tampering with the milk and enhances traceability if a problem does occur.

Figure 1 illustrates the current process of collecting milk from farms. When milk haulers arrive at a farm they evaluate the contents of the milk storage tank on characteristics such as odor, color and visible contaminants. If the milk appears satisfactory, they measure the load with a scale and convert this to gallons. The milk is then agitated for 5 to 10 minutes depending on the capacity of the milk tank at the farm. During the agitation process, various records are completed by hand and the set-up process to unload the milk to the tanker or tanker trailer is completed. After the agitation, the temperature of the milk is checked to see if it is within standards (40 deg F). If it is, samples are collected and placed in an ice-cooled container in the milk truck for regulatory testing. These tests have to be completed within a certain time interval; hence the time of collection is recorded on the sample container. The milk is then transferred to the milk truck. Finally, if the milk tank is emptied, it is cleaned and the haulers move on to the next farm.

2. Micro Evaluation of Vulnerability using Hierarchical Task Analysis

For a micro evaluation of current process vulnerabilities, a hierarchical task analysis (HTA) of the process was conducted [1]. This analysis revealed a number of process steps that were susceptible to potential errors, procedure violations and outside interference. Examples include data recording errors, such as erroneous stick-to-volume conversions, incorrect recording or entry of milk temperature, and incorrect recording of the time of sample collection, and process violations, such as inadequate duration of agitation before samples are collected and incorrect storage of milk samples.


Figure 2a and 2b illustrate the HTA for a portion of the process, when the milk transfer is complete and the milk tank is empty.

3. Wireless Security Monitoring System (SMS)

In order to improve efficiency, reduce process errors and improve security, a wireless monitoring system (SMS) integrated with GPS and electronic locks was developed for milk trucks [2]. The system has the ability to ensure that milk collected from the farm has only authorized access and that the samples collected for further testing are stored at the right temperature and tested within the requisite time interval. The system records the correct amount of milk collected; the miles traveled in different states by the hauling truck; the service hours of the drivers; and the wash times of the truck. It also enables optimal routing and cycle time of the milk trucks from collection through unloading at the processing plants [3], [4]. The system also allows the printing of the required documentation, such as barn tickets, milk sample labels, wash tickets, etc.

Figure 3a illustrates the SMS and Figure 3b defines the communication links in the SMS.

4. Assessment of the Improvement in Security with SMS using CARVER + Shock

"CARVER + Shock", developed by the Food and Drug Administration / Center for Food Safety and Applied Nutrition (FDA/CFSAN), is used by the United States Department of Agriculture Food Safety and Inspection Service (USDA FSIS) to assess the vulnerabilities of a system or infrastructure to an attack [5]. It allows the user to think like an attacker to identify the most attractive targets for an attack. By contrasting a "CARVER + Shock" assessment of the current milk collection and transport process with the proposed process, one can assess the improvement in security with SMS.

"CARVER + Shock" stands for the following attributes, used to evaluate the attractiveness of a target for attack:
• Criticality: This attribute helps to assess the critical impact an attack can have on public health and the economy.
• Accessibility: This attribute helps to assess the ability to physically access and egress from a target.
• Recuperability: This attribute helps to assess the ability of a system to recover from an attack.
• Vulnerability: This attribute helps to assess the ability to accomplish a successful attack.
• Effect: This attribute helps to assess the amount of direct loss from an attack as measured by loss of production.
• Recognizability: This attribute helps to assess the ease of identifying a target.
• Shock: This attribute helps to assess the combined health, economic and psychological impacts of an attack within the food industry.

The assessment process using CARVER + Shock is as follows:
1. A process flow chart is constructed using the tool.
2. CARVER + Shock scores are based on answers to specific questions asked for each process node in the flow chart.
3. The results are presented as a color-coded process flow diagram, colored in different shades of blue based on the total CARVER + Shock score for each process node. Process nodes with low scores, which imply lower risk, will be light blue, becoming darker as the CARVER + Shock score increases. Double-clicking on a process node displays the node's individual assessment, which provides more detailed information regarding the risk assessment of the process node, including individual scores for each element that makes up the CARVER + Shock total score. Additional results are displayed as a bar graph. Clicking on each bar will bring up text with


suggestions for risk mitigation and improving security at process nodes.
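To make the node scoring concrete, the short sketch below sums per-attribute scores for a few process nodes and buckets the totals into shades, in the spirit of the color-coded diagram described above. The node names, attribute values and shade thresholds are hypothetical illustrations, not output of the FDA/CFSAN tool.

```python
# Illustrative only: hypothetical CARVER + Shock attribute scores per process node.
# Each node gets seven scores (C, A, R, V, E, R2, S); the total drives the shading.
nodes = {
    "Milk hauling truck": {"C": 6, "A": 7, "R": 8, "V": 2, "E": 4, "R2": 9, "S": 9},
    "Storage tank":       {"C": 6, "A": 5, "R": 7, "V": 2, "E": 4, "R2": 9, "S": 8},
    "Sealed truck (SMS)": {"C": 1, "A": 1, "R": 2, "V": 3, "E": 3, "R2": 9, "S": 1},
}

def total_score(scores):
    """Total CARVER + Shock score is simply the sum of the attribute scores."""
    return sum(scores.values())

def shade(total, low=25, high=40):
    """Map a total score to a qualitative shade (darker = higher risk); thresholds are assumed."""
    if total < low:
        return "light blue"
    if total < high:
        return "medium blue"
    return "dark blue"

for name, scores in nodes.items():
    t = total_score(scores)
    print(f"{name}: total={t}, shade={shade(t)}")
```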

Security improvements in the process of milk collection and transport with SMS are easily seen in the difference in color of the process nodes in Figure 4 when contrasted with those in Figure 5. The nodes in Figure 5 have a much lighter shade of blue than those in Figure 4, illustrating the improvement.

Tables 1a and 1b show the individual criteria scores that make up the total CARVER + Shock score for each process node.

5. Conclusions

This paper has illustrated the use of process flow analysis and HTA to identify areas of vulnerability to errors and security threats in the current milk collection process. It also illustrates the application of the tool CARVER + Shock to assess the improvement in security of the milk collection process with the SMS.

References

1. Hierarchical Task Analysis, http://classweb.gmu.edu/ndabbagh/Resources/Resources2/hierarchical_analysis.htm

2. "Invention Disclosure for Electronic Monitoring System for Securing Milk from Farm to Processor" by Dr. Fred Payne, Professor, Biosystems and Agricultural Engineering, University of Kentucky.

3. Janjirala, P., Alexander, S. M. and Evans, G. W., "Improving the logistics of milk collection via simulation", Proceedings, Industry, Engineering and Management Systems Conference, March 12-14, 2007.

4. Janjirala, P. and Alexander, S. M., "Improving the logistics of milk collection and transport", Proceedings, Industry, Engineering and Management Systems Conference, March 10-12, 2008.

5. User manual of the CARVER + Shock software tool, Food and Drug Administration (FDA) website.


Figure 1 – Process of Collecting Milk at Farms (flowchart: hauler arrives at farm; reject load on appearance?; measure load; record farm info. and set-up; agitate; record temperature and sample info.; reject load on temperature?; collect samples; collect milk; clean-up and depart)


Figure 2a – Illustration of HTA for the Process Steps after the Milk Tank is Emptied

Figure 2b – Areas of Possible Process Errors from the HTA of Process Steps in Figure 2a


Figure 3a – Wireless Monitoring System for Milk Transport Security


Figure 3b – Communication links within the SMS (milk hauler/sampler handheld device with barcode reader and cellular/wireless communication, the Security Monitoring System (SMS), a data server, and data users: the milk marketing agency, transportation company, milk processing plant and data processor)


Figure 4 - CARVER + Shock Assessment of Current Process

Figure 5 – CARVER + Shock Assessment of Milk Collection Process with SMS


C A R V E R S Total Process Element

6 8 8 1 4 10 10 48 Milk Hauling Truck (Tanker Truck)


10 9 4 1 3 10 10 47 Milk
6 6 8 1 4 10 10 44 Milk * (Storage Tank Refrigerated)
6 6 8 1 4 10 10 45 Milk * (Tanker Truck Filler)
6 4 10 1 4 10 10 44 Empty tank at Processing Plant * (Pump)
5 5 10 1 4 10 9 44 * at the Processing Plant (Receiving)
10 4 5 1 3 10 10 44 * [2] (Clean/Sanitize Equipment)
5 4 10 1 4 10 9 43 Clean and seal milk house tank (Clean/Sanitize Equipment)
5 4 8 1 4 10 9 42 Go for next Pick up (Transport)
6 6 1 1 4 10 10 38 Seal top and rear lids (Sealer)
6 4 4 1 4 10 10 38 Processing Plant * [2] (Storage Tank Refrigerated)

Table 1a – CARVER + Shock Scoring Results for Current Process

C A R V E R S Total Process Element

1 4 2 1 3 10 1 23 Milk
1 1 2 3 3 10 1 22 Seal truck until next pick up. (Sealer)
1 1 5 1 3 10 1 22 Control Checks all the Locks (Other Control Checks)
1 1 4 1 3 10 1 21 Milk* (Tanker Truck Filler)
1 1 4 1 3 10 1 21 * Milk (Receiving)
1 1 4 1 3 10 1 21 Milk hauling Truck (Tanker Truck)
1 1 4 1 3 10 1 21 Clean/Sanitize Milk storage tank (Clean/Sanitize Equipment)
1 1 4 1 3 10 1 21 Milk * (Storage Tank Refrigerated)
1 1 4 1 3 10 1 21 Seal top and rear lids (Sealer)
1 1 4 1 3 10 1 21 * Milk (Pump)
1 1 4 1 3 10 1 21 go for next pick up (Transport)
1 1 4 1 3 10 1 21 Control Checks all the locks before receiving (Other Control Checks)
1 1 4 1 3 10 1 21 Clean/Sanitize tank after unloading (Clean/Sanitize Equipment)
1 1 2 1 3 10 1 20 Scan processing plant ID (Scanner)
1 1 2 1 3 10 1 20 Processing Plant * [2] (Storage Tank Refrigerated)
1 1 2 1 3 10 1 20 Scan driver, farm and truck ID (Scanner)

Table 1b – CARVER + Shock Scoring Results for Process with SMS


Improving the Logistics of Milk Collection and Transport


Praveen Janjirala, University of Louisville
Suraj M. Alexander, University of Louisville
suraj.alexander@louisville.edu

Abstract

As part of a homeland security project, an electronic tracking system has been developed to secure the collection and transport of milk from farms to processors. In order to make this system financially attractive to stakeholders, we demonstrate that the capabilities of the system can also be used for optimized logistics control. Optimized scheduling and logistics will result in lower fuel expenses and less wait time at the milk processing station. This paper illustrates our attempt to define an optimal route schedule using Geographical Information Systems (GIS). This schedule is being developed using real data provided by The Alan Wilson Hauling Company, a milk hauling company in Somerset, Kentucky. The routing algorithm in GIS is able to adjust the routes based on weather and traffic conditions. Rerouting haulers is also feasible in case the scheduled route has to be changed owing to new information related to a farm, such as a farm being disapproved by a milk agency or a power failure at a farm. The GIS system also enables the selection of an optimum location for a depot for the milk trucks used to collect milk from a specified set of farms.

1. Background

This paper relates to improving the logistics of collecting milk from farms and delivering it to milk processors. It addresses problems related to scheduling, routing and delivery efficiency, which impact profit levels and employee turnover. Since current methods of securing milk are manual, paper intensive and prone to errors, the goal of an ongoing multi-university homeland security project (OGMB060636) is to design and implement a technology-based "tracking" system on milk hauling trucks. This system would have the ability to establish the integrity of seals, locks, and routes via sensors, electromechanical locks, wireless technologies and global positioning systems (GPS). A provisional patent application has been filed for this system. The ultimate objective of this development effort is to have dairy trucks throughout the U.S. incorporate this technology, by retrofitting current trucks and having new trucks built with this system. Such a tracking system is promising for securing not only the nation's milk supply, but also liquid and dry bulk food transportation across North America. Almost everyone realizes the need for a secure food transport system in this era. However, a major barrier to implementing technology-based systems, such as the one proposed, is their cost. The problem is that the expenses of retrofitting and of new designs have to be passed on to the transporting/hauling companies. However, transporters want to contain their costs to maintain their profit levels in the face of higher costs resulting from increasing fuel prices and stringent EPA requirements. For example, emission control hardware adds about $7,000 to $10,000 to the cost of a new truck, increases its weight by 130 to 150 pounds, and requires low sulfur diesel fuel [1]. This paper demonstrates that the technologies implemented for security purposes can be used to improve routing and delivery efficiency and thereby reduce costs for transporters/haulers. Demonstrating the potential for cost reduction is crucial for making the implementation of the proposed technology-based system acceptable.

The potential for cost reduction with improved logistics is huge. For example, if milk collection and delivery to a processing plant takes a long time owing to inefficient routings and delays, the milk is vulnerable to spoilage, resulting in rejection at the processing plant; inefficient


routings also lead to increased fuel consumption model was developed with three hypothetical
and cost. Another common problem that occurs hauling companies of different sizes, servicing
frequently is the extensive delay in unloading clusters of farms and unloading at two
the milk at processing plants, owing to the processing stations, with different capacities.
limited capacity at the plants relative to the The preliminary simulation analysis and runs
number of trucks waiting to be unloaded. The with an optimization tool [OPTQUEST],
possibility of milk spoilage during this wait demonstrated that adaptive scheduling can
time, costly truck idle time and extensions of a indeed have a positive impact on reducing the
milk hauler’s work day results in unhappy milk wait time at the processing plant and other
haulers and impacts driver retention significantly delays [3].
[2].
3. Demonstration of Logistics Decisions using
2. Objective GIS

This paper demonstrates the use of In order to demonstrate, with some credibility,
Geographical Information Systems (GIS) with the application of GIS for logistics related
the real time information available via the new decisions, real data is used from The Alan
system to improve the logistics of milk Wilson Hauling Company, in Somerset,
collection and delivery. For example, consider a Kentucky, which collects milk from 388 farms
milk truck traveling interstate after collecting in Kentucky and processes the milk at The
milk from farms in Florida; it is transporting the Southern Belle Dairy in Somerset, Kentucky.
milk to a designated processing plant in
Kentucky. En route to Kentucky, real time 3.1 Basic Process of Milk Hauling
information on truck location, capacity, Daily schedules are provided to milk haulers
temperature of its contents, etc. is available and which define the cluster of farms that each milk
transmitted to a monitoring system in real time. hauler should visit to collect milk and take to the
If, owing to weather and traffic conditions, as processor. This cluster depends on the capacity
the truck approaches say, Atlanta, the milk of the milk truck and the amount of milk
temperature appears to be headed towards the expected from each farm. A typical truck for the
red zone (unacceptable region), adaptive above mentioned hauling company has a
decisions can be made in real time, with the GIS capacity of 50,000 gallons. At each farm the
based system, such as redirecting the truck to a milk is examined for any visible contamination
nearby processing plant with sufficient capacity and its temperature is checked to see if it within
in the Atlanta area. In a similar way, routing standards (i.e. < 40 deg. F). If the milk meets
and scheduling decisions can be adjusted based the required standards, samples are collected and
on real time information to meet other stored in refrigerated container on the truck and
objectives, such as minimizing travel times, and the milk is loaded into the truck. The volume of
wait times. Examples of adaptive scheduling milk collected from the farm is also noted. This
decisions include defining staggered release whole process at a farm takes about 20 minutes.
schedules of milk trucks for the collection of If the milk truck is not full, it goes to the next
milk, based on information of current truck farm on its route, otherwise it goes to the
locations, routes and collection times to processing station. At the processing station, the
minimize wait time at the unloading stations, samples are evaluated and the milk truck is
and redirecting a truck to a selected alternate checked for any signs of intrusion (currently this
farm, if the power is off at a particular farm. is done by examining the seals, with the new
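The adaptive redirection decision described above can be sketched as a simple rule: project the milk temperature forward, and if it is expected to exceed the 40 deg F standard before the planned plant is reached, divert to the nearest plant with enough spare capacity. In the sketch below the plant list, capacities, temperature trend and straight-line distance are hypothetical stand-ins for the real-time SMS and GIS data.

```python
import math

# Hypothetical plant records: location and spare unloading capacity (gallons).
plants = [
    {"name": "Plant A (Atlanta)",  "lat": 33.75, "lon": -84.39, "spare_gal": 8000},
    {"name": "Plant B (Kentucky)", "lat": 37.84, "lon": -84.27, "spare_gal": 12000},
]

TEMP_LIMIT_F = 40.0  # regulatory standard cited in the paper

def projected_temp(current_f, rise_per_hr, hours_remaining):
    """Linear projection of milk temperature at the planned destination."""
    return current_f + rise_per_hr * hours_remaining

def distance_miles(lat1, lon1, lat2, lon2):
    """Rough great-circle distance; a GIS network distance would be used in practice."""
    r = 3959.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def choose_plant(truck, hours_to_planned_dest):
    """Keep the planned destination unless the milk is projected to exceed the limit;
    otherwise divert to the nearest plant with enough spare capacity."""
    projected = projected_temp(truck["temp_f"], truck["rise_per_hr"], hours_to_planned_dest)
    if projected <= TEMP_LIMIT_F:
        return truck["planned_plant"]
    feasible = [p for p in plants if p["spare_gal"] >= truck["load_gal"]]
    if not feasible:
        return truck["planned_plant"]  # no feasible diversion; keep the plan
    nearest = min(feasible, key=lambda p: distance_miles(truck["lat"], truck["lon"], p["lat"], p["lon"]))
    return nearest["name"]

truck = {"lat": 34.5, "lon": -84.0, "temp_f": 38.5, "rise_per_hr": 0.5,
         "load_gal": 6000, "planned_plant": "Plant B (Kentucky)"}
print(choose_plant(truck, hours_to_planned_dest=5.0))
```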
2.1 Preliminary analysis

To evaluate and demonstrate impacts, such as the wait time at processing plants based on dynamic scheduling decisions, a simulation model was developed with three hypothetical hauling companies of different sizes, servicing clusters of farms and unloading at two processing stations with different capacities. The preliminary simulation analysis, and runs with an optimization tool (OptQuest), demonstrated that adaptive scheduling can indeed have a positive impact on reducing the wait time at the processing plant and other delays [3].

3. Demonstration of Logistics Decisions using GIS

In order to demonstrate, with some credibility, the application of GIS for logistics-related decisions, real data is used from The Alan Wilson Hauling Company, in Somerset, Kentucky, which collects milk from 388 farms in Kentucky and processes the milk at The Southern Belle Dairy in Somerset, Kentucky.

3.1 Basic Process of Milk Hauling

Daily schedules are provided to milk haulers which define the cluster of farms that each milk hauler should visit to collect milk and take to the processor. This cluster depends on the capacity of the milk truck and the amount of milk expected from each farm. A typical truck for the above-mentioned hauling company has a capacity of 50,000 gallons. At each farm the milk is examined for any visible contamination and its temperature is checked to see if it is within standards (i.e., < 40 deg F). If the milk meets the required standards, samples are collected and stored in a refrigerated container on the truck, and the milk is loaded into the truck. The volume of milk collected from the farm is also noted. This whole process at a farm takes about 20 minutes. If the milk truck is not full, it goes to the next farm on its route; otherwise it goes to the processing station. At the processing station, the samples are evaluated and the milk truck is checked for any signs of intrusion (currently this is done by examining the seals; with the new system it would be through computerized records that determine when and for what reason the locks providing access to the milk are opened). If the milk samples from any farm fail the evaluation standards or if the seals are broken, then all the milk carried by the truck is


rejected. The actual process time for unloading the milk at the processing plant is around 40 minutes. However, as mentioned before, milk haulers spend considerable queue time "waiting their turn" at the processing station. Traffic conditions, inclement weather, and insufficient milk at farms all cause an increase in throughput time for a milk hauler. This increases the probability of milk spoilage and unhappy milk haulers.

3.2 GIS Application to Milk Hauling Logistics

With the use of GIS a number of the problems mentioned above can be mitigated. GIS is capable of providing routes considering traffic, weather and road conditions. Also, since all the farm data is in the GIS database, it can provide information to reroute the milk hauler to another farm in the vicinity for several reasons, such as additional capacity available on the truck, the farm which the hauler was scheduled to visit being taken off the list of approved farms, or the scheduled farm being out of power, thus making milk collection infeasible. Demonstrations of these capabilities of GIS are provided in the following sections.

3.3 GIS Setup

First, ArcGIS Network Analyst is used to create a route layer, called the Network Dataset layer, to locate farms and interconnecting roads on the map. Next, the Network Analysis Layer is used to create routes based on various objectives, such as quickest, shortest or most scenic; time window based; cost based; etc.

3.4 GIS Demonstration

Figure 1 shows the road structure and the farms located in Kentucky visited by the haulers of the Alan Wilson Hauling Company. Figure 2 offers a close-up view of the road structure, available from GIS.

3.5 Routing Demonstrations

3.5.1 Shortest Path: To illustrate this demonstration, first the "route analysis layer" in ArcGIS Network Analyst is programmed to use the closest facility heuristic to define a route from a starting location through the seven potential milk farms, listed in Table 1, before proceeding to the milk processor. Seven farms are selected for demonstration purposes since, on average, seven farms are visited before the milk hauler goes to the processor with a full tank.

1. Producer: Marty Clark
   Address: 4311 Knob Lick-Wisdom Road, Knob Lick 42154

2. Producer: Daniel Adams
   Address: 1304 Milltown Church Road, Milltown 42761

3. Producer: Amy Coomer
   Address: 111 Long Meadow Drive, Greensburg 42743

4. Producer: Hubert G. Wright
   Address: 840 Elmo Russell Road, Summersville 42782

5. Producer: Terry Towbridge
   Address: 5720 Hiseville Park Road, Horse Cave 42749

6. Producer: Robert Nissley
   Address: 138 Wheeler Road, Clarkson 42726

7. Producer: Wayne Eial
   Address: 1010 D David Court, Elizabethtown 42701

The milk processing station is located at 388 Ben Hurt Drive, Morgantown 42261.

Table 1 – Addresses of the seven farms and the milk processor selected for demonstration purposes

The route established by the Network Analyst is illustrated on the map in Figure 3.
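Outside of ArcGIS, the closest-facility idea can be illustrated with a simple greedy nearest-neighbor ordering of stops, as in the sketch below. This is not the Network Analyst algorithm; the coordinates are made-up placeholders and straight-line distance stands in for network (road) distance.

```python
import math

# Hypothetical planar coordinates (e.g., miles on a local grid) for the demonstration stops.
start = ("Depot", (0.0, 0.0))
farms = [("Farm 1", (3.0, 4.0)), ("Farm 2", (6.0, 1.0)), ("Farm 3", (2.0, 9.0)),
         ("Farm 4", (8.0, 7.0)), ("Farm 5", (5.0, 5.0)), ("Farm 6", (9.0, 2.0)),
         ("Farm 7", (1.0, 6.0))]
processor = ("Processor", (10.0, 10.0))

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_route(origin, stops, end):
    """Visit the nearest unvisited farm each time, then finish at the processor."""
    route, here = [origin], origin
    remaining = list(stops)
    while remaining:
        nxt = min(remaining, key=lambda s: dist(here[1], s[1]))
        remaining.remove(nxt)
        route.append(nxt)
        here = nxt
    route.append(end)
    return route

route = greedy_route(start, farms, processor)
total = sum(dist(a[1], b[1]) for a, b in zip(route, route[1:]))
print(" -> ".join(name for name, _ in route), f"(total distance {total:.1f})")
```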


3.5.2 Time Window Based Routing: Routing can also be based on "time windows", where trucks are scheduled to arrive at the different farms within a specified time slot. Hence, with the GIS tool a schedule and route are established so that milk haulers arrive at the farms within the allotted time window. This is quite a common requirement, since haulers maintain close relationships with farm owners and try to satisfy their scheduling preferences. Table 2 illustrates a time-window based routing from Farm 1 to Farm 2, where Farm 1 is given a time window of 8:00 a.m. to 8:20 a.m. and Farm 2 a time window of 8:50 a.m. to 9:10 a.m.
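A minimal sketch of the feasibility check behind such a time-window schedule is shown below; the departure time, travel time and service time are assumed values, not output from Network Analyst.

```python
from datetime import datetime, timedelta

def hhmm(s):
    return datetime.strptime(s, "%H:%M")

# Hypothetical schedule: (stop, earliest, latest), with assumed travel/service times.
windows = [("Farm 1", hhmm("08:00"), hhmm("08:20")),
           ("Farm 2", hhmm("08:50"), hhmm("09:10"))]
travel = {("Farm 1", "Farm 2"): timedelta(minutes=25)}
service = timedelta(minutes=20)  # approximate time spent at each farm (Section 3.1)

def check_schedule(depart, windows, travel, service):
    """Arrive within each window, waiting if early; flag the schedule if any arrival is late."""
    t, prev = depart, None
    for stop, earliest, latest in windows:
        if prev is not None:
            t += travel[(prev, stop)]
        if t < earliest:          # arrived early: wait for the window to open
            t = earliest
        if t > latest:
            return False, f"late at {stop} ({t:%H:%M})"
        t += service
        prev = stop
    return True, f"finished at {t:%H:%M}"

print(check_schedule(hhmm("07:55"), windows, travel, service))
```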

3.5.3 Routing to the Nearest Facility: The Closest Facility tool in Network Analyst can be used to find the route to the nearest facility. This is necessary if the truck needs to go to the nearest allocated farm from another farm or from any specified location. This need could arise if the hauler needs to visit another farm to fill his tank, or if, en route to a farm, a hauler receives word that a designated farm is blacklisted or removed from the list of farms from which milk is allowed to be collected. The Closest Facility problem in Network Analyst involves finding the closest facility to a given location. The given location is referred to as an "incident" and the milk farms are referred to as "facilities". The Network Analyst will define the closest facility and also specify the best route. Figure 4 illustrates such a route.

In a similar fashion, Network Analyst can be used to define the route to a facility or facilities that can be reached in the shortest time from an "incident".

3.5.4 O.D. Cost Matrix Layer: The O.D. Cost Matrix Layer in Network Analyst can be used to find the closest location to a number of facilities. This could be useful, for instance, in setting up a depot for the milk hauling trucks. It also defines the order in which the farms should be visited. The components of this layer include the following:
1. Origins feature layer: This layer stores the network locations that are used as potential origins or starting points.
2. Destinations feature layer: This layer stores the network locations that are used as destinations.
3. Barriers feature layer: This layer is used to specify a point through which a route is not possible.
4. Lines feature layer: This layer stores the results and the attribute table of the OD cost matrix.

Figure 5 illustrates an application of the O.D. cost matrix layer to establish a depot.

References

1. "Emission mandates", Bulk Transporter, September 2006, p. 22.

2. "Financial Plan: Market implications impact transporters", Bulk Transporter, September 2006, pp. 24-25.

3. Janjirala, P., Sivaguranathan, S., Alexander, S. M. and Evans, G. W., "Improving the logistics of milk collection via simulation", Proceedings, Industry, Engineering and Management Systems Conference, March 12-14, 2007, pp. 602-605.


Table 2 : GIS routing based on Time Windows

1. Start at Farm 1: Time Window: 8:00 AM - 8:20 AM


2. Go South West on CRENSHAW CASSADY RD toward S GILBERT RD; Drive
< 0.1 mi
3. Turn right on BOBBY MCCANDLESS RD; Drive 1.3 mi
4. Turn right at MCCANDLESS RD to stay on BOBBY MCCANDLESS RD; Drive
0.3 mi
5. Make U-turn and go back on BOBBY MCCANDLESS RD; Drive 0.3 mi
6. Turn right on MCCANDLESS RD; Drive 0.6 mi
7. Turn right on CRAIG RD; Drive 0.6 mi
8. Make U-turn and go back on CRAIG RD; Drive < 0.1 mi
9. Make sharp right on STATE HWY 70; Drive 8.1 mi
10. Turn right on JACKSON HWY; Drive 2.4 mi
11. Turn right on UNNAMED STREET; Drive 0.3 mi
12. Continue on COVE CITY-BEAR WALLOW RD; Drive < 0.1 mi
13. Bear right on UNNAMED STREET; Drive < 0.1 mi
14. Bear right on OLD LEXINGTON RD; Drive 0.1 mi
15. Make U-turn and go back on OLD LEXINGTON RD; Drive < 0.1 mi
16. Turn right on STATE HWY 1141; Drive 1.1 mi
17. Bear right on OLD GLASGOW RD; Drive 1.4 mi
18. Make U-turn and go back on OLD GLASGOW RD; Drive < 0.1 mi
19. Turn right on JEWELL DR; Drive 0.2 mi
20. Turn right on HOWARD DR; Drive < 0.1 mi
21. Turn right on CHARLIE MORAN HWY; Drive < 0.1 mi
22. Make U-turn and go back on CHARLIE MORAN HWY; Drive 0.5 mi
23. Bear right on E MAIN ST; Drive 0.8 mi
24. Turn right on GUTHRIE ST; Drive 0.4 mi
25. Arrive at Farm 2, on the right; Time Window: 8:50 AM - 9:10 AM.


Figure 1 – Kentucky Farms Covered by the Alan Wilson Hauling Company


Figure 2 – A Close up view of the Roads Layout


Figure 3 – Shortest Route through Farms to Processing Station


Figure 4 – Closest Facility to a given Location


Figure 5 – The use of the O.D. cost matrix layer to establish a depot location
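The depot-location use of the O.D. cost matrix illustrated in Figure 5 amounts to summing origin-to-destination costs for each candidate site and keeping the cheapest one. The sketch below uses made-up candidate sites, farm coordinates and straight-line distances; in practice the costs would come from the Network Analyst O.D. cost matrix.

```python
import math

# Hypothetical candidate depot sites and farm locations (planar coordinates).
candidates = {"Site A": (2.0, 3.0), "Site B": (6.0, 6.0), "Site C": (9.0, 1.0)}
farms = [(3.0, 4.0), (6.0, 1.0), (2.0, 9.0), (8.0, 7.0), (5.0, 5.0), (1.0, 6.0), (7.0, 8.0)]

def cost(a, b):
    # Stand-in for a network travel cost taken from the O.D. cost matrix.
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Build the origin-destination cost table: one row per candidate depot, one column per farm.
od = {name: [cost(site, f) for f in farms] for name, site in candidates.items()}

# Pick the candidate whose total cost to all farms is smallest.
best = min(od, key=lambda name: sum(od[name]))
print(best, round(sum(od[best]), 2))
```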


A Look at Closing the Loop in an


Undergraduate Project Management Class
Stephen Allen
Truman State University
sallen@truman.edu

Abstract

An undergraduate project management curriculum has been designed and coordinated around the typical project lifecycle phases: conception; selection; planning, scheduling, monitoring, reporting, and controlling; and finally evaluation and termination. A project design element is included as a curriculum layover component. Students are given the opportunity to self-select a project to be designed in consultation with the faculty. The projects are an opportunity for a student to design an ongoing project, redesign a past project, or use the design element to design a project in a field of self-interest. Students submit weekly progress updates regarding the project design element to avoid poor design performance. The projects are assessed at the end of the semester by an external review team of professional project management practitioners. The assessment includes a technical component and an oral presentation component. The review team meets with the entire class and concludes with a group debriefing session. The review team then meets with the faculty, providing feedback and means of curriculum improvement.

1. Introduction

Continuous improvement has been a long-standing element of a quality management education. Gone are the days of a pure lecture class format in management and higher education in general [1]. In the late 80's and early 90's, quality improvement efforts to improve management and higher education in general were being suggested by a variety of authors from a variety of applications [2] [3] [4] [5]. At the time, the recently devised Malcolm Baldrige National Quality Award created quality criteria for institutions of higher education [6]. The criteria emphasized continuous improvement in higher education practices.

1.1. Evolution of management education

As business education quality evolved, the concept of the balanced scorecard was met with the approval of business school deans [7]. The balanced scorecard approach had successfully been adopted by the for-profit sectors of manufacturing and service [8]. In brief, the balanced scorecard requires an effort of deciding what is to be included on the scorecard and what development and processes are needed to achieve results.

In 1991, the AACSB (American Assembly of Collegiate Schools of Business) revised its accreditation standards to espouse a mission-linked standard for schools of business. The change in accreditation was a continuous improvement measure to ensure ever-increasing quality measures in business school education. The AACSB organization has continued to refine and improve its accreditation standards and expectations. In 2003, the organization again made improvements with an adoption of revised standards to ensure quality management education from worldwide educational institutions [9].

1.2. Project management education trends

As businesses continue to improve operations and do more with less, qualified management leaders will need the technical tools to achieve business goals. As a consequence of the immediate need for more technically capable and better trained managers, trained managers with project management tools are in high demand and short supply.


Consequently, the project management segment of management education has undergone significant growth over the past ten years [10] to meet these demands.

Project management education in the business curriculum is also changing. Traditional text knowledge, software applications and practices are available to an instructor through the use of simulations, case studies, applied readings, end-of-chapter assignments, and essay questions. The typical materials, however, fail to close the loop on a significant project management learning experience. Continuous improvement efforts have been made to address the need for significant project management experience.

One continuous improvement effort is the use of Project-based Learning (PbL) techniques in the nonprofit sector [11]. This approach is an excellent example of continuous improvement in project management education in that project management practices are applied external to the course. A shortcoming of the practice is the likelihood of loss of control of the curriculum basis, as the approach, while student-learning centered, is also student driven to a greater extent.

Another significant continuous improvement effort is the formation of competitive teams in the class with the objective to design the robbery of a jewelry store [12]. Using a project management perspective, teams design an action plan to accomplish the jewelry store theft. While the application is innovative, the effort is hardly practical and also fails to close the loop on project management education within the business curriculum.

1.3. Moving forward

How then does one close the loop on project management education? Theories must still be understood. Tools must still be acquired. However, critical thinking must be applied, critical assessment of work must occur, and finally, feedback from performance and improvement implementation must take place to close the loop. What follows is a suggested process to provide completeness in undergraduate project management education. The process itself follows the typical project lifecycle.

2. Project design components/lifecycle

Students begin to better grasp elements of project management and the project lifecycle through the project design experience. The students will go through the process of project conception, selection, planning and scheduling, including risk assessment. In some cases the designed projects are executed, providing an acid test of the project design. For most students, the course requirement takes them through the scheduling and risk assessment stages. Students must also rely on themselves to create a major, original work utilizing their management skills and tools. Further, students must also design appropriate visual aids to convey their design message to an external audience of professional project management practitioners.

2.1. Learning by design

The project to be designed is chosen by the student in consultation with the faculty. Students are asked to identify potential design projects worthy of project management practices. Once several prospective projects have been identified, the faculty member's role is to help guide the project selection with parameters such that the chosen project is sufficient in complexity to warrant project management practices and yet not so complex as to prevent a timely design within the scheduled time constraints. Students must secure faculty approval before the design may progress.

Once a project has been approved, the next phase gate in the design process is the identification and approval of major project phases. Here again, the faculty acts as a consultant and a quality check in the major phase identification. Once approval is granted, the student takes on full charge of the project design. A Work Breakdown Structure is utilized to identify all needed tasks and assignable work packages to complete the project. Order precedence, time estimates and needed resources are a part of the design. See Figure 1.

A key element of the project design is the risk analysis and mitigation plan. Risk assessments are made for the project elements of scope, schedule and budget. Students assess these elements in their projects for risks and risk


severity. Risk elements of high impact/high severity, medium impact/high severity, medium impact/medium severity, and high impact/medium severity require a risk mitigation plan to be designed.
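The mitigation rule above can be expressed as a simple lookup, as sketched below; the impact/severity combinations mirror the list in the text, and the risk register entries are invented examples.

```python
# Impact/severity combinations that trigger a mitigation plan (per the rule above).
NEEDS_MITIGATION = {("high", "high"), ("medium", "high"),
                    ("medium", "medium"), ("high", "medium")}

def needs_mitigation_plan(impact, severity):
    return (impact.lower(), severity.lower()) in NEEDS_MITIGATION

# Hypothetical risk register entries: (risk, impact, severity).
register = [("Key supplier delay", "high", "medium"),
            ("Scope creep on reporting module", "medium", "medium"),
            ("Minor venue change", "low", "low")]

for risk, impact, severity in register:
    flag = "mitigation plan required" if needs_mitigation_plan(impact, severity) else "monitor only"
    print(f"{risk}: {flag}")
```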
Students also perform Monte Carlo simulations to assess the uncertainty of project completion based on their task time estimates.

Use of professional project software, such as Microsoft Project, is required as a part of the design element. A total budget, time estimate and critical path are required analysis items.
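To illustrate the kind of Monte Carlo analysis the students perform, the sketch below samples triangular task durations for a small, hypothetical precedence network and reports the spread of the resulting completion time; the tasks, estimates and dependencies are invented for illustration, not taken from an actual student project.

```python
import random
import statistics

# Hypothetical tasks: (optimistic, most likely, pessimistic) durations in days,
# plus the predecessors each task must wait for.
tasks = {
    "A": ((2, 4, 8),  []),
    "B": ((3, 5, 9),  ["A"]),
    "C": ((1, 2, 4),  ["A"]),
    "D": ((4, 6, 12), ["B", "C"]),
}

def simulate_once():
    """Sample each task's duration and propagate earliest finish times through the network."""
    finish = {}
    for name, ((lo, mode, hi), preds) in tasks.items():  # insertion order respects precedence here
        start = max((finish[p] for p in preds), default=0.0)
        finish[name] = start + random.triangular(lo, hi, mode)
    return max(finish.values())  # project completion time

runs = sorted(simulate_once() for _ in range(10_000))
print(f"mean completion: {statistics.mean(runs):.1f} days")
print(f"90th percentile: {runs[int(0.9 * len(runs))]:.1f} days")
```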
Figure 1. Approval Process (flow: Student, Project Conception, Approval, Major Phase Selection with Faculty, Approval, Project Design)

2.2. Visual aid design

Students design a visual aid package to present and document their project design work. The design work covers three main areas: project planning, project scheduling and project analysis. Topics covered include the project purpose and key deliverables, applied resources, a top-level schedule for major milestones, action plan, responsibility chart, budget, critical path and risk analysis. The visual aid package must successfully integrate the design phase with the analysis phase into a unit that is presentable within the presentation time frame.

2.3. Completing the package

Once the final project has been designed and the visual aid package has been completed, students are asked to orally walk through their presentation. The walk-through serves many purposes, including a quality check, a time check and a review of good presentation practices. For many students this is the first major project wholly completed by themselves without the support of a team effort.

3. Project design assessment

The project presentation element is planned as a formal presentation with an atmosphere much like that found in the workplace. The assessment outcomes of the presentation are two-fold: a technical project assessment and an oral presentation assessment.

3.1. Assessment measures

One key purpose of closing the loop in the project management class is to assess the technical or scope component of the project design. The projects are of a level assessable by professional project management practitioners. A clear, logical thought process toward task development fitting to the phase is sought. The critical path analysis is vital toward the analysis and assessment.

Additionally, the level and depth of risk analysis and mitigation planning is assessed as part of the technical evaluation. Critical thinking skills are required to effectively assess risk and plan for mitigation of risk elements. The standard School of Business oral assessment rubric is used by the reviewer. A brief training session is conducted by the faculty prior to its use by the reviewers.

3.2. Student presentation session

Students are assigned randomly to sessions and reviewers. One to two students attend sessions in the role of an audience in addition to the reviewer. The session expectation is professional. The student assessment begins at the introduction. The sessions are typically held


in professional conference rooms with multimedia tools available.

3.3. Group debriefing

Once the individual presentations are completed, the students and the team of professional project management practitioners reconvene for a meal and debriefing. At this point in the process, the students are far more relaxed, after having completed their presentations. The team conducts a group debriefing highlighting best practices observed during the presentations. A short presentation of project management at work is presented by the team, with a follow-up question and answer session. Individual project feedback is provided to students during the next class period. It is a long but very beneficial experience, not only for the students but for the team as well.

3.4. Faculty debriefing

At the conclusion of the group debriefing, the team reviews the effective points of the student efforts and then indicates areas of future class improvement. The improvement feedback is timely, to the point, and effective for integration in the next semester's course offering.

4. Conclusions

An undergraduate project management business course could be taught and assessed for student achievement solely in knowledge areas. That same course could benefit from an additional design element which applies the acquired knowledge areas. The additional design element, assessed by the instructor, would add incremental improvement to the student achievement of project management knowledge areas. Extending the design element by having faculty serve the role of consultant to a student project designer, by which the knowledge areas are applied, further improves the understanding of knowledge areas. By having student-designed projects assessed by external professional project management practitioners, the students get practitioner feedback for their design and analysis, thereby closing the loop of learning, applying and improving their project management skills.

One tool of teaching improvement in business schools is the common use of student evaluations. While these evaluations are important and serve a purpose, the evaluations can only provide a faculty member with each student's comparative view of the teaching effort. By use of the faculty debriefing, the faculty member receives an immediate, external validation of the semester's efforts and achievements. Specific areas of future project management topic improvement are also provided by the external review team. Frequently, teaching and course evaluations become available after the start of the subsequent semester or term. By conducting the faculty debriefing with such timeliness, immediate and continuous improvement may be made to the project management curriculum prior to the commencement of the next semester or term.

In summary, the external validation of the undergraduate student-designed projects aids in the application of project management knowledge areas, provides a professional learning and presentation environment, and provides an extraordinarily quick and pointed continuous improvement means for the undergraduate project management course.

5. References

[1] F.D.S. Choi, "Accounting Education for the 21st Century", Issues in Accounting Education, 8, 1993, pp. 423-430.

[2] R. Cornesky, Using Deming to Improve Quality in Colleges and Universities, Madison, WI: Magna Publications, 1992.

[3] R. Cornesky, Implementing Total Quality Management in Higher Education, Madison, WI: Magna Publications, 1991.

[4] D.L. Hubbard, "Can America Learn from Factories?", Quality Progress, 25(5), 1994, pp. 93-97.

[5] D.T. Seymour, "TQM on Campus: What the Pioneers are Finding", AAHE Bulletin, Washington, D.C., 1991.


[6] D. Karathanos and P. Karathanos, "The Baldrige Education Pilot Criteria 1995: An Integrated Approach to Continuous Improvement in Education", Journal of Education for Business, 71(5), 1996, pp. 272-276.

[7] A.R. Bailey, C.W. Chow and K.M. Hadad, "Continuous Improvement in Business Education: Insights from the For-Profit Sector and Business School Deans", Journal of Education for Business, Jan/Feb 1999, pp. 165-180.

[8] R. Kaplan and D. Norton, "Using the Balanced Scorecard as a Strategic Management System", Harvard Business Review, Jan/Feb 1996, p. 76.

[9] AACSB International, Eligibility Procedures and Standards for Experiential Reviews, April 2003, St. Louis, MO.

[10] J.B. Arbaugh, "Project Management Education: Emerging Tools, Techniques and Topics", Academy of Management Learning & Education, 6(4), 2007, pp. 568-569.

[11] T.J. Kloppenborg and M.S. Baucus, "Project Management in Local Nonprofit Organisations: Engaging Students in Problem Based Learning", Journal of Management Education, 28, 2004, pp. 610-629.

[12] E.D. Walker, II, "Introducing Project Management Concepts Using a Jewelry Store Robbery", Decision Sciences Journal of Innovative Education, 15(4), 2004, pp. 65-69.


Information and Telecommunication Technology: Blueprint for


Social-Economic Growth in Developing Nations
Shahram Amiri
Stetson University
samiri@stetson.edu

Abstract

This paper explores the interplay of Information and Communication Technology (ICT) and social-economic growth in developed and developing nations. Digital literacy, ICT infrastructure and the creation of local content are crucial foundations that must be attained before the dream of global availability of knowledge, improved quality of life, commerce, education and health can come to fruition.

Introduction

The Digital Divide

"Visions of a global knowledge-based economy and universal electronic commerce, characterized by the 'death of distance' must be tempered by the reality that half the world's population has never made a telephone call, much less accessed the Internet." OECD (1999), "The Economic and Social Impact of Electronic Commerce: Preliminary Findings and Research Agenda."

As new technologies become a steadfast element of our everyday affairs, they enhance many people's lives, enabling them to be more productive. Technology has always pushed society forward to achieve goals and solve problems, but it has also created challenges, as not all people have access to these technologies. Today, one of these challenges is represented by the digital divide. This term "refers to the gap between individuals, households, businesses and geographic areas at different socio-economic levels with regard to both their opportunities to access ICT and to their use of the Internet for a wide variety of activities (OECD, 2001)."

Today's digital divide can be attributed in part to geography, employment, age, and gender, with primary dependence on income and education. "Easy access to ICT enables people to become richer, and therefore more able to afford still newer technology; it is, moreover, the already well educated who take up lifelong learning opportunities and who, in general, get better services. In short, the educated information rich become richer and the less educated information poor become poorer (OECD, 2001)."

This gap is further delineated among developed, developing, and undeveloped countries based on their ability to provide Internet connectivity and promote digital literacy. The gap is magnified in undeveloped countries where learning and cultural limitations are extremely challenging. However, lack of a developed infrastructure does not represent the sole reason for an unconnected population. Many times the lack of connectivity can be based on information literacy, which keeps some groups underserved.

Accessing information in a global economy leads to a better quality of life. To help reach that goal, developed countries have a responsibility to show developing and undeveloped countries the advantages and advancement opportunities the Internet provides. In order to help these countries, there are several questions that must be addressed to engage everyone in Internet literacy. What is the appropriate technology according to the local conditions? Is it relevant? How would the people put it to use?

A larger issue has become the ever-widening gap between developed, developing and undeveloped countries. Statistics show that Internet connectivity in North America is now 68.2% (Internet World Stats, 2007). In developing countries, such as Europe, the gap still exists, with connectivity at 35.2%; in Africa, the connectivity figure is an alarming 2.7% (Internet World Stats, 2007).


Developing Nations – The Need for ICT Infrastructure

How can catching up affect the underdeveloped countries? For some of the underdeveloped countries, digital poverty is not a result of "naturally lagging behind", but is rather a direct result of the success of the strong industrial nations. We all should agree that the discourse on the digital divide has become a debate of contrasting opinions, but the reality is that governments and the private sector in the underdeveloped countries have to play a decisive role to reduce the digital gap (http://www.acm.org/ubiquity/question/divide.html).

The World Employment Report 2001 emphasizes that literacy and education cannot be by-passed, yet both are vital for reaping the greatest advantages from the emerging digital era. One of the greatest challenges facing underdeveloped countries today is the promotion and implementation of educational programs that include literacy and particularly digital literacy. It is obvious to conclude that the digital divide must then be fought on at least two battlefields: economy and education. With universal access and an Internet-literate workforce, the digital revolution can serve as the engine of economic growth and development.

ICT and Socio-Economic Development

Any combination of the aforementioned ICT would certainly benefit the peoples of the developing nations. There is no magic bullet, no easy answer. There exists only one certainty: if the digital divide is not addressed and bridged through Internet connectivity, and if digital literacy is not encouraged -- and its importance demonstrated -- then the future remains bleak for the majority of the world's population. Digital inclusion and digital literacy hold such great promise for humankind. From medicine to governance to commerce to lifelong learning to democracy, all hold exciting new opportunities, assuming the divide is conquered.

E-Health

E-health is an emerging field at the intersection of medical informatics, public health, and business, referring to health services and information delivered or enhanced through the Internet and related technologies. In a broader sense, the term characterizes not only a technical development, but also a state of mind, a way of thinking, an attitude, and a commitment for networked, global thinking, to improve health care locally, regionally, and worldwide by using information and communication technology (Eysenbach, 2001). Telemedicine may be as simple as two health professionals discussing a case over the telephone, or as complex as using satellite technology and video-conferencing equipment to conduct a real-time consultation between medical specialists in two different countries. Telemedicine generally refers to the use of communications and information technologies for the delivery of clinical care.

Clinical uses of e-health technologies:
• Transmission of medical images for diagnosis
• Groups or individuals exchanging health services or education live via videoconference
• Transmission of medical data for diagnosis or disease management
• Advice on prevention of diseases and promotion of good health by patient monitoring and follow-up
• Health advice by telephone in emergent cases

By helping build the necessary infrastructure, such as satellites, and procuring computers, developing countries would gain access to all types of information, including health and

26
Proceedings of the 2008 IEMS Conference

education on health. According to the former staffs have left for better-paid jobs in the private
UN Secretary-General Kofi, “Too many of the sector or abroad because the work is exhausting
world’s people remain untouched by this and the pay is low.
revolution (Mutume, 2007).” With regard to the benefits of e-medicine, the
Healthcare in under developed countries, ability to head off pandemics resides at one
mostly in Africa, continues to be a big challenge extreme, while simple acts like allowing a child
for the World Healthcare Organization(WHO). to survive a treatable illness or improving life
In addition to so many diseases, a shortage of for an asthmatic sit on the other extreme; no
doctors and medical resources has only matter how an individual from the developed
worsened the healthcare problems in under world feels about this matter, they stand to only
developed countries. For example, Rwanda has a benefit from an improvement in global
population of 10 million people and only 500 medicine. Putting aside all humanity for a
doctors total to provide medical care for moment, e-medicine could accomplish what all
everyone in that country. Cleary, with a ratio of the king’s horses and all the king’s men (i.e.
one doctor per 20,000 people, most of the UN, Red Cross, etc.) could not: reach any
Rwanda citizen will not get the opportunity to person, anywhere on the globe, and provide
consult with a doctor in their lifetime. According medical diagnosis and treatment
to the WHO report in 2006, the Americas, which recommendations, at the least. If cellular phones
account for just 10% of global disease burden, become universal, as hoped, then the cameras
have 37% of the world’s healthcare workers and found on most models could greatly aid in
spend more than 50% of the world’s health diagnosis, for example.
budget (World Health Organization, 2007).
On the contrary, Africa has 24% of the E-Governance
disease burden but just 3% of the health care Medicine is not the only aspect of life that
workforce and accounts for less than 1% of the will be improved as digital inclusion spreads.
health spending (Kumar, 2007). Consequently, Governance will rise to the top. As soon as
many people living in under developed countries people actually have a voice, or a means of
are dying with diseases that could have been communicating over distance, they will begin to
easily prevented and cured if there were speak for themselves. E-governance, as it is
adequate doctors available to treat them. To called, allows for individuals from rural, isolated
make matters worse, most of the doctors in areas to have a say in government. “E-
under developed countries usually leave their governance has to be viewed as a political
homeland to work in an industrialized country in process dealing with reform of governance
search of a higher salary and political stability. towards good governance. And governance
Manuel Dayrit, director of the Department of reform is a slow process requiring engaging with
Human Resources for Health at the World institutions & bringing about attitudinal and
Health Organization, said, “Even if you have the constitutional changes,” according to Vikas Nath
medicine, the vaccines, and the bed nets, you (Nath, 2007).
need health workers to deliver the service E-Democracy, the end goal of the WSIS and
(Kumar, 2007).” Besides emigration of trained a form of e-governance, refers to the use of these
workers, underproduction and internal mechanisms in the democratic process. This
maldistribution in the medical field in Africa are usage varies greatly in form and content (van der
the other factors that contribute to the shortage Graft and Svensson, 2006). E-democracy can be
of health care workers in Africa. Most of the as simple as access by citizens electronically to
health care workers that stay in their homeland government information and services, or can be
do so only because they have a passion for more complex and involve the interaction
saving lives and healing their fellow citizens. between both the citizens and the government
Not only do these physicians have to deal with (Norris, 2007).
political instability in their country and low pay, Some components of e-governance are news,
they also have to work with extremely limited events, and political participation. As more and
resources. Workloads have soared as nursing more people become aware of, and included in,

27
Proceedings of the 2008 IEMS Conference

the world’s events, they will certainly express content will need to be created for different
their opinions and attempt to better things. communities, regions, and peoples.
Imagine: if the people of Darfur were not Distance learning is another example of how
separated by the digital divide, they would most e-education can elevate the world’s information
likely generate more global outcry over their poor. Naturally, issues such as literacy and
predicament. Digital literacy, along with the language barriers, in additional to the actual
necessary ICT infrastructure, grants individuals digital divide, will need to be addressed and
and groups the effectively participate in their remedied before this dream becomes a reality.
own governance. Having said that, distance learning offers the
possibility of a structured, western-style
E-Learning education to the globe’s poorest.
Another improvement made possible by ICT Conversing with other peoples and learning
infrastructure is electronic learning. “E-learning about distant places and cultures is another part
has been defined as the use of new multimedia of e-education. Education sows the seeds of
technologies and the Internet to improve the tolerance. Imagine a world where the poor
quality of learning, to make it accessible to children of Pakistan are no longer limited to
people out of reach of good educational madrasas run by religious teachers; if these
facilities, and to make new and innovative forms impressionable young minds could be taught
of education available to all,” according to tolerance and science, rather then religious
WSIS (World Summit on the Information dogma, the dividends would be incalculable.
Society, 2007). Relevant content is perhaps the This grand dream is within humanity’s reach.
most important aspect of e-learning, for the Wars and genocides, among other scourges, will
foreseeable future. Surely, most people agree become less common as different races and
that knowledge should be made available to all peoples learn about and from, and converse
of the world. However, information which with, each other. Hunter-gatherers in South
seems trivial to modern societies can make America know nothing of the situation in
massive impacts upon developing nations. Darfur, but through connectivity these two
Agricultural and scientific knowledge is isolated peoples could communicate and relate
capable of revolutionizing the Third World. If to one another. So much of the world’s strife is a
knowledge such as meteorological data, farming result of ignorance regarding others. Digital
techniques, livestock and crop husbandry, and inclusion tears down barriers and allows us to
common Western medical knowledge, for see ourselves in others, rather than someone who
example, were openly available to the world’s looks, speaks, worships, or lives differently.
poor, then their quality of life would greatly
improve. Starvation would be reduced, simple E-Commerce
illnesses could be avoided or mitigated, and less Finally, the last major facet of life which
conflict over food and other essentials would digital literacy and inclusion stands to improve
arise as a result of the dissemination of for all is commerce. Some individuals view
knowledge long considered common by commerce and profit with scolding eyes.
developed nations. However, trade and desire for wealth have
Content, is perhaps the most driven humanity to its current pinnacle, and will
underappreciated facet of e-learning. It is take it even higher. By connecting the remaining
absolutely correct that as much information 90%+ of the worldwide population, business
should be made available as is possible. gains a much larger market. One day in the
However, for peoples who are trying to catch up future, the entire world will constitute one
to, and join, the modern world, smaller steps market: an actual “global village.”
may be required. Rather than teaching rural The United Nations also addressed the topic
African children about the American Civil War of e-business. According to their Conference on
or other distant subjects, these same children Trade and Development (UNCTAD), business-
should be taught African geography and farming to-consumer (B2C) e-commerce is still only a
techniques tailored for their region. Local small percentage of all electronic commerce;

28
Proceedings of the 2008 IEMS Conference

unsurprisingly, the report found that developed References


Western countries (i.e. United States, United
Kingdom, Sweden) conducted the majority of 1) Adeogun, Margaret. “The Digital Divide
B2C electronic commerce (UN, 2004). Also, the and University Education Systems in
UN report found that most of the purchases were Sub-Saharan Africa,” African Journal of
digitized products or clothing (UN, 2004). Library, Archives & Information
Obviously, emerging countries and populations Science 13, no. 1 (2003): 115.
have less need for fashion and gadgets than they
do items that will improve the quality of life 2) “Capability Gap,” Governing 20 no11
(e.g. farming equipment, first aid supplies, pest 18 (August 20, 2007); available from
control, et cetera). http://www.governing.com/.

Conclusion 3) “Digital Literacy.” Wikipedia.org


The private sector, or more accurately, the (2007). Cited 20 November 2007.
search for profit, will ensure that more and more
of the world’s population are able to access 4) Eysenbach, G. “What is e-health?” J
markets and purchase goods and services. It Med Internet Res 2001;3(2):e20
seems that the most likely avenue for digital http://www.jmir.org/2001/2/e20/.
inclusion can be found in mobile handset
technology. What remains to be seen is who will 5) Humphrey, John, et al. “The Reality of
capitalize on this tremendous opportunity. E-commerce with Developing Nations:
It is important to remember the key to digital Executive Summary.” (Brighton, U.K.,
inclusion lies in digital literacy and the ability to 2003), 2.
relate pertinent information to ones life. If the
information gathered by the newly developed IT 6) International ICT Literacy Panel (2002).
infrastructure does little or nothing for the Digital transformation: A framework for
intended users, then there is a need to teach ICT literacy (A report of the
literacy to help the underprivileged relate and International ICT Literacy Panel).
use the gathered information to their benefit. Princeton, NJ: Educational Testing
Imagine for one moment that the dream of Service. Retrieved August 18, 2004,
putting a mobile handset or a bare-bones laptop from http://www.ets.org/
in almost every person’s hands has come into Media/Research/pdf/ictreport.pdf.
fruition. What are typically viewed as common
technologies in developed nations will serve as a 7) Internet World Stats (2007). Cited 21
revolutionary catalyst for a majority of the November 2007. Available from
world. Farmers will be able to check prices in http://www.internetworldstats.com/stats.
different markets and obtain weather updates, htm; INTERNET.
rural populations will have access to medical
expertise, people will be able to communicate 8) James, Edwyn. Learning to bridge the
globally, markets will expand rapidly, and digital divide (2001). Center for
democracy will spread: from the obscure farmer Educational Research and Innovation
in Zimbabwe to the futures trader in Chicago; (CERI). Available from
the entire world will benefit from connection to http://www.oecdobserver.org/news/fulls
each other. tory.php/aid/408/Learning_to_bridge_th
Overall, those who utilize connectivity e_digital_divide.html
experience an enhancement in their quality of
life. Digital literacy, ICT infrastructure and 9) Kumar, Pooja. “Providing the Providers:
creation of local content are crucial foundation Remedying Africa’s Shortage of Health
that must be attained before the dream of global Care Workers.” The New England
availability of knowledge can come to fruition. Journal of Medicine 356, no. 25 (June
2007): 2565.

29
Proceedings of the 2008 IEMS Conference

10) Livingston, Stephan. “Diplomacy in the 18) OECD. Understanding the Digital
New Information Environment (2003, Divide (2001). Cited 30 October 2007.
Feb 23).” Georgetown Journal of Available from
International Affairs 4, no. 2: 111. http://www.oecd.org/dataoecd/38/57/18
88451.pdf; INTERNET.
11) Marshall, J.M. “Learning with
Technology: Evidence that Technology 19) Oguya, Vivienne. “Bridging Africa’s
Can, and Does, Support Learning.” May Digital Divide: A Case for CD-ROM
2002. San Diego, CA: San Diego State Technology,” Quarterly Bulletin of the
University. International Association of
Agricultural Information Specialists 51,
12) Matteo, Sonina. “Philly Pushes for Low- no. 3 (2006): 161.
Cost Wi-Fi for Its Poorest Residents,”
Network World 24: 30, 6 August 2007, 20) “Race is on to lay undersea fiber optic
40; and “Houston Mayor White Names cable on eastern Africa coast,”
New Digital Inclusion Project Director,” Technology Review, 1 June 2007
U.S. Fed News Service, 12 March 2007. [electronic article] [cited 10 November
2007]; available from
13) Mutume, Gumisai. “Africa Takes on the http://www.technologyreview.com/Wire
Digital Divide.” Cited 2 December /18814/; INTERNET.
2007. Available from
http://www.un.org/ecosocdev/ 21) Ribble, Mike & Gerald Bailey. “Digital
geninfo/afrec/ vol17no3/173tech.htm; Literacy.” Cited 3 December 2007.
INTERNET. Available from
http://www.educ.ksu.edu/digitalcitizens
14) Naisbitt, John. “The New Media Age: hip/Education.htm; INTERNET.
End of the Written Word?” The Futurist
41 no2 (March/April 2007. 22) Rowe, Julie. International Computer
Driving License (ICDL):International
15) Nath, Vikas. In Digital Governance.Org Digital Literacy Takes Hold. January
Initiative [website] (London, U.K. [cited
2004; available from
10 November 2007]). Available from
http://216.197.119.113/ http://www.certmag.com; INTERNET.
artman/publish/index1.shtml;
23) Trippe, Bill. “Content Management
INTERNET.
Technology: A Booming Market.”
Econtent ((2001, Feb-Mar.), 20-21.
16) Norris, Donald F. “E-Government and
E-Democracy at the American 24) United Nations Conference on Trade
Grassroots.” International Conference and Development. E-Commerce and
on Public Participation and Information Development Report 2004. New York,
Technologies. [cited October 25, 2007]. 2004: 2: 4.
Available from http://www.umbc.edu/
mipar/documents/- 25) van der Graft, Paul, and Jorgen
EGovernmentandEDemocracy.pdf Svensson. “Explaining eDemocracy
development: A quantitative empirical
17) OECD. Digital Divide Report 2000. study.” Information Polity 11 (2006):
Retrieved October 30, 2007. Available 123-134.Wade Roush and David Talbot,
from “Beaming Books,” Technology Review
http://www.oecd.org/dataoecd/56/14/39
109, no. 2 (2006).
204469.doc.

30
Proceedings of the 2008 IEMS Conference

Dedicated Account Teams: Worth the Cost?


Gordon Arbogast
Jacksonville University
garboga@ju.edu

ABSTRACT than an adversary and work very hard to make it


easy for the customer to do business with them.
This research paper provides valuable insight Customers represent 100% of a company’s
on customer satisfaction. Customer satisfaction revenue and they have a profound impact on
should have the highest priority within any market perceptions of a business. Customers
organization. The research recognizes that can provide more business or leave. They can
additional cost measures must be targeted to tell others about their positive or negative
some customer segments. The intent of the experiences.
research was to validate the additional cost
incurred by the company in its endeavor to Companies may have many different
achieve a higher customer service relationship initiatives in place, but metrics must be used to
for such segments. effectively measure their financial impact.
Businesses must develop effective means to
Specifically, the company studied is evaluate the equity that exits in the customer
primarily engaged in insuring loans made by base. Understanding how to serve the customer
banks, mortgage companies, and other financial best and which segment groups are more
lending institutions. It allocated dedicated teams profitable and strategic allows companies to
to targeted groups of customers to enhance the allocate their resources efficiently.
customer’s perspective on the company’s ability
to respond and nurture their needs. The results of Technology has changed the way businesses
the targeted research were compared to current operate. In many cases, their customers can
methods, processes and procedures typically view the same information as their employees in
being deployed and utilized within other real time. This new technology has created
customer relationships. better customer service in terms of hands on
information for their customers, and a challenge
For the research, null and alternate as well. Employees must be well versed on
hypothesis were established, customer handling real time information and the inquiries
satisfaction surveys were pooled and the raw associated with it. “In a market in which
data was statistically modeled using appropriate technology-enabled customers can now engage
analytical tools and m methods. The research themselves in an active dialogue with service
developed a methodology to study this problem providers- a dialogue that customer’s can
and concluded in a pilot study that the additional control-companies, have to recognize that the
cost incurred by the company to resource customer is becoming a partner in creating
dedicated teams did not achieve a higher value” (Harvard Business Review, p.1).
customer satisfaction report as opposed to
traditional methods being utilized. Service quality is a true differentiator in
business. “Organizations create superior service
1. Introduction through a combination of eight tactics and
Customer relationships are critical to the practices:
success of a business. Businesses need to ensure • Finding and retaining quality people
that they know what their customers want and • Knowing their customers intimately
expect. They must be flexible in meeting those • Focusing their units on
demands; treat the customer like a partner rather organizational purpose

31
Proceedings of the 2008 IEMS Conference

• Creating easy-to-do business with • They recruit, hire, train and promote
delivery systems for service.
• Training and supporting employees • They market service to their
• Involving and empowering customers.
employees • They market service internally to their
• Recognizing and rewarding good employees.
performance and celebrating success • They measure service internally and
• Setting the tone and leading the way externally and make those results
through personal example” available.
(Connellan, Zemke, p. 6).
Driving customer loyalty is not about
Delivering high quality customer service coming up with new marketing messages or
must be a part of everyday performance. jumping through hoops when things go
Customer surveys are a critical part of wrong; it is about creating positive
understanding customer needs. Successful experiences for customers through an endless
organizations have one common central string of needs understood and delivering on
customer-focus. Once good performance is those needs. Turning customers into loyal
delivered consistently, you must improve. advocates is essential for business survival.
Ongoing improvement is equally important. The
Plus One Percent Rule is vital to a company’s Customer surveys are essential in helping
on-going concerns. “The Plus One Percent us target what our customers value most and
Rule is to keep you moving ahead and focused help us align our people and processes to
beyond your vision” (Blanchard and Bowles, p. consistently deliver on those values.
114). All you need to do is improve 1% each “Customer surveys should always answer the
week; if you can do that you are ahead 52% in a questions of who, what, where, why, when
years’ time. “You can make big changes in and how. With these answers we can better
almost anything or achieve great things in your understand our customer’s needs”
life by improving or changing 1%. Things can’t (Willingham, p. 49). The following are some
help but improve if you keep it at 1% each time” key areas we focused on to providing good
(Blanchard and Bowles, p. 117). Progressive customer service.
improvement is necessary at any level of
achievement. It guarantees positive change. • Communication- Do you make it easy
for the customer to communicate? Is
2. Background the phone answered promptly and are
“The process of reorienting a large inquiries handled properly. Do people
organization toward its market is analogous to know where to call? Is the customer
trying to teach an elephant to dance. Many of satisfaction survey used to confirm
the same challenges are involved” (Albrecht and that the staff is perceived by the
Zemke, p. 169). Organizations that have customers as being helpful, courteous
achieved excellence in service share the and knowledgeable?
following characteristics: • Making it Easy and Pleasant- Can the
• They possess a strong vision and customers find what they need when
strategy for service they need it and is there sufficient
• They practice visible management information and help on hand to
• They “talk” service routinely explain how the services/software
• They have customer friendly service works. Are we listening to our
systems. customer’s concerns? Are we
• They balance high tech with high knowledgeable about their business?
touch i.e. they temper their systems • The right quality products for the
with the personal factor. value of money – Not only should

32
Proceedings of the 2008 IEMS Conference

the quality of the service that you people quote the time elapsed since the
provide be measured, but you should unfortunate incident.
check that the products and services • You’re Not Going to Believe This –
that you market are what the customer Those abused by poor customer service
wants and closely match their just can never seem to accept the fact
expectations. Do your customer’s that it happened. They remain shocked
equate your business with securing and continue to agonize.
value for their money? • No Return, No Deposit – Rarely does a
• Speed and Attention- Customers want complaining customer indicate that he
to be dealt with quickly, but or she would return to an offending
attentively. Are we doing everything store.
we can to avoid delays? Good • Free Advertising – The Kind You
businesses try to deal with each Don’t Want – Customer will tell their
customer individually. A quick family members, friends and coworkers
resolution is necessary for every about a bad customer service incident.
inquiry. • Hell Hath No Fury Like a Customer
Scorned – All of the Principles of Bad
The Customer Value Management (CVM) Customer Service could be summed up
approach is a starting point for any analysis, in this simple phrase” (Friedman, p.
which is an outside-in customer vision of the 148).
firm as their ideal provider. “Becoming
customer centered with CVM starts by “The battle to attract and retain loyal
identifying the customers and securing their customers, given today’s technology, will be
visions of ideal outcomes from doing business fought and won based on the ability to
with the company. This includes: identify potentially profitable current and
• How the customer perceives value or future market segments and then meet their
benefit from interactions with company specific needs” (Thompson, p. 52). “Most
products. firms categorize their customers into groups
• What is the optimum level of value that based on the types and volume of products
the customer can envision? they use, not on their margin of profit or on
• What are the attributes of an ideal the basis of common needs and wants”
vendor and of an ideal value delivery in (Thompson, p. 53. Businesses must ensure
order to influence buyer behavior, they mine and analyze their data to better
loyalty and growth” (Thompson, p. understand customers at an individual level,
115). group them into profit and needs based micro-
segments, and access the information for
“The 7 principles of Bad Customer personalized service delivery.
Service” on the other hand state:
• They’re Grateful for the Chance To The ideal customer service survey will
Vent – Customers are always grateful perform the following three functions:
for the opportunity to tell how they • Market Research – provides valuable
were mistreated. feedback to help improve customer
• Tomorrow’s Joke – Many people joke satisfaction levels
to vent their frustration about their last • Marketing – promotes aspects of your
bad customer service experience and business
tell these “jokes” to anyone who will • Advertises a service that you provide
listen! that your customers may not have been
• The Memory of an Elephant – aware of
Customers often don’t forget. Lots of

33
Proceedings of the 2008 IEMS Conference

Measuring customer satisfaction is a • Null Ho : µ2 < µ1 Dedicated


critical part of a businesses mission. account teams do not increase
Developing a good customer survey can help customer satisfaction.
organizations achieve greatness and avoid the • Alternate H1 : µ2 > µ1 Dedicated
principles of bad customer service. account teams increase customer
satisfaction
3. The problem (µ1 and µ2 refer to the mean overall
Specifically, the company studied is customer satisfaction survey result for
primarily engaged in insuring loans made by both Groups 1 and 2).
banks, mortgage companies, and other
financial lending institutions. It has allocated 6. Methodology
dedicated teams to targeted groups of Sample customer satisfaction survey scores
customers to enhance the customer’s are used for three groups of clients. The three
perspective on the company’s ability to groups are the Mid-Tier, Major, and Mega
respond and nurture their needs. clients. The customer satisfaction surveys are
the same for each size client but the level of
The Customer Service Survey used by the service and associated costs are very different.
company focused on servicing three customer The level of service along with its related cost
service segments, Mid-Tier, Major and Mega. increases significantly as the size of the client
The Mid-Tier Segment represents companies increases with the Mega being the highest.
with a number of loans less than 75,000. The
Major Segment represents companies with a The survey is intended to justify that the extra
total number of loans from 75,000 to 999,999. cost for dedicated account teams afforded the
The Mega Segment represents those Mega clients is resulting in extra customer
companies with a number of loans greater satisfaction. Therefore the test is if there is a
than or equal to 1,000,000. Dedicated teams greater than or equal mean score for the Mega
were used only to service the Mega Segment. client than for the average mean score for the
For this reason the Mid-Tier and Major Mid-Tier and Major clients combined. This uses
Segments were combined into Group 1 data, a Likert scale of 1 to 6 to measure overall
while Group 2 was the Mega Segment data. customer satisfaction for each group. One is the
The problem studied was to determine if the lowest customer satisfaction rating progressing
customer satisfaction for the Mega Segment up to six which is the highest customer
justifies the extra costs for dedicated account satisfaction rating.
teams.
To test our hypothesis we used the
4. Objectives Normal five step method as follows:
There were three primary objectives of • Null and alternate hypothesis -The null
the study: hypotheses were stated previously.
• To design a methodology to measure the • Level of significance - A significance
impact of customer dedicated account level of alpha = 0.05 was used.
teams on customer service. • The test statistics – Choose and
• To test the methodology with some real calculate the appropriate test statistic.
world data (pilot study). • Formulate the decision rule.
• To determine if the pilot study indicates • Take a sample and arrive at a decision.
that dedicated account teams enhance
customer service. The Test is a 2 sample t-test (one tail)
with the rejection region in the upper tail)
5. Study Hypothesis
The hypotheses of the study were as 7. Statistical analysis
follows: The statistical model compares the case

34
Proceedings of the 2008 IEMS Conference

where the population standard deviations are 8. Study Results


unknown and the numbers of samples are small.
This test is often referred to as a small sample Table 1- Statistical Results
test of means. These small sample tests of means
are more stringent and have three required Variable 1 Variable 2
assumptions. “The three requirement Mean 4.93214 5.05714
assumptions are as follows: Variance 0.03254 0.10109
Observations 14 14
• The sampled populations follow the Pooled Variance 0.06682
normal distribution. t Stat 1.27939
• The two samples are from independent P(T< t) one tail 0.106028
populations. T Crit. one tail 1.705617
• The standard deviations of the two
populations are equal” (Lind, Marchal 9. Discussion and Conclusions
and Wathen, p. 330) The decision is not to reject the null
hypothesis, because the t-stat of is too
In this test, the t distribution is used to small. Specifically:
compare the two population means. An
additional calculation for pooled variance is • The test statistic of 1.279 is smaller than
required for computing the statistic. The third the critical value of 1.71.
assumption assumes that the population standard • Also, the p-value of 0.106 is not smaller
deviations are equal. Therefore “the two sample than the chosen level of significance of
variances are pooled to form a single estimate of 0.05.
the unknown population variance” (Lind, • The mean customer satisfaction of the
Marchal and Wathen, p. 331). In this model the dedicated account teams (Group 1) is
weighted mean is computed of the two sample higher than the mean of the groups
standard deviations and this is then used as an without dedicated account teams (Group
estimate of the population standard deviation. 2), but the magnitude of the difference is
Additionally, the weights are degrees of freedom not statistically significant.
that each sample provides. The standard • The evidence from the pilot study
deviations are pooled when the sample sizes are indicates that the mega clients do not
small and also when the population standard have a greater customer satisfaction
deviations are not known. score.
• The belief that dedicated account teams
In this method s was calculated, the sample increase customer satisfaction is not
standard deviation, and substituted for the supported, as the Mega clients do not
population standard deviation. “Because we have a greater customer satisfaction
assume that the two populations have equal score.
standard deviations which the best estimate of • A methodology to measure the impact
that value is to combine or pool all the of dedicated account teams has been
information that we know about the value of the derived.
population standard deviation” (Lind, Marchal
and Wathen, p. 331. Lastly, the degrees of The case could be made that the surveys are
freedom are equal to the total number of items limited and only indicate that mega clients are
sampled minus the number of samples. In the only receiving marginally better service than
pilot study case, N1 and N2 were both 14. the other groups (mid-tier and large).
Therefore: N1 + N2 -2 = 26. However, it would be inappropriate to
conclude that the mega markets do not need
the dedicated teams. It may well be true that
this size of customer may have higher

35
Proceedings of the 2008 IEMS Conference

customer service expectations and therefore


require the dedicated teams. Without the
dedicated account teams and the extra cost and
effort by the company studied, the customers
may well have rated mega clients much lower
on the survey.

10. Recommendation
Other tests and surveys need to be
performed in order to gain better insight into
the phenomena surrounding the value of
dedicated account teams. This study presents
an initial indication that dedicated account
teams may have value for mega sized clients,
but empirical testing and data is needed to be
able to achieve more confidence that this is in
fact the case.

References
[1] Albrecht, K., Zemke. R., Service America, Dow
Jones Erwin, New York, 1992.
[2] Blanchard, K., Bowles, W., Raving Fans,
William Morrow and Co., New York, 1993.
[3] Brackett, T., Peterson, T., Britt, B., Ignelzi, R.,
and Fitzgerald, K., Customer Service: Dedicated
Account Teams, student research paper at
Jacksonville University, Spring 2006.
[4] Connellan, T., Zemke, R., Sustaining Knocks
Your Socks Off Service, American Management
Association, New York, 1993.
[5] Friedman, Nancy, Customer Service Nightmares,
Crisp Publications, Menlo Park, California, 2003.
[6] Gronstedt, Anders. The Customer Century,
Routledge, New York, 2000.
[7] Harvard Business Review on Customer
Relationship Management, Harvard Business School
Press, Boston, 2002.
[8] Lind, D., Marchal, W., Wathen, S., Basic
Statistics for Business and Economics, McGraw Hill,
New York, 2003.
[9] National Press Publications, Exceptional
Customer Service, Shawnee Mission, Ks, 2000.
[10] Rosenbluth, H., McFerrin, D., (1992) The
Customer Comes Second, William Morrow and
Company, New York, 1992.
[11] Thompson, Harvey, (2000) The Customer
Centered Enterprise, McGraw Hill, NY, 2000.
[12] Willingham, R., Hey, I’m The Customer, R.
Willingham, Prentice Hall, Paramus, NJ, 1992.

36
Proceedings of the 2008 IEMS Conference

Process Focus and Commitment: The Key to ISO 9001 Implementation


Michael Bell Vincent Omachonu
University of Miami University of Miami
M.bell4@umiami.edu vomachonu@miami.edu

Abstract the defacto standard for management systems


[1]. A significant cost is incurred by
The management of quality and quality control organizations in obtaining certification to the
is a consideration in all industries. In the 1980s, standard so that it is worthwhile to study the
the ISO 9000 Quality Management System process for implementation of the management
standard emerged as the de facto worldwide system.
framework for quality system certification. The Although many researchers have
ISO 9000 family of standards defines a concluded that ISO 90001 certification does
management system framework which includes have a positive impact on an organization’s
the necessary and sufficient elements for the financial performance, some researchers
systematic management of quality. Although disagree [6], [12]. The environmental factors
many consumers perceive firms with ISO 9000 such as company size [3], industry [4],
certified management systems as providing implementation methodology [8], management
higher quality products, some organizations do commitment [9], and motivation for seeking
not experience positive results from certification [12] have been identified as
implementing an ISO 9000 based quality potential mitigating factors for the lack of
management system. improved organizational performance after
Some studies of the financial attaining ISO 9000 certification. Company size
performance of certified firms have yielded has been an area of particular interest to
conflicting results. Researchers in the Total researchers because the cost of certification is
Quality Management arena have identified some perceived as a much greater burden to smaller
key quality system elements that impact businesses. These companies account for more
organizational performance. Given ISO 9000’s than half of the total employment in the U.S.
widespread use and the economic implications economy according to the United States bureau
of ineffective implementation, an effort was of labor statistics. In addition, 54% percent of
made to better understand the determinants of ISO 9001 registered companies in 1997 had 500
success in ISO 9000 implementation. employees or less [2].
A survey was conducted to analyze the In a 1994 article titled “The Proven Path
implementation process steps in an attempt to to ISO 9000 Certification” the author describes
answer the question “how to best implement ISO the steps that companies take in order to gain
9000 to gain results”. ISO 9000 certification for their quality
The results of this study characterize the management system [7]. Based on the work of
best pathway through the implementation Murakami [7] along with Smith [11] and Jodoin
process. [5] the path to certification can be said to
involve the following nine activities;
Introduction • Gaining management commitment
• Employing external consultants
ISO 9001 is a standard developed by the • Conducting an awareness campaign
International Organizations for Standards meant • Creating an overall quality system manual
to serve as a framework for a quality • Developing a documentation system
management system. This framework is • Training employees on the system
recognized around the world and has grown into • Creating work processes

37
Proceedings of the 2008 IEMS Conference

• Conducting system wide reviews large degree”. The adjective descriptors were
• A pre-assessment audit converted to numerical values and the average
Organizations seeking certification may not use ratings were evaluated.
all of these activities and employ these activities Figure 1 shows the ratings of the
to varying degrees. implementation activities sorted from highest
usage to lowest usage. Examination of the data
Data Collection and Demographics concerning the ISO 9000 implementation
activities indicated that two activities; the use of
The data for this study was collected external consultants and conducting a pre-
over a twelve week period using a survey form assessment audit were the implementation
on the World Wide Web. Survey invitations activities excluded most often. The use of
were e-mailed to members of the Benchmarking external consultants had the largest variation in
Exchange and placed in two consecutive issues rating from respondents.
of the Quality Digest electronic newsletter. A statistical test (paired t-test) was used
The number of responses totaled 153 to determine if there was truly a difference in the
which is relatively small compared to the ranking of the implementation activities. Table
number of people who received the e-mailed 1 shows the differences between the average
invitations and compared to the 150,000 ratings of the implementation activities. The last
organizations that are ISO 9001 registered. Use of Implementation Activies
Based on the number of respondents, this allows 70%

for an 8.87 interval of error at the 95% 60%

50%

confidence level. 40%

However, the results seem to represent a 30%

good stratified sample of organizations based on


20%

10%

company size (small business – 63%, large 0%


System Manual Useful Preassessment Management System Aw areness Documentation Quality External

business – 37%) , years certified ( 9 yrs. or less –


Procedures Audit Commitment Review s Campaign System Management Consultants
System Training

53%, 10 yrs or more – 47%) and internal (52%) Very Large Degree Large Degree Medium Degree
Small Degree Very Small Degree Not Done
vs. external (48%) reasons for seeking
certification. The data should not be biased by
an overrepresentation of respondents in any of Figure 1: Use of ISO 9000
these categories. In addition, the data also implementation activities
represents a mix of working level (65%) and row of the table contains the differences between
senior managers (35%) respondents as well as the average rating for ‘use of external
quality system personnel (55%) vs. other consultants’ and all of the other implementation
respondents (45%). In the sample, 58% were
3yrs or less Mean
Comparisson of Newness of Certication 4 - 9yrs Mean
10yrs or more Mean
from manufacturing industries versus 42% for 4.50
4.00
all others. Finally, the survey represents 70% 3.50
Average Rating

3.00
respondents from the United States and 30% 2.50
2.00

from outside the United States. 1.50


1.00

Approximately 65% of the respondents 0.50


0.00
System Manual

Management
Commitment

Preassessment

Documentation

Management
Procedures

Consultants
Reviews
System

Campaign
Awareness

also implemented Continuous Improvement and


External
Training
System
Useful

Quality
System
Useful
Audit

nearly 40% also implemented TQM, SPC Lean Implementation Activities

and Six Sigma. However, most (82%) had not


applied for quality awards. Figure 2: Certification Timeframe Factor
activities. This activity had the lowest average
Data Analysis usage and is significantly different (based on the
p-value) from the other activities. The rating for
The survey respondents rated their use development of an overall system manual was
of the implementation activities on a scale with a significantly different from all other activities.
descriptor starting from “not done” up to “very The system manual is required for certification

38
Proceedings of the 2008 IEMS Conference

Table 1: Differences in average Table 2: Correlation of implementation


rating of implementation activities activities
and their p-values

Management Commitment
System Useful Mgt. System Preassessm Awareness Mgt. System External
Manual Procedures Commitment Reviews ent audit Campaign Doc. System Training Consultants

Documentation System
Awareness Campaign

External Consultants
Preassessment Audit
Useful Procedures 0.521

Useful Procedures
0.000

System Training
System Reviews
System Manual
Mgt. Commitment 0.383 0.284
0.000 0.002

System Reviews 0.29 0.358 0.68


0.001 0.000 0.000
Preassessment
audit 0.176 0.084 0.276 0.133
System Manual 0.054 0.363 0.002 0.147
0.197 Awareness
Useful Procedures Campaign 0.284 0.196 0.583 0.554 0.189
0.035 0.002 0.031 0.000 0.000 0.039
0.303 0.106
Management Commitment Doc. System 0.398 0.515 0.353 0.469 0.119 0.291
0.013 0.386 0.000 0.000 0.000 0.000 0.196 0.001
0.328 0.131 0.025 Mgt. System
System Reviews Training 0.545 0.641 0.547 0.519 0.228 0.464 0.56
0.008 0.229 0.781 0.000 0.000 0.000 0.000 0.012 0.000 0.000
0.479 0.272 0.174 0.140 External
Preassessment Audit Consultants 0.079 -0.048 0.008 -0.041 0.099 -0.044 0.01 0.005
0.004 0.096 0.277 0.404 0.387 0.601 0.927 0.659 0.283 0.63 0.914 0.955
0.475 0.279 0.172 0.148 0.000
Awareness Campaign Correlation Descriptors
0.000 0.033 0.104 0.158 1.000 0 to .25 = Little to no relationship
0.549 0.352 0.246 0.221 0.074 0.074 .25 to .50 = Fair relationship
Documentation System .51 to .75 = Moderate to good relationship
0.000 0.001 0.063 0.053 0.674 0.592
.75 to 1 = Very good relationship
0.639 0.443 0.336 0.311 0.165 0.164 0.09
System Training
0.000
1.934
0.000
1.744
0.002
1.628
0.003
1.603
0.295
1.458
0.150
1.446
0.385
1.397 1.298
three years or less compared with companies
External Consultants
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 that have been certified for longer than three
and no survey respondents indicated that was not years. This suggests that companies undertaking
done. Table 2 shows the correlations between this effort now are truly getting buy in from their
implementation activity ratings. Again, the use employees and management and no longer see
of external consultant rating has little or no ISO 9000 as a marketing gimmick.
relationship to rating of other activities. The The survey also included a free text area
pre-assessment has a fairly weak correlation to so that respondents could provide comments
management commitment but no significant about their experience becoming certified. A
correlation to other activities. High rating of the significant percentage of these comments (both
management system training has a moderate to positive and negative) were about management
good relationship almost all other activities. commitment and involvement in the certification
A further examination of the data was process. Other areas that were commented on
undertaken to compare the responses in relation related to the amount of paperwork involved in
to three environmental factors; large business vs. certification and the benefits of the process
small business, internally motivated documentation. The final area worth mentioning
certifications vs. externally motivated is related to training the workforce. Comments
certification, and newer certifications vs. older suggest that many times this was done poorly or
certifications. Two of these factors did not have with little understanding of the desired end
a statistically significant impact on the responses result.
for any of the implementation activities. Newer
certifications vs. older certifications proved to be Conclusions
a factor for one activity. Management
commitment for the implementation was rated
Based on the data in the previous
different depending on how long ago the
sections, it can be said that the best pathway
organization was certified. The survey
through the implementation process requires
instrument described management commitment
focus on creating a workable, user friendly
as;
documentation system and training employees
“Management commitment for
on how to create and maintain system
the ISO 9001 certification was evident
documentation. This will provide the most
in that; executives, supervisors and
benefit to the organization. Top management
employees demonstrated support
from the early adopter organizations may not
through their words and behaviors.”
have understood this. Organizations should
Figure 2 shows that the ratings for this
focus on what they will gain from implementing
implementation activity were significantly
ISO 9000 and infusing this idea into their
higher for companies that have been certified for
training process and the right documentation

39
Proceedings of the 2008 IEMS Conference

Organizations that are now following in their small- and medium-sized enterprises in
footsteps are taking those lessons to heart. Even Singapore International Journal of Quality &
small businesses can benefit from this same Reliability Management, Vol. 15 No. 5, 1998,
advice. pp. 489-508

References [10] Singels, J., Ruel, G., and van de Water, H.,
(2001) ISO 9000 series Certification and
[1] Briscoe, J. A., Fawcett, S. E. and Todd, R. H. performance International Journal of Quality &
(2005) The Implementation and Impact of ISO Reliability Management, Vol. 18 No. 1, 2001,
9000 among Small Manufacturing Enterprises pp. 62-75.
Journal of Small Business Management 2005
43(3), pp. 309–330 [11] Smith, B., (1994) A Proven Path to ISO
9000 Registration, Industrial Distribution; Jul
[2] Callaghan, N., and Schnoll, L. (1997) 1994, 83,7, pg. 50
Quality Digest
http://www.qualitydigest.com/aug97/html/cover. [12] Terlaak, A. (2002) Exploring the adoption
html process and performance consequences of
industry management standards: The case of
[3]Haversjo, T., (2000) The financial effects of ISO 9000, University of California, Santa
ISO 9000 registration for Danish companies Barbara
Managerial Auditing Journal 15/1/2 [2000] 47-
52

[4] Heras, I., Casadesus, and M.,Dick, G. P.M.,


(2002) ISO9000 Certification and the bottom
line: A comparative study of the profitability of
Basque region companies Managerial auditing
Journal 17/1/2 (2002) pg 72-78

[5] Jodoin, C., (1998) Getting Started with ISO/


QS 9000 Requirements, Printed Circuit Design
Jun 1998; 15, 6, pg. 46

[6] Morris, P. (2003) Quality and competitive


advantage: An empirical study of ISO 9000
adoption in the electronics industry, Texas Tech
University

[7] Murakami, R., (1994) How To Implement


ISO 9000, CMA Magazine, March 1994; 68, 2
pg. 18

[8] Powers, L. (2003) The velocity of change in


business today: Analysis of a defined method of
implementing ISO 9000 standards in selected
Southern California companies, Pepperdine
University

[9] Quazi, H. A. and Padibjo, S. R. (1998) A


journey toward total quality management
through ISO 9000 certification – a study on

40
Proceedings of the 2008 IEMS Conference

Hydrographic Mission Planning Prototype Combining


GIS and AutoSurvey™

Donald Brandon, Brian Bourgeois, Ashley Morris


Naval Research Laboratory, Stennis Space Center
dbrandon@nrlssc.navy.mil

Abstract The next section gives an overview of


historical and current mission planning
This paper discusses an approach to approaches, and explains why a new system is
hydrographic mission planning that utilizes a being developed. Section 3 introduces the
commercial-off-the-shelf (COTS) GIS product COTS GIS product selected as the basis for the
and the AutoSurvey™ Planner (ASP). It new system. Section 4 discusses the
introduces the two systems and presents how AutoSurvey™ Planner, a set of tools used to
joining them to create the Hydrographic Mission provide adaptive planning for hydrographic
Planner (HMP) eases decision making for surveys. Section 5 discusses how merging the
mission planners. It also describes the benefits two systems provides advantages to
that exist over older systems and approaches. hydrographic mission planners, and section 6
Finally, we explain how the GIS products will introduces some additional tools to further
be used to build custom tools that will further enhance the decision making process. Finally,
aid in decision making by improving analysis section 7 offers conclusions and ideas for future
capabilities. work.

1. Introduction 2. Current Techniques of Hydrographic


Mission Planning
The United States Navy is involved with
surveying bodies of water ranging from great Typically, our mission planning tasks are for
ocean depths to the shallows of harbors and either hydrographic or bathymetric surveys.
approaches. The cost of these surveys can run Bathymetric surveys are categorized as surveys
upward of sixty thousand dollars daily, so with depths 200 meters or greater that are not
detailed planning of the missions in advance is required to adhere to IHO standards. Because of
essential in order to properly allot manpower, this, there are fewer restrictions, and planning
resources, and ship time. The planning must requires fewer parameters, thus fewer decisions
follow a strict set of guidelines that changes by the planner. Hydrographic surveys need to
depending on many factors. These factors may follow various accuracy requirements, such as
include: standards set forth by the International those laid out in the IHO sp44. They may
Hydrographic Organization (IHO) in Special include surveys with depths that are greater than
Publication 44 [1], restrictions or limitations of 200 meters, which would be classified as order 3
available vessels and sensors, or environmental surveys. Order 2, order 1, and special order
restrictions. Because of this dynamic set of surveys add additional difficulties to the
guidelines, mission planners face a variety of decision making process. The IHO standards
complex decisions daily, and would benefit from require certain sensors and coverage be used
a system that could provide assistance towards depending on depth and order of the mission.
making more informed decisions for Figure 1 explains the difference between
hydrographic planning. bathymetric and hydrographic surveys.

41
Proceedings of the 2008 IEMS Conference

[Figure 1: depth/order diagram – Special Order; Order 1 (<= 100 m); Order 2 (< 200 m); Order 3 (200 m +); each survey classified as hydrographic or bathymetric.]

Figure 1 – This image describes the breakdown of bathymetric and hydrographic surveys and shows that the complexity of the plan changes as the requirements change.

Early approaches to mission planning used navigational charts, pen, paper, and relied totally on human calculations. Mission planners would determine the area to be surveyed on the chart, construct a survey line through the area, and determine a depth along that line based on the depths on the chart. From that depth, as well as from the specifications of the sensor being used, they would determine the line spacing and lay out the remaining lines. They would then use the length of the survey lines and estimated vessel speed to make a prediction of the initial survey time. To this value an additional percentage, generally ten to twenty percent, was added to determine the overall estimate of survey time.

The disadvantages of this technique were numerous. First, it could become very time consuming depending upon the complexity of the survey. The detail involved in some hydrographic surveys could be immense, depending upon the IHO order of the survey. Also, the accuracy of the surveys was often inconsistent. It was common to see variations in the results of survey plans, even when similar results should be expected. Finally, as the computations were done by humans, they were prone to error.

Currently, there is a wide variety of software that is used to assist mission planners, but only one piece of software actually designed for hydrographic survey planning is available. This system does offer some advantages over the manual methods. The software is GIS capable, which allows it to pull in geospatially referenced data to use for mission planning. It also uses a wizard to go through the steps to generate the appropriate track lines, thereby reducing some of the steps in decision making. Finally, it generates a survey report that defines the survey estimations. However, this software has reached the end of its lifespan. It was built on a framework that has become obsolete and is no longer supported, and the Naval Oceanographic Office (NAVO) has removed it from its list of approved software. Therefore, an adequate replacement will be needed in the near future, and the combination of the Commercial Joint Mapping Toolkit (CJMTK) and the AutoSurvey™ Planner (ASP) is our suggested approach.

3. CJMTK

CJMTK is a collaborative effort between Environmental Systems Research Institute (ESRI) [2] and the National Geospatial-Intelligence Agency (NGA) that provides approved federal government programs a suite of ESRI software to use for mission planning applications.

CJMTK includes the ArcGIS Desktop software. This environment allows for the management and display of a variety of geo-referenced data, including vector and raster datasets, scanned images, satellite imagery and more.

CJMTK also includes useful extensions, such as Spatial Analyst, 3D Analyst, and the Military Overlay Editor. These extensions provide advanced data manipulation and analysis capabilities to the ArcGIS Desktop.

Every function of the ArcGIS Desktop was built from the ArcEngine framework. CJMTK includes various Software Development Kits (SDK) that allow developers access to the functionality of ArcEngine. This allows developers to create custom tools and commands that can be integrated into ESRI's ArcGIS Desktop environment. Furthermore, ArcEngine provides the capability to develop stand-alone applications that operate independently from the ArcGIS Desktop.
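To make the manual estimate described above concrete, the calculation can be sketched in a few lines. The function below is illustrative only; the names, units, and the 15% contingency value are assumptions for the example and are not taken from the ASP or NAVO software.

    # Minimal sketch of the manual estimate described above: total line length
    # divided by vessel speed, plus a 10-20% contingency. Names and units are
    # illustrative, not part of the original planning tools.
    def estimate_survey_time(line_lengths_km, speed_knots, contingency=0.15):
        total_km = sum(line_lengths_km)
        speed_km_per_hr = speed_knots * 1.852      # 1 knot = 1.852 km/h
        base_hours = total_km / speed_km_per_hr    # time spent running the lines
        return base_hours * (1.0 + contingency)    # add the planner's margin

    if __name__ == "__main__":
        lines = [12.0, 12.0, 11.5, 11.5, 10.8]     # example track lengths (km)
        print(round(estimate_survey_time(lines, speed_knots=8.0), 1), "hours")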

CJMTK serves as the basis for the decision making capabilities being developed for hydrographic surveys.

4. AutoSurvey™

The ASP [3] is a mission planning system for hydrographic surveys that provides the capability to estimate the most efficient set of track lines for a given area. It is built on top of a library called AutoSurvey™, which provides a set of environmentally adaptive line running methods that take information from a current track line and then predict what the track of the next line should be.

AutoSurvey™ provides three methods of line manipulation: the Adaptive Parallel (AP) method, the Linear Regression (LR) method, and the Piecewise Linear (PL) method. Fig. 2 shows an example of each method.

[Figure 2: three panels of estimated track lines, one per method.]

Figure 2 – (From left to right) The AP method, the LR method, and the PL method. These are estimated track lines for each method covering the same area.

The AP method is a simple method that returns a series of straight, parallel lines whose spacing is based on the minimum depth along the previous track.

The LR method returns a series of straight lines as well, but the constraint of being parallel no longer exists. The spacing and azimuth of the line are determined by the best fit to the previous track's swath width.

The PL method returns a set of lines that is not necessarily parallel or straight. The track line is segmented to get the absolute best fit to the edge of the previous track's swath. In most cases, this method returns the most efficient estimates.

The ASP alone is a useful decision aid. One of the key advantages of the ASP is that it provides these multiple approaches to running lines. By comparing the results of each method for a given area, the most efficient line method can be proposed for that area. Furthermore, if the mission contains survey plans for several areas, the results of each line method for each area can be compared, and a collection of appropriate methods can be obtained to provide the most efficient approach for the entire mission. Table 1 shows an example report to aid in the determination of this collection of methods.

                      Area 1    Area 2    Area 3
Adaptive Parallel
  Time Inside         218.61    208.88    561.66
  Time Outside         11.66      8.33      6.38
  Total Time          230.27    217.22    568.05
Linear Regression
  Time Inside         215.00    203.05    717.77
  Time Outside         11.94      8.05     17.70
  Total Time          226.94    211.11    735.55
Piecewise Linear
  Time Inside         197.29    195.93    351.18
  Time Outside         19.77     19.84     31.06
  Total Time          217.06    215.77    382.25
Most Efficient        PL        LR        PL

Table 1 – This table shows survey estimations for three areas, and for each line method in that area. The time is measured in hours, the distance in kilometers.
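The "Most Efficient" row of Table 1 simply picks, for each area, the method with the smallest total time. A minimal sketch of that selection, using the Table 1 values, is shown below; the dictionary layout is our own illustration, not the ASP report format.

    # Illustrative selection of the most efficient method per area, mirroring
    # the "Most Efficient" row of Table 1 (minimum total time per area).
    total_time_hours = {
        "Adaptive Parallel": {"Area 1": 230.27, "Area 2": 217.22, "Area 3": 568.05},
        "Linear Regression": {"Area 1": 226.94, "Area 2": 211.11, "Area 3": 735.55},
        "Piecewise Linear":  {"Area 1": 217.06, "Area 2": 215.77, "Area 3": 382.25},
    }

    for area in ("Area 1", "Area 2", "Area 3"):
        best = min(total_time_hours, key=lambda method: total_time_hours[method][area])
        print(area, "->", best, total_time_hours[best][area], "h")
    # Expected: Area 1 -> Piecewise Linear, Area 2 -> Linear Regression,
    # Area 3 -> Piecewise Linear, matching the table.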
Typically, only bathymetry data, the areas to be surveyed, and predefined sensor parameters are used by the ASP. Additional data and parameters could further enhance the capability provided to mission planners. Furthermore, the ASP itself does not contain a true GIS component. The system is fed a series of files that define the area and the bathymetry, with coordinates in latitude and longitude. These coordinates are converted into Universal Transverse Mercator (UTM) and then converted back after being run through the ASP. By coupling this system with the CJMTK, it is possible to build a system that not only provides the adaptive line methods of the ASP, but also
allows geospatially coordinated environmental data to be used in the hydrographic mission planning effort.

5. The Hydrographic Mission Planner

The Hydrographic Mission Planner (HMP) is being developed to combine the capabilities of the ASP and the CJMTK to better support the decision making efforts of hydrographic survey planners. The HMP is designed to be an extension to the Mission Planning Mission Monitoring (MPMM) system, which is also being developed by the Naval Research Lab [4, 5]; however, the initial prototype is utilizing the ArcGIS Desktop as authorized by the CJMTK license agreement. Figure 3 shows the current state of the HMP prototype.

[Figure 3: screen capture of the HMP prototype in the ArcGIS Desktop.]

Figure 3 – The HMP components under the ArcGIS Desktop environment. They consist of several tools and commands housed on three toolbars.

The HMP consists of a collection of tools intended to automate a portion of the thought process in order to alleviate some of the decision making associated with survey planning. The tools are categorized according to their functions on three toolbars: the general toolbar, the environment toolbar, and the AutoSurvey™ toolbar. The general toolbar and the environment toolbar are comprised of tools and commands that are an integral part of the MPMM, but are widely used with the HMP.

The general toolbar supplies the user with common GIS tools to interact with data currently visible in the display. These functions include panning, zooming, changing the extent of the display, selecting and clearing features, and measurement functions. All of these tools are standard ESRI components.

The environment toolbar is a collection of custom tools that deal with adding and manipulating environmental data within a project. It allows a user to initialize the geospatial reference of the display, set the area of interest for the mission, and access a variety of data including bathymetry, currents, and Digital Nautical Charts (DNC). Also, the tools that allow the creation and modification of survey areas and track lines using the non-ASP approaches are located on this toolbar.

The AutoSurvey™ toolbar provides a way to interact with the ASP from within the GIS. It includes the ability to start or kill the Cygwin process, a Linux-like runtime environment required on any machine running AutoSurvey™. Also available is a Bash interpretive shell for interaction with the Cygwin environment.

The remainder of the commands on this toolbar provides access to standard ASP dialogs. These dialogs provide the ability to create, edit, simulate, and generate results for the adaptive line running methods.

6. Decision making tools of HMP

In addition to performing standard GIS operations, the HMP makes use of several custom tools that are designed to ease the decision making involved with mission planning. These tools include: the DNC tool for operating with DNC data, the Survey tools for creating and working with survey boundaries and track lines, and Traffic Light Analysis (TLA) tools for determining valid areas of operation.

The DNC tool allows a user to incorporate DNC data into their mission planning project. It supports decision making by providing easier DNC data interaction than was previously available with past systems, and it removes the need for using ArcGIS extensions not included in the CJMTK.

The DNC tool also allows a user to select the path to the DNC data. Once the path is identified, several functions are available that aid the survey planner in selecting and displaying the desired data. This includes
displaying the tile references in the map display, and then applying labels to them. This makes selecting the appropriate library much easier than before, where the process included manually writing the list of libraries. Once a library is selected, its features are listed in a tree view. The features can be pulled from this tree view and inserted into a list whose items will be added to the display. Figure 4 illustrates this tool.

[Figure 4: screen capture of the DNC tool.]

Figure 4 – This image is a screen capture of the DNC tool. It shows features that have been selected (right listbox) from the tree view on the left to be added to the map display.

The survey planning tools are designed to resemble the original software tools that the hydrographic survey planners were using. This was to provide familiarity to the planners through well-known interfaces and methodologies, which reduces the learning curve associated with the new software.

The tools are arranged in two dialogs: one dialog handles the methods that operate on survey boundaries; the other handles the methods for survey tracks. Figure 5 shows the dialog that is used for creating survey tracks.

[Figure 5: screen capture of the track generation dialog.]

Figure 5 – This figure shows the survey planning tool associated with generating track lines. The functions of this tool allow a user to input several parameters to automate the generation of track lines using non-ASP approaches.

The dialog for survey boundaries provides methods that can create, edit, delete, and export the boundaries. A survey planner can create as many boundaries as desired, manipulate the points of those boundaries to reshape them, add additional points to them, or delete them altogether. Once a boundary is created, it can be exported to a format capable of being ingested into the ASP.

The dialog for survey tracks provides the functionality to generate track lines based on the four original algorithms of NAVO's previous planning software. These methods require the user to input parameters based on the method selected, but only the necessary parameters for that method are enabled. This eases confusion as to which parameters are used for a specific method. As with the survey boundary dialog, the tracks can be exported for use in the ASP; however, tracks can be created for only one survey at a time. This is decided by which boundary is active in the survey boundary dialog.

The TLA tools are major decision aids for mission planning because they provide spatial analyses that can be used to determine the validity or availability of a navigable volume based upon a set of rules. Because of the common functionality for all mission planning,
the TLA tools are part of the MPMM project; however, they are very important to hydrographic survey planning. Because of this, the tools will be implemented as part of the HMP prototype.

For example, if a survey is to be conducted only in water deeper than 200 meters, then TLA on the bathymetry would provide a new polygon depicting the unacceptable areas for surveying. Figure 6 illustrates such a scenario.

[Figure 6: TLA result highlighting areas shallower than 200 m.]

Figure 6 – This figure illustrates a TLA on an area with a depth less than 200m. This is helpful for defining valid operation areas.

Another scenario for TLA would be to break larger areas into smaller ones based on vessel/sensor capabilities, and provide visual boundaries of these new sub-areas. This automates the process of deriving these areas, rather than relying on the manual processes presently used.
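A minimal sketch of the 200 m rule described above is given below, assuming the bathymetry is available as a simple depth grid. The real TLA tools operate on geo-referenced data and produce polygons under a configurable set of rules, not a raster mask; the names here are illustrative.

    import numpy as np

    # Simplified sketch of the 200 m rule: mark grid cells whose depth is less
    # than 200 m as unacceptable for the survey. The real TLA tools work on
    # geo-referenced data and return polygons rather than a mask.
    def unacceptable_mask(depth_grid_m: np.ndarray, min_depth_m: float = 200.0) -> np.ndarray:
        return depth_grid_m < min_depth_m   # True where the survey may not operate

    if __name__ == "__main__":
        depths = np.array([[150.0, 220.0, 305.0],
                           [180.0, 210.0, 450.0]])
        print(unacceptable_mask(depths))
        # [[ True False False]
        #  [ True False False]]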

7. Conclusions

The HMP prototype is being developed to consolidate the tools needed by the U.S. Navy for hydrographic mission planning. It provides GIS capability through the CJMTK program, incorporates the adaptive line generation methods of the ASP, and functions as a decision aid to survey planners. Future work for the HMP will include a framework to decouple it from the ArcGIS Desktop, and the development of new TLA tools to further automate the decision process. We will also provide an ASP environment that does not rely on Cygwin.

8. Acknowledgments

This work was sponsored by ONR under NRL's base research program. Approved for public release, distribution is unlimited. NRL contribution number NRL/PP/7440-08-1003.

9. References

[1] International Hydrographic Organization, "IHO Standards for Hydrographic Surveys, 4th Ed.", Special Publication 44, April 1998.
[2] http://www.cjmtk.com
[3] D. Brandon, B. Bourgeois, "AutoSurvey: Efficiency and Autonomy in Surveying", Hydro International, Vol. 7, No. 10, December 2003.
[4] B. Bourgeois, "Mission Planning Software Support for Underwater Vehicles", Industry, Engineering, & Management Systems 2008, March 2008.
[5] M. Roe, B. Bourgeois, A. Morris, "Unmanned Underwater Vehicle Mission Planning and Mission Monitoring Prototype", Industry, Engineering, & Management Systems 2008, March 2008.
Supply Chain Management: Past, Present, and Future


John J. Burbridge, Jr. and Coleman Rich
Elon University
burbridg@elon.edu richcole@elon.edu

Abstract

The concept of supply chain management is a relatively new business term, first being mentioned in the 1980s. However, prior to the use of the term, firms had to purchase raw materials, process them into finished goods, and then use the necessary logistics strategy to have products delivered to the customer on time at the lowest possible cost. What caused sudden interest in this supposedly new approach to materials management called supply chain?

The authors look at the reasons for the sudden surge of interest in supply chain management. The current state-of-the-art of supply chain management is then discussed. Has this integrated approach delivered on the early promise of the concept? What has worked and what still must be achieved?

1. Introduction

Within a manufacturing environment, supply chain covers sourcing and purchasing of raw materials, processing of those raw materials into finished product, and shipping the finished product through the distribution system to the customer.

If the above defines supply chain, why is it a new term? Manufacturing and service organizations have always had to perform the above functions. However, factors emerged in the late 1970s and early 1980s that led to the concept of an integrated approach that was widely heralded as more effective and efficient. These factors will be reviewed since it is important to understand their influence.

Has supply chain management lived up to its predicted promises? The answer is probably no, but considerable progress has occurred and supply chain practices have made many firms more competitive. A realistic perspective on the current state-of-the-practice and its future will be delivered, including a discussion of sustainability.

The Past

As already indicated, the functional entities that make up today's supply chain have always existed within the industrial organization. For example, purchasing managers were skilled in buying the cheapest raw materials or parts so as to minimize the cost of the product. If lead times were long, inventory would be used as a buffer to ensure production schedules were met. Purchasing managers would be measured by their ability to minimize the costs of raw materials or purchased parts.

The manufacturing function would focus on producing the lowest cost product. Production lines and equipment were selected mainly through cost considerations. The concepts of economic order quantity and economic lot sizes are examples of that philosophy. Often, this mentality led to long production runs and very little manufacturing flexibility.

The distribution function displayed further specialization as traffic, inventory, and warehousing were often considered separate entities. With respect to the traffic function, the traffic manager's most important objective was securing the cheapest rate to move product from point A to point B. Since minimizing cost was the ultimate goal, inventory and other considerations were often ignored in making shipping decisions.

Very little information was obtained from customers or suppliers. In fact, prior to 1980 there was very little momentum at attempting to
determine the needs of customers or providing demand expectations to suppliers. The organization was expected to use forecasting algorithms to predict sales, while suppliers to the organization were expected to anticipate needs.

Why did such an environment exist? Four factors were the major contributors to this situation.

1. Functional excellence dominated management thinking.

Until the 1980s most organization theorists thought that creating an organization where functional excellence was dominant would result in an excellent organization. Certainly higher education, consulting organizations, and management gurus fostered such a belief. Certification also created the illusion that such specialists must have specific knowledge not necessarily shared by general management. This last point is important since it led to the perception that executives responsible for overall performance were not equipped to make decisions in some specialized areas of the firm.

2. Lack of an integrating enabler

The latter part of the 20th century saw the emergence of information technology within the organization. The first application of such technology was the financial functions of the firm. In the 1960s and 1970s, these functions were computerized on large mainframe computers. In the 1970s the emergence of mini-computers allowed the creation of networks amongst the various functional areas. Therefore, it was not until the 1980s that the coordinating mechanisms to allow the effective and efficient transfer of information began to exist. With the continuing advances of information technology in the 1980s and 1990s, the integration of functions began to take place. Also, software suppliers began to realize that a market for such applications existed. The result was the emergence of specialized software for logistics applications.

3. Lack of competitive environment

After World War II, the United States was the predominant player on the world stage in terms of industrial might. Symbolic of this industrial might were the United States automotive and steel industries. The CEOs in these industries were unable to conceive of a future that would include strong global competitors. An interesting contrast of Ford and Nissan [4] and studies on certain industries [8] provide vivid insights into such management thinking.

4. Importance of functional financial accountability

The audit environment at this time emphasized that each functional activity must be prepared to defend its day-to-day decision-making. Therefore, the distribution function within an organization would have to show the same ordering process for receiving product from a manufacturing plant as an outside firm. Audit trails were more important than coordinated product flow.

The 1980s brought about a dramatic change to such thinking. First, Japanese firms emerged, especially in the automotive and steel industries, using lean manufacturing as their mantra. Inventory wasn't something to be optimized; it was to be eliminated under the concepts of lean manufacturing. The Japanese did not have space for warehouses for raw materials and finished product. In-process inventory was considered an eyesore and eliminated.

It became clear to the U.S. automotive companies that they would have to change or face decreased market share or extinction in the marketplace. Lean approaches to manufacturing became commonplace within U.S. automakers. Their financial performance improved. Today, General Motors, Ford, and Chrysler are still facing significant challenges in the marketplace, but this is mainly due to long-term arrangements with retirees and workers resulting in very expensive pension and health care commitments.

Toyota [10] and an influential study on lean production [16] became models for American firms. Interestingly, the approaches the Japanese used such as Kanban did not easily transfer to
the mentality of American manufacturing. However, supply chain consulting firms and others were quick to fill this void. With the emergence of mini and micro-computing, such firms promised superior supply chain performance using information technology. SAP, Manugistics, Accenture, and others were hired to upgrade supply chains through the use of information technology.

In addition, the late 1970s and early 1980s saw the skyrocketing of United States interest rates. Chief executives became acutely aware of the pitfalls associated with having large inventories. The potential of supply chain and reducing overall inventory became highly attractive. Interestingly, even with future decreases in interest rates, management thinking never again embraced arguments for increasing inventory levels.

Management thinking also changed dramatically. One of the most popular business books addressed reengineering business processes [6]. An earlier paper [5] had already called for blowing apart the current approaches to doing business and starting with a clean slate. This thinking requires more than just integrating functional silos. Another approach to reengineering used information technology as an enabler [3].

While the traditional manufacturing and logistics functions lent themselves to process thinking, it was often argued that marketing was outside the realm of such analysis. However, others [15] stressed that marketing would significantly benefit from such thinking and that customer relationship processes must be part of the supply chain.

Coupled with the total quality management movement, the mindset of many U.S. executives changed. The executive no longer could let decision-making reside in the functional silos of the organization. For example, General Electric is an organization where executives became major proponents of reengineering and quality concepts such as six-sigma [13]. In addition, the value chain concept [11] added to this growing awareness.

As a result of these influences, the concept of how to approach the supply chain evolved. The Global Supply Chain Forum identifies the following processes associated with the supply chain [2]:

• Customer relationship management
• Customer service management
• Demand management
• Order fulfillment
• Manufacturing flow management
• Supplier relationship management
• Product development and commercialization
• Returns management

Therefore, supply chain management is not just integrating the logistics functions and adding suppliers and customers but also recognizing that the concepts of business process management must be applied to the above, which in totality compose the supply chain for an organization.

The Present

As we entered the 21st century, supply chain management had become widely accepted as an important concept for firms to embrace. However, the full promise of supply chain had yet to be realized. An interesting approach to the diffusion of supply chain within organizations [14] introduces the following six stages to the achievement of supply chain excellence:

1. Business as usual - the company is working hard to maximize functional entities
2. Link excellence - elimination of boundaries among departments
3. Visibility - bringing to light all links in the supply chain
4. Collaboration - working as a whole along the supply chain to maximize customer satisfaction while minimizing cost
5. Synthesis - synchronization of all supply chain links to form a whole
6. Velocity - synthesis while optimizing the speed along the supply chain.
It is felt that many of the more sophisticated companies have yet to advance past stage two, link excellence. While many firms have embarked on the journey, they do not fully comprehend what is necessary to achieve supply chain excellence. To move beyond stage 2, the firm must take deliberate steps organizationally to make supply chain excellence a dominant strategy of the firm.

Only a few firms have made such a commitment. One example is Wal-Mart. Wal-Mart has been very successful in creating an environment where maximizing customer satisfaction while minimizing the cost of purchased product and inventory levels is also a critical goal. To do so, Wal-Mart has used its size to source globally and dictates its terms to suppliers.

Dell is another firm that has clearly moved beyond stage 2. Dell's strategic thrust in the marketplace has been to create a build-to-order environment which results in high levels of customer satisfaction while only having the necessary inventory to meet the requirements of orders on hand. This strategy has resulted in a lean organization and has required high levels of integration and visibility.

Why has there been this apparent lack of progress during the past twenty-five years? It can be argued that this apparent lack of progress is not that surprising. Industry and organizational change are long and difficult processes. The following are some of the reasons contributing to this current environment:

1. The functional organization is entrenched

The supply chain concept was somewhat radical when initially proposed. While many organizations did attempt to create some form of matrix allowing cross-functional management, this was not an easy transition. Organizations that are in stage two are still addressing these issues.

2. Lack of mechanisms to promote real visibility

While there has been progress concerning information technology to support supply chain excellence, much of the software that has been installed supports link excellence. Interfaces have been developed that have aided visibility, but the entire view of the supply chain does not exist in most organizations. As a result, link excellence is still the major objective. Firms like Dell have made supply chain visibility a priority.

3. Lack of trust

If an organization is to move to supply chain excellence, suppliers and customers must become part of the chain. However, this is a very difficult step since suppliers and customers are more than likely different organizations. Since all firms along the supply chain are expected to maximize their own profit, the overall profitability of the industry is often of secondary interest. The three examples of organizations that have progressed beyond stage 2 are firms that tend to dominate the marketplace and can force suppliers to meet their requirements rather than operate in a truly collaborative environment.

Therefore, it can be argued that certain organizations have made considerable progress but that the steps required to achieve the full potential of supply chain excellence are going to take longer than twenty-five years. In the next section, the future of supply chain excellence is explored.

The Future

Achieving the promise of supply chain excellence will require a significant commitment from the firm. Three inhibitors of advancing beyond stage 2 have already been discussed. These inhibitors will continue to exist as we move forward in the 21st century.

While these inhibitors will continue to be a deterrent, there is no doubt that certain firms and whole industries will move forward towards supply chain excellence. What will be the possible drivers of such excellence? The following are four market forces that could force firms and industries to aspire to supply chain excellence.
1. Commodity industry

The principal goals associated with supply chain excellence are the ability to provide superior customer service at the lowest possible cost. If you source your raw materials at the lowest possible price and move those materials through the chain optimally with respect to speed, you should be able to achieve supply chain excellence. Since those industries with commodity products are perpetually attempting to fulfill such a goal, one would expect that such industries may be the first to achieve supply chain excellence. Gasoline, oil, paper, and agricultural products are examples of such industries.

2. Firms desiring to become a dominant player in their industry

As Wal-Mart and Dell have done, certain firms see the potential of achieving the bottom-line benefits of supply chain excellence and pursue such a strategy. As a result, these firms become industry leaders. Wal-Mart saw that leaders in its industry such as K-Mart allowed considerable slack in their supply chains, resulting in an inefficient operation. Wal-Mart devised strategies, processes, and systems where it would grow its stores and then, through its size in the marketplace, dictate its terms to suppliers. Wal-Mart also realized that it would have to continually find lower cost sources of merchandise in its quest to provide value for the consumer. As a result, Wal-Mart has grown to dominate its segment of the retail world.

Dell pursued a strategy of custom-building personal computers and also providing superior online ordering and service. To do so, it needed to create assembly operations throughout the world that could quickly respond to the needs of a customer. As a result of pursuing that strategy and supply chain excellence, Dell was able to dominate the personal computer industry for several years. Recently, the growth of the laptop segment of the industry has caused problems for Dell as consumers wish to feel their purchases before buying.

3. Industry pressures

As the forces of globalization become stronger in the 21st century, more and more firms and industries will become challenged. Clearly, the textile and apparel industries have had to respond in the last thirty years to the pursuit of supply chain excellence in meeting the needs of the consumer. Velocity matters, so firms go to great lengths to understand market trends, design apparel to meet such trends, source the necessary materials, and have final products ready for consumption during the appropriate season.

One problem Asian suppliers have in addressing the needs of markets such as the United States and Europe is the need to get high-end apparel products into specialty retailers to quickly take advantage of style and trend changes. Benetton is an example of such a firm. While Asian suppliers can distribute through mass retailers such as Wal-Mart, boutique shops must rely on a quick response and find sources of supply within their geographical reach.

4. Sustainability

Currently driven by retailers such as Wal-Mart, McDonald's and Starbucks [12], companies are developing strategies and programs for their supply chains to be socially responsible, manage resources efficiently and be good environmental stewards. These sustainable supply chains create a differentiated competitive advantage over current supply chains. Sustainable supply chains will enable companies to develop new recyclable products, reduce costs, and, most importantly, avoid any type of disruption of material supply throughout the supply chain.

It is also important to continue to stress the important organizational challenges facing the firm in its quest for supply chain excellence. These are:

1. Information technology
There is no doubt that information technology has played a big role in firms optimizing links and will continue to do so in the future. Electronic commerce will allow firms within the same industry to communicate seamlessly and be able to view transactions. Enterprise systems such as SAP have allowed individual firms to be successful in breaking down the barriers of disparate systems in the various silos of the business. Today, the software required to achieve supply chain excellence is still expensive and often must be customized.

2. Global sourcing

When the concept of supply chain management first came on the scene, the suppliers of raw materials and parts were often well established and had earned the respect, if not the trust, of the firm. However, the past twenty-five years have seen a radical shift as the search for cheaper raw materials and parts has escalated. Global sourcing creates significant challenges to managing the supply chain. The suppliers of raw material and products from China and other parts of Asia are often not very sophisticated and respond to orders and promises of payment. Such firms have significant difficulties in creating the information network required of an advanced-stage supply chain. In addition, the shipping issues associated with sourcing materials and parts from around the globe can also prove very difficult for a firm that has only handled domestic traffic. These risks and critical success factors to mitigate such risks are the subject of a recent study [1].

3. Organizational change

Finally, changing the organization to embrace supply chain is difficult. The theory and practice associated with a model for transforming an organization [7] can be used to address such issues [9]:

• Establishing a sense of urgency
• Forming a powerful guiding coalition
• Creating a vision
• Communicating the vision
• Empowering others to act on the vision
• Planning for and creating short-term wins
• Consolidating improvements and producing still more change
• Institutionalizing new approaches.

Conclusion

In the last twenty-five years, supply chain management has made considerable inroads as a key management issue that should be addressed by all firms in their quest to be competitive in their industry.

This paper has addressed the key issues and challenges that confront firms on their journey to the achievement of supply chain excellence.
References

1. Braithwaite, Alan, "The Supply Chain Risks of Global Sourcing," Working Paper, Stanford University Center on Global Supply Chain, 2003.

2. Croxton, Keely L., Sebastian J. Garcia-Dastugue, Douglas M. Lambert, and Dale S. Rogers, "The Supply Chain Management Processes," The International Journal of Logistics Management, Vol. 12, No. 1 (2001), pp. 1-19.

3. Davenport, Thomas H., Process Innovation: Reengineering Work through Information Technology, Boston, MA: Harvard Business School Press, 1993.

4. Halberstam, David, The Reckoning, New York: William Morrow & Co., 1986.

5. Hammer, Michael, "Reengineering Work: Don't Automate, Obliterate," Harvard Business Review, Vol. 68, No. 4 (1990), p. 104.

6. Hammer, Michael and James Champy, Reengineering the Corporation: A Manifesto for Business Revolution, New York: Harper Business, 1993.

7. Kotter, John P., "Leading Change: Why Transformation Efforts Fail," Harvard Business Review, Vol. 74, No. 2 (1995), pp. 59-65.

8. Magaziner, I. and Mark Patinkin, The Silent War: Inside the Global Business Battles Shaping America's Future, New York: Random House, 1989.

9. Mento, Anthony J., Raymond M. Jones, and Walter Dirndorfer, "A Change Management Process: Grounded in both Theory and Practice," Journal of Change Management, Vol. 3, No. 1 (2002), pp. 45-59.

10. Ohno, Taiichi, Toyota Production System, Cambridge, MA: Productivity Press, 1988.

11. Porter, Michael E., Competitive Advantage: Creating and Sustaining Superior Performance, New York: The Free Press, 1984.

12. Duda, Stacy, LaShawn James, Zeryn Mackwani, Raul Munoz, David Volk, and Hau Lee, "Starbucks Corporation: Building a Sustainable Supply Chain," Stanford Graduate School of Business, Global Supply Chain Management Forum, pp. 4-5, May 2007.

13. Tichy, Noel M. and Stratford Sherman, Control Your Destiny or Someone Else Will, New York: Doubleday, 1993.

14. Tompkins, James A., No Boundaries: Break Through to Supply Chain Excellence, Raleigh, NC: Tompkins Press, 2003.

15. Webster, Frederick E. Jr., "The Changing Role of Marketing in the Corporation," Journal of Marketing, Vol. 56, No. 4 (1992), pp. 1-7.

16. Womack, James P., Daniel T. Jones and Daniel Roos, The Machine That Changed the World: The Story of Lean Production, New York: Harper Perennial, 1990.
Changing the Human Dimension in Information Security


Deborah S. Carstens and Stephanie M. Rockfield
Florida Institute of Technology
carstens@fit.edu srockfie@fit.edu

Abstract

Research has indicated that human error makes up as much as 65% of incidents that cause economic loss for a company and that security incidents caused by external threats such as computer hackers occur only 3% or less of the time [7, 8, 10]. However, there is only a minimal effort to address the human error risks in information security, which are among the highest causes of information security incidents [8, 18]. A common challenge faced by individuals today is the need to simultaneously maintain passwords for many different systems in their work, school and personal lives. Since computer security is largely dependent on the use of passwords to authenticate users of technology, the objectives of this manuscript are to (a) provide a background on information security and password authentication and (b) provide a discussion on the results of the password survey administered.

1. Introduction

Research has indicated that human error makes up as much as 65% of incidents that cause economic loss for a company and that security incidents caused by external threats such as computer hackers occur only 3% or less of the time [7, 8, 10]. However, there is only a minimal effort to address the human error risks in information security, which are among the highest causes of information security incidents [8, 18]. A common challenge faced by individuals today is the need to simultaneously maintain passwords for many different systems in their work, school and personal lives. Research suggests that stringent rules for passwords lead to poor password practices that compromise overall security [17]. Human limitations can compromise password security because users are unable to remember passwords and therefore keep insecure records of their passwords such as writing a password down on paper [19]. Since computer security is largely dependent on the use of passwords to authenticate users of technology, the objectives of this paper are to (a) provide a background on information security and password authentication and (b) provide a discussion on the results of the password survey administered.

2. Background

Information Security

Events such as 9/11 and the war on terrorism have also underscored an increased need for vigilance regarding information security. Organizations, government, and private industry are currently trying to adjust to the burden of this heightened need for information security, and, as an example of this, the U.S. Department of Homeland Security [14] has focused particular efforts on ensuring information security. Ensuring effective information security involves making information accessible to those who need the information while maintaining the confidentiality and integrity of that material. There are three categories used to classify information security risks: (a) confidentiality, (b) integrity, and (c) accessibility or availability of information. With the increasing daily reliance on electronic transactions, it is essential to have reliable security practices for individuals, businesses, and organizations to protect their information [15, 16].

The development of security is similar to any other design or development process in that the involvement of users and other stakeholders is crucial to the success of an organization's security policies. Research by Belsis, Kokolakis and Kiountouzis [1] suggests that although
successful security management depends on the involvement of users and stakeholders, knowledge of information systems security issues may be lacking, resulting in reduced participation. Organizations seek to retain knowledge about their operations, but knowledge captured on security-related topics is not necessarily handled with the same consistency and rigor. Carnegie Mellon's Computer Emergency Response Team (CERT) [2] has collected statistics showing that six security incidents were reported in 1988 compared to 137,529 in 2003. Furthermore, CERT [2] reported that 171 vulnerabilities were reported in 1995 in comparison to 7,236 in 2007. In addition, the Federal Bureau of Investigation (FBI) conducted a survey in which 40% of organizations claimed that system penetrations from outside their organization had increased from the prior year by 25% [6].

Password Authentication

Research on passwords is necessitated in spite of a movement towards alternative security techniques, such as bioidentifiers, individual certificates, tokens and smart cards. Smart cards communicate directly with the target system and run the authentication procedure themselves. A survey of 4,254 companies in 29 countries was conducted by Dinnie [5] to identify a global perspective of information security. The survey indicated that password authentication in the USA is the preferred security method, utilized 62% of the time, as opposed to smart card authentication only being used 8% of the time and certificates 9% of the time. In Australia, password authentication is used 67% of the time, as opposed to smart card authentication only being used 9% of the time and certificates 5% of the time. The remaining countries surveyed showed password authentication at 58% with smart card authentication at 4%. A common challenge faced by individuals today is the need to simultaneously maintain passwords for many different systems in their work, school and personal lives. Research conducted by Wiedenbeck, Waters, Birget, Brodskiy and Memon [17] suggests that stringent rules for passwords lead to poor password practices that compromise overall security. Human limitations can compromise password security because users are unable to remember passwords and therefore keep insecure records of their passwords such as writing a password down on paper [19].

Miller's [9] Chunking Theory and Cowan's [4] research are useful to consider regarding human and social factors in organizational security, specifically when developing a model for password guidelines. This theory classifies data in terms of chunks and indicates that the capacity of working memory is 7±2 chunks of information. More recent research suggests that the mean memory capacity in adults is only three to five chunks, with a range of two to six chunks as the real capacity limit. A chunk of data is defined as being a letter, digit, word or different unit, such as a date [9]. Similar to Miller's Chunking Theory, Newell, Shaw, and Simon [11] suggest that highly meaningful words are easier for a person to learn and remember than less meaningful words, with meaningfulness being defined by the person's number of associations with the word. Research suggests that passwords could be more memorable if users composed their passwords of familiar elements such as phone numbers [15]. Straub [13] suggests that memorizing a string of words that represent complete concepts is easier than remembering an unrelated list of words. A study was performed to test passwords between five characters and eight characters in length [12]. The research suggests that increasing password character length to a minimum of six to eight characters reduces crackability and therefore improves password strength in terms of security. Another study suggests that crack-resistant passwords were achieved through the use of a sentence generation password method requiring the user to embed a digit and special character into the password [16]. Carstens, Malone and McCauley-Bell [3] performed a study at a large federal agency to determine whether a difference exists in people's ability to remember complex passwords with different difficulty levels. The results suggest that as long as a password is composed of meaningful information unique to an individual, human memory capabilities enable an individual to recall up to four chunks of data consisting of up to 22 characters.
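As a back-of-the-envelope illustration of the crackability argument above, the brute-force search space grows exponentially with password length. The figures below are an upper bound only, since real-world strength also depends on predictability (dictionary words, reuse, personal information); the character-set sizes are assumptions for the example.

    # Search-space comparison for the length argument above. This is only an
    # upper bound on brute-force effort; predictability matters in practice.
    def search_space(alphabet_size: int, length: int) -> int:
        return alphabet_size ** length

    lowercase = 26
    mixed = 26 + 26 + 10 + 10   # upper + lower + digits + roughly ten symbols

    for length in (5, 8):
        print(length, "chars:",
              f"lowercase only = {search_space(lowercase, length):.2e}",
              f"mixed = {search_space(mixed, length):.2e}")
    # Going from 5 to 8 characters multiplies the lowercase space by 26**3
    # (about 17,576x) and the mixed space by 72**3 (about 373,248x).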

3. Password authentication survey methodology

The purpose of the survey was to determine how the number of passwords an individual has to recall impacts the security of an information system. The following outlines the steps taken in conducting the password authentication survey research:

• Survey Design
• Internal Review Board Approval
• Survey Respondent Identification
• Dissemination of the Survey
• Data Collection
• Interpretation of the Survey Results

The survey design consisted of three sections, which were demographics of survey respondents, work and school passwords, and personal passwords. The survey was submitted to a college internal review board (IRB) to seek and obtain approval to conduct the survey. Survey respondents were identified through recruitment of faculty, staff and students on a college campus. Once the respondents were identified, the survey was administered. The data collected was interpreted utilizing descriptive statistics.

4. Password authentication results

Demographics

The password authentication survey was administered to forty-nine participants. Of the participants surveyed, 53.1% were females and 46.9% were males. When survey respondents were asked to report their occupation, 2% listed themselves as working in administration, 4.1% as staff, 91.8% as a student and 2% as a field outside of those designated above. The survey respondents also listed themselves as 55.1% working part-time, 8.2% working full-time and 36.7% being unemployed. The survey respondents' age categories consisted of 49% being 20 or younger, 42.9% being 21-30, 0% being 31-40, 2% being 41-50, 4.1% being 51-60 and 2% being 61 or older.

Work and School Passwords

The survey participants were asked to answer questions regarding their work and/or school passwords. Collectively, the survey respondents indicated having between 0 and 15 work or school passwords each that they were responsible for remembering. The average number of passwords participants had was 4.12 passwords. The difficulty level of the passwords that survey participants had to remember consisted of 19.6% of passwords comprised of only words, 20% comprised of two or more word passwords, 9.3% comprised of unfamiliar (not meaningful) numbers, 12.1% comprised of familiar numbers (street address, SSN, birth date, etc.) and 39.3% comprised of a combination of numbers, symbols and letters. The data collected indicated that 30.5% of respondents had passwords containing 4-6 characters in length, 51.4% had passwords containing 7-9 characters in length and 18.1% had passwords containing 10 or more characters.

Participants were asked in the survey if their passwords are saved on their computer to enable participants to not have to retype their passwords when logging in to websites. The participants responded that 42.9% of their passwords are saved on their computer. However, 53.1% of the participants do not save their password on a computer and 4.1% did not respond to this survey question. Survey participants, when asked if their passwords are written down on paper for easy recall of passwords, answered that 18.4% of participants do write their passwords down on paper for recall purposes, 79.6% of participants do not write passwords on paper and 2% did not respond. When responding to the question regarding how often their passwords are changed, 6.3% answered monthly, 22.9% answered annually, 62.5% answered never, 8.3% answered "other" and 2% did not respond. Participants were asked a question regarding whether their password has ever been changed to a previous password used in the past. There was no response by 2% of the survey respondents. However, those participants that did respond answered that 50% of the time when their password was changed, the respondents changed it to a password used in the past. When participants were asked how many times in the past month a password had been forgotten, the range was between one and four times a month. Furthermore, survey
participants responded that of the passwords forgotten, 54.2% created the forgotten password themselves, 37.5% were assigned the password and 8.3% were both created by themselves and assigned.

Personal Passwords

The survey participants were asked to answer questions regarding their personal passwords. Collectively, the survey respondents indicated having between 1 and 10 personal passwords each that they were responsible for remembering. The average number of passwords participants had was 4.2 passwords. The difficulty level of the passwords that survey participants had to remember consisted of 23% of passwords comprised of only words, 19.9% comprised of two or more word passwords, 5.7% comprised of unfamiliar (not meaningful) numbers, 13.5% comprised of familiar numbers (street address, SSN, birth date, etc.) and 39.9% comprised of a combination of numbers, symbols and letters. The data collected indicated that 0.44% of respondents had passwords containing 1-3 characters in length, 42.58% had passwords containing 4-6 characters in length, 44.25% had passwords containing 7-9 characters in length and 12.83% had passwords containing 10 or more characters.

Participants were asked in the survey if their passwords are saved on their computer to enable participants to not have to retype their passwords when logging in to websites. The participants responded that 51.1% of their passwords are saved on their computer and 44.9% of the participants do not save their password on a computer. Survey participants, when asked if their passwords are written down on paper for easy recall of passwords, answered that 18.4% of participants do write their passwords down on paper for recall purposes and 81.6% of participants do not write passwords on paper. When responding to the question regarding how often their passwords are changed, survey respondents answered the same as reported for the work and school passwords, which were 6.3% answered monthly, 22.9% answered annually, 62.5% answered never, 8.3% answered "other" and 2% did not respond. Participants were asked a question regarding whether their password has ever been changed to a previous password used in the past. There was no response by 2% of the survey respondents. However, those participants that did respond answered that 51% of the time when their password was changed, the respondents changed it to a password used in the past. When participants were asked how many times in the past month a password had been forgotten, the range was between one and four times a month. Furthermore, survey participants responded that of the passwords forgotten, 60% created the forgotten password themselves, 33.3% were assigned the password and 6.67% were both created by themselves and assigned.

5. Conclusion

The world has been revolutionized by the amount of information that makes its way into the daily lives of individuals and ultimately organizations. It is therefore necessary that research continues to identify specific ways to assist individuals in handling the abundance of information. Although there are many alternatives to password authentication, individuals have multiple passwords that are used daily, as password authentication is still considered to be the most common form of authentication. Therefore, an issue for information security personnel is to reduce the information load faced by individuals trying to maintain numerous passwords for recall. Future research in password authentication practices should continue to explore the human and social factors in organizational security. Organizational and individual password usage as new technology emerges will likely increase the memory demands placed on individuals daily. Research will need to be continuously conducted to keep a pulse on the password demands being placed on individuals and to identify techniques that assist humans with the management of their passwords. Determining the links between password issues and other newly identified issues in human memory limitations, and strategies to reduce the potential for the vulnerabilities produced, will be crucial for organizational success. Until passwords are no longer in use, it is important that practitioners
and researchers continue their efforts as a building block for future research that focuses on the human side of information security, and specifically the human and social aspects of password authentication.

6. References

[1] Belsis, P., Kokolakis, S., & Kiountouzis, E. (2005). "Information systems security from a knowledge management perspective", Information Management & Computer Security, 13(3), 189-202.

[2] Carnegie Mellon Computer Emergency Response Team (CERT). (2007). Computer emergency response team statistics. Retrieved March 25, 2008 from http://www.cert.org/stats/cert_stats.html#incidents

[3] Carstens, D. S., Malone, L., and Bell, P. (2006). "Applying Chunking Theory in Organizational Human Factors Password Guidelines", Journal of Information, Information Technology, and Organizations, 1, 97-113.

[4] Cowan, N. (2001). "The magical number 4 in short-term memory: A reconsideration of mental storage capacity", Behavioral and Brain Sciences, 24(1), 87-185.

[5] Dinnie, G. (1999). "The second annual global information security survey", Information Management & Computer Security, 7(3), 112-120.

[6] Ives, B., Walsh, K., & Schneider, H. (2004). "The domino effect of password reuse", Communications of the ACM, 47(4), 75-78.

[7] Lewis, J. (2003). "Cyber terror: Missing in action", Knowledge, Technology & Policy, 16(2), 34-41.

[8] McCauley-Bell, P.R., & Crumpton, L.L. (1998). "The human factors issues in information security: What are they and do they matter?", Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting, USA (pp. 439-442).

[9] Miller, G.A. (1956). "The magical number seven plus or minus two: Some limits on our capacity for processing information", Psychological Review, 63, 81-97.

[10] National Institute of Standards and Technology (NIST). (1992). Computer System Security and Privacy Advisory Board (Annual Report), 18.

[11] Newell, A., Shaw, J. C., & Simon, H. (1961). Information Processing Language V Manual. Englewood Cliffs, NJ: Prentice-Hall.

[12] Proctor, R.W., Lien, M.C., Vu, K.P.L., Schultz, E.E., & Salvendy, G. (2002). "Improving computer security for authentication of users: Influence of proactive password restrictions", Behavior Research Methods, Instruments, & Computers, 34, 163-169.

[13] Straub, K. (2004). "Cracking password usability: exploiting human memory to create secure and memorable passwords", UI Design Newsletter. Retrieved March 2, 2008, from http://www.humanfactors.com/downloads/jun04.asp

[14] U.S. Department of Homeland Security. (2002). Federal information security management act. Retrieved March 2, 2008, from http://www.fedcirc.gov/library/legislation/FISMA.html

[15] Vu, K.P.L., Bhargav, A., & Proctor, R.W. (2003). "Imposing password restrictions for multiple accounts: Impact on generation and recall of passwords", Proceedings of the 47th Annual Meeting of the Human Factors and Ergonomics Society, USA (pp. 1331-1335).

[16] Vu, K.P.L., Tai, B.L., Bhargav, A., Schultz, E.E., & Proctor, R.W. (2004). "Promoting memorability and security of passwords through sentence generation", Proceedings of the Human Factors and Ergonomics Society 48th Annual Meeting, USA (pp. 1478-1482).

[17] Wiedenbeck, S., Waters, J., Birget, J.C., Brodskiy, A., & Memon, N. (2005). "PassPoints: Design and longitudinal evaluation of a graphical password system", International Journal of Human Computer Studies, 63, 102-127.

[18] Wood, C.W., & Banks, W.W. (1993). "Human error: an overlooked but significant information security problem", Computers & Security, 12, 51-60.

[19] Yan, J., Blackwell, A., Anderson, R., & Grant, A. (2004). "Password memorability and security: Empirical results", IEEE Security and Privacy, 2(5), 25-31.
The New Approach Based on RSM and Ants Colony System for
Multiobjective Optimization Problem: A Case Study

Thien-My Dao, Barthelemy H. Ateme-Nguema, and Victor Songmene


École de technologie supérieure (ÉTS), Canada
thien-my.dao@etsmtl.ca

Abstract

1. Problems and questions addressed

Today's industrial environment is very competitive, and requires shorter manufacturing lead times, lower costs, and higher quality products. These requirements produce complex problems characterized by multiple objectives as well as complex design objective and constraint relations. With a high number of variables, multiobjective optimization (MOO) becomes complex and very expensive, particularly when the computational time is taken into account.

Generally, MOO problems are difficult to solve. Numerical optimization is one of the search methods often used to determine the best solution of MOO problems. Several algorithms have been developed for single-objective problems. However, given the complexity and the multiple objectives of the MOO problems currently found in the manufacturing field, researchers are now focusing more on optimization algorithms for multiple objectives and multiple constraints, that is, optimization problems with a high degree of complexity.

2. Work performed

The response surface methodology (RSM) has been used in the past in the optimization process to reduce the computation time. However, RSM needs a large number of independent variables to construct a proper response surface. The Ants Colony System (ACS), a metaheuristic inspired by studies of ant behaviour, is considered a multi-agent approach for solving combinatorial optimization problems. The ACS has also been used in the past for optimization problems and has proved to be a powerful optimization technique.

In this paper, a hybrid approach combining the RSM and an extension of the ACS to continuous domains is proposed for the solution of continuous MOO problems. The RSM modelling is used to determine the objective functions, and the ACS is then applied to search for the optimum solution. The applications of the proposed approach showed that the hybrid approach requires less time to obtain the optimum solution and that the quality of the solutions is better compared to other techniques developed in this domain. The proposed hybrid algorithm consists of seven steps and is shown in Figure 1.

3. Results and conclusions reached

To demonstrate the high potential of the proposed approach, a case of a multistage flash (MSF) desalination process has been investigated. This is an optimization problem with two performance objectives: maximization of water production (distillate produced: DF) and minimization of blow down flow (BDF) rates. MSF is an industrial process in which ocean water is reheated and a portion is evaporated and then condensed to obtain fresh (salt-free) water. The case study investigated in the paper is a three-stage MSF operation process. There are five (5) factors in the input of the process which principally determine and influence the values of the two performance objectives of the process in the output. The process was modeled with the RSM to establish the mathematical model, and the ACS was then applied to the mathematical model to optimise it and obtain the best solution. The problem had
been simulated in Matlab and the programming powerful and better comparing with the others
execution shows that after 260 iterations (figure techniques, in terms of quality and robustness of
2), the best solution is obtained and the results solution.
show that the proposed hybrid approach is really
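The paper reports only the high-level, seven-step structure of the hybrid algorithm and a Matlab implementation, so the following is a minimal Python sketch of the general idea rather than the authors' code. It assumes a made-up five-factor process in place of the MSF plant model, a weighted-sum scalarization of the DF/BDF objectives, and a rank-weighted Gaussian sampling scheme as the continuous extension of the ACS; all of these choices are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N_FACTORS = 5                                   # five input factors, as in the case study

def process(x):
    # Hypothetical stand-in for the three-stage MSF plant (not the authors' model).
    df  = 10 + 3*x[0] + 2*x[1] - 0.5*x[0]**2 - 0.3*(x[2] - 1)**2
    bdf = 5 - x[3] + 0.4*x[3]**2 + 0.2*x[4]**2 + 0.1*x[0]*x[1]
    return df, bdf

def quad_features(x):
    # Full quadratic basis used by the response surface: 1, x_i, x_i*x_j (i <= j).
    feats = [1.0] + list(x)
    for i in range(len(x)):
        for j in range(i, len(x)):
            feats.append(x[i] * x[j])
    return np.array(feats)

# Step 1 (RSM): run a design of experiments and fit quadratic surfaces by least squares.
X = rng.uniform(-2.0, 2.0, size=(60, N_FACTORS))
Y = np.array([process(x) for x in X])
F = np.array([quad_features(x) for x in X])
coef_df,  _, _, _ = np.linalg.lstsq(F, Y[:, 0], rcond=None)
coef_bdf, _, _, _ = np.linalg.lstsq(F, Y[:, 1], rcond=None)

def fitness(x, w_df=1.0, w_bdf=1.0):
    # Scalarized objective on the fitted surfaces: maximize DF, minimize BDF.
    f = quad_features(x)
    return -w_df * (coef_df @ f) + w_bdf * (coef_bdf @ f)

# Step 2 (ACS-like search): rank-weighted Gaussian sampling around an archive of good
# solutions, a common way to carry ant-colony ideas over to continuous variables.
n_archive, n_ants, n_iter, sigma = 10, 20, 260, 0.6
archive = rng.uniform(-2.0, 2.0, size=(n_archive, N_FACTORS))
scores = np.array([fitness(x) for x in archive])

for _ in range(n_iter):
    order = np.argsort(scores)                  # best-ranked solutions first
    archive, scores = archive[order], scores[order]
    weights = 1.0 / (np.arange(n_archive) + 1.0)
    weights /= weights.sum()
    for _ in range(n_ants):
        k = rng.choice(n_archive, p=weights)    # pick an archive member by rank
        cand = np.clip(archive[k] + rng.normal(0.0, sigma, N_FACTORS), -2.0, 2.0)
        c_score = fitness(cand)
        if c_score < scores[-1]:                # keep it if it beats the last-ranked member
            archive[-1], scores[-1] = cand, c_score
    sigma *= 0.99                               # gradually intensify the search

best = archive[np.argmin(scores)]
print("Best factor setting:", np.round(best, 3))
print("Predicted DF, BDF:", coef_df @ quad_features(best), coef_bdf @ quad_features(best))

The 260-iteration budget mirrors the iteration count reported with Figure 2 but is otherwise arbitrary.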


Figure 1: Seven steps hybrid approach algorithm


Figure 2: Fitness function evaluation according to the iterations


Process Improvement in the Emergency Department at Moses Cone Hospital
Y. Desai, E. Park, P. Demattos, J. Park
North Carolina A & T State University
ysdesai@ncat.edu park@ncat.edu

Abstract

A significant amount of hospital income is contributed by the Emergency Department (ED). However, ED throughput can be adversely affected by non-value added factors such as "bed holding" and "eloping." This study focuses on investigating these factors to improve the working of the ED for an enhanced revenue stream. This research is based on overall ED patient data collected from 2006 to 2007 at the Moses Cone Hospital in North Carolina.

1. Introduction

Emergency Departments (ED) are frontline hospital units that generate 30-50% of total admissions and revenue. Over the last two decades ED visits have increased, with higher charges per visit [1-3]. It has been reported that between 1960 and 2002 the proportion of gross domestic product spent on health care increased from 5.5% to 9.9% in Canada and from 5.3% to 14.6% in the US [4][5]. A significant amount of critical care is performed in the ED before a patient is transferred to the intensive care units (ICUs). In the 2001 US National Hospital Ambulatory Medical Care Survey [6], 19.2% of all ED patients were classified as emergent (patients who should be seen within 15 min), and over 992,000 patients were admitted to an ICU through an ED. Thus ED care of critically ill patients has a significant impact on downstream ICU costs [7]. As such, hospital administrators are interested in minimizing ED costs that may cascade over to subsequent units. Overcrowding is a common phenomenon in the United States. It is responsible for inferior quality of health care and also reduces the efficiency of hospital operations. In this research we identify non-value added factors and their resulting costs within an ED setting.

Two of the contributing factors are "bed holding" and "eloping," which adversely affect both operational effectiveness and costs within an ED. Bed holding can be defined as the unavailability of ED beds due to slow bed turnover "upstream" on the medical/surgical units and ICU. Bed holding negatively impacts the length of time ED patients have to wait for service. People who need inpatient care spend their entire stay in the ED because a hospital bed never opened up. Moreover, beds in intensive care units can be scarce, burdening the ED with caring for critically ill patients over long periods. Several US single-center studies have also documented the extent of critical care delivery in EDs. Fromm and coworkers [8] reported that, during a 1-year study period in a teaching hospital, 154 patient-days of ED critical care were provided, with ED lengths of stay (LOSs) for these patients of up to nearly 11 hours. Nguyen and colleagues [9] estimated that an even greater amount of critical care, 464.4 patient-days, was provided annually in their large urban teaching hospital. Similarly, Nelson and coworkers [10] examined the amount of critical care provided in their urban hospital's ED and ICUs during a 3-month study, and found that 15% of all critical care was performed in the ED. Finally, Varon and coworkers [11] and Svenson and colleagues [12] reported that critically ill patients spent several hours in the ED before transfer to an ICU, and that critical care procedures were commonly performed in the ED. Thus it is desired that admitted ED patients be moved into inpatient beds at the earliest, thereby enabling emergency medical staff to care for more people, which in turn could greatly increase hospital revenue. The wait times for patient transfer to a regular hospital room can vary from 4 to 48 hours. "Eloping" is defined as the loss of waiting patients from the ED due to extended wait times. Eloping occurs at different stages of ED admittance. Commonly, the four stages include: 1. leave pre-triage, 2. leave post triage, 3. leave post RN - pre MD exam, and 4. Against medical advice (AMA) – leave post MD exam. Both of the above described factors (bed holding and eloping) can result in lower hospital income and the loss of an intangible asset - the hospital's reputation. This study utilizes overall ED patient data collected at Moses Cone Hospital in North Carolina from 2006 to 2007. Moses Cone Hospital, which has 535 beds, offers a comprehensive range of medical and surgical services.

2. Moses Cone Health System

Moses Cone Health System is a multi-hospital system with a range of outpatient services designed to provide care for Piedmont residents throughout all stages of their lives. Residents of our primary service area include four neighboring counties and the city of Kernersville. More than 7,400 employees work throughout the Moses Cone Health System and the outpatient services. As a teaching facility for internal medicine, family practice, obstetrics, gynecology and pediatrics, Moses Cone Health System gives patients access to the latest developments in medical care from their first moments of life through later years.

The Moses H. Cone Memorial Hospital, the flagship of Moses Cone Health System, was established in 1953 to serve the community by delivering high-quality healthcare. Located on a 63-acre campus, this 535-bed hospital is the largest medical center in its four-county region. To ensure that the services are convenient for patients, the hospital continues to invest in updating and expanding its facilities. While continuing to grow, modernize and plan for the future, Moses Cone Hospital has consistently kept its rates lower than those of other comparable providers. The Moses H. Cone Memorial Hospital Level-II Trauma Center serves the greater Greensboro community by providing superior care to people who have sustained severe injuries. The trauma center began in 2000 and was recently recertified by the State of North Carolina until 2009. Trauma surgeons are specially trained general surgeons. The trauma team coordinates different medical specialties, including, but not limited to, neurosurgery, orthopedic surgery, plastic surgery, oral and maxillofacial surgery, physical therapy, occupational therapy, speech therapy and rehabilitation services to optimize patient care.

3. Data and Methods

Emergency department patient data collected at Moses Cone Hospital over two years was used for this study. For the year 2006 all twelve months were analyzed, while for the year 2007 data was used for the first 10 months. On average, the total number of patients attending the emergency department per year was around 65,000. Data was further filtered to identify "eloping" patients who left the ED without being seen and treated at the hospital. In order to improve the efficiency of the ED and reduce costs, we identified and evaluated non-value added activities within the different stages of elopement. The elopement data was analyzed on an hourly basis. Further, elopement data was analyzed during peak periods by different stages of elopement.

4. Analysis of Elopement

In the year 2006 a total of 5,974 patients eloped from the ED, while in 2007 a total of 6,070 patients eloped from the ED. Depending on the level of service administered to patients after their arrival at the ED, eloping commonly has four categories. Patients leaving the ED without being directed to triage belong to the first category (1. leave pre-triage). Patients leaving the ED after being directed from the triage to determine the level of patient care needed belong to the second category (2. leave post triage). Patients leaving the ED after meeting with the nurse for an initial check-up and before seeing the MD are categorized as (3. leave post RN - pre MD exam). Finally, patients leaving the ED after diagnosis by an MD against medical advice are categorized as (4. Against medical advice (AMA) – leave post MD exam).


Eloping data was analyzed to observe seasonal trend patterns over the twelve months. We also analyzed data based on age groups and the four eloping categories. Data was analyzed and plotted for both the years 2006 and 2007. Figure 1 shows the monthly data for the total number of patients.

Figure 1. Monthly data for total number of patients

Figure 2 shows the seasonal trend patterns for eloping patients. It can be seen that the number of patients eloping from the ED increases in the spring and summer months. There was also an increase in eloping patients at the year end for 2006. For the years 2006 and 2007, totals of 5,974 and 6,070 patients, respectively, eloped from the ED. On average, 9.5% of total patients eloped every year from the ED.

Figure 2. Seasonal trend for eloping patients at ED

Figure 3 shows elopement based on the different categories of elopement. Out of the total eloping patients, on average 22% leave at the pre-triage stage, 34% leave at the post-triage stage, 23% leave at the post RN - pre MD stage, and 21% leave post MD exam against medical advice (AMA). The length of waiting time in the ED has an influence on the loss of patients at different time periods and categories from the ED.

Figure 3. Elopement based on categories

5. Analysis for "Hourly Elopement Data"

Figure 4 shows hourly elopement data (two-hour time periods) for patients for the months January through May of 2007. These months were picked as they had a higher number of eloping patients.

Figure 4. Hourly Elopement Data for 2007 (two-hour increments)

Figure 5. Eloping patient data for each stage during peak period (10am – 24pm)
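As a rough illustration of the tallying behind Figures 1–5, the sketch below (Python) computes an overall elopement rate and the category and two-hour-period breakdowns from raw elopement records. Only the approximate 65,000 annual visits and the 5,974/6,070 elopement totals come from the text; the sample records and field names are invented for illustration, and the rate computed from the rounded 65,000 figure comes out near 9%, close to the 9.5% average quoted above.

from collections import Counter

# Hypothetical elopement records: (year, hour of arrival 0-23, elopement category).
records = [
    (2006, 11, "leave post triage"),
    (2006, 14, "leave pre-triage"),
    (2007, 19, "leave post RN - pre MD exam"),
    (2007, 22, "AMA - leave post MD exam"),
    (2007, 13, "leave post triage"),
]

TOTAL_VISITS_PER_YEAR = 65000          # approximate annual ED volume from the text
ELOPED = {2006: 5974, 2007: 6070}      # eloped patients reported for each year

# Overall elopement rate per year.
for year, n in ELOPED.items():
    print(year, "elopement rate: %.1f%%" % (100.0 * n / TOTAL_VISITS_PER_YEAR))

# Category breakdown (as in Figure 3) and two-hour arrival bins (as in Figure 4).
by_category = Counter(cat for _, _, cat in records)
by_two_hour_bin = Counter(2 * (hour // 2) for _, hour, _ in records)

for cat, n in by_category.most_common():
    print("%-28s %5.1f%% of eloping patients" % (cat, 100.0 * n / len(records)))
for start in sorted(by_two_hour_bin):
    print("%02d:00-%02d:00  %d eloping patients" % (start, start + 2, by_two_hour_bin[start]))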


Figure 5 shows the number of eloping patients for the different eloping categories. As seen from the graph, the highest elopement occurs during the post-triage stage. For each type of elopement we identified possible reasons in consultation with ED staff and hospital administration.

6. Recommendations

The loss of eloping patients directly impacts the revenue stream and reputation of the hospital. In order to address these issues, possible suggestions were recommended to the hospital. These include:
1. Hire additional PAs and RNs in order to reduce the loss of patients in the earlier stages of the ED cycle during the peak period (10 AM – Midnight)
2. Increase the number of beds available for the ED
3. Create better procedures for the non-insured
4. Use more experienced RNs for triage
5. Improve the training of receptionists

7. Conclusion

Emergency department revenues are hampered by the loss of patients at different stages of admittance, which can be generically termed "eloping." A primary reason for eloping is the long waiting time in the ED. Based on the different eloping categories, the ED will lose different segments of revenue streams. Some factors influencing long wait times include bed holding and MD and staff throughput efficiency per patient. Eloping has negative implications for ED revenue and results in a subsequent loss of goodwill for the hospital. This paper analyzes the loss of patients (elopement) to identify trends and relevant explanations of the same.

8. References

[1] Hospital Statistics (1903-1994 and Previous Years). Chicago, Ill.: American Hospital Association; 1993.
[2] Grumbach K, Keane D, Bindman A. Primary care and public emergency department overcrowding. Am J Public Health. 1993;83:372-377.
[3] McCaig LF. National Hospital Ambulatory Medical Care Survey: 1992 Emergency Department Summary. Adv Data Vital Health Stat. 1995; no. 245. DHHS publication PHS 94-1250.
[4] Canadian Institute for Health Information: National Health Expenditure Trends, 1975–2004, Ottawa 2004.
[5] Health Canada: National Health Expenditures in Canada, Ottawa 1997.
[6] McCaig L, Burt C: National Hospital Ambulatory Medical Care Survey: 2001 Emergency Department Summary. Advance Data from Vital and Health Statistics, Centre for Disease Control and Prevention, National Centre for Health Statistics, 2003, Report No 335.
[7] David T. Huang, Clinical review: Impact of emergency department care on intensive care unit costs, Critical Care 2004, 8:498-502.
[8] Fromm RE Jr, Gibbs LR, McCallum WG, Niziol C, Babcock JC, Gueler AC, Levine RL: Critical care in the emergency department: a time-based study. Crit Care Med 1993, 21:970-976.
[9] Nguyen HB, Rivers EP, Havstad S, Knoblich B, Ressler JA, Muzzin AM, Tomlanovich MC: Critical care in the emergency department: A physiologic assessment and outcome evaluation. Acad Emerg Med 2000, 7:1354-1361.
[10] Nelson M, Waldrop RD, Jones J, Randall Z: Critical care provided in an urban emergency department. Am J Emerg Med 1998, 16:56-59.
[11] Varon J, Fromm RE, Levine RL: Emergency department procedures and length of stay for critically ill medical patients. Ann Emerg Med 1994, 23:546-549.
[12] Svenson J, Besinger B, Stapczynski JS: Critical care of medical and surgical patients in the ED: Length of stay and initiation of intensive care procedures. Am J Emerg Med 1997, 15:654-657.


Development of a Web-Based Manufacturing Education Tool


Daniel J. Fonseca, Terry Brumback, Christopher M. Greene
The University of Alabama
dfonseca@eng.ua.edu, terry.brumback@ua.edu, cgreene@eng.ua.edu

Matthew E. Elam
Texas A&M University-Commerce
matthew_elam@tamu-commerce.edu

Abstract

This paper discusses the development of a web-based educational tool designed to promote manufacturing engineering education among underrepresented high school students. The tool is a component of a comprehensive manufacturing curriculum development initiative created as a response to the impressive growth of the manufacturing sector in the Southeast of the United States. Such growth in the manufacturing activities of the region has stressed the need for a large workforce professionally educated in manufacturing principles and technology.

1. Introduction

As the twenty-first century begins, the demand for an abundant, diverse, and talented manufacturing systems workforce remains strong. Continued growth in national productivity requires a continuous supply of professionals who are highly competent in mathematics, science, and informatics, and who are adaptable to the needs of a rapidly changing profession. While overall employment in manufacturing engineering is expected to increase during the 2000-2010 period, engineering degrees over the same time period are expected to remain stable. The number of 18-24 year olds will grow by three million by 2010, and African Americans, American Indians, and Hispanics will make up almost 60% of the population increase over that time period. The consensus among leaders in the manufacturing community is that the necessary increase in the engineering labor supply will come about only through the development of a more diverse workforce (ACT, 2003).

DeReamer and Safai (2004) also state that employment opportunities in the U.S. requiring science, technology, engineering, and mathematics (STEM) expertise are growing rapidly. From 2000-2010, the growth is expected to be about three times faster than the rate for all other occupations. However, the available domestic STEM labor supply has not been and will not be able to satisfy this growth because of the long-term trend of fewer students entering STEM programs in college, thus threatening the ability of U.S. businesses to compete in the global marketplace. The situation is so dire that the National Science Board has stated that the federal government and its agencies must step forward to ensure the adequacy of the U.S. STEM workforce, and that all stakeholders must mobilize and initiate efforts that increase the number of U.S. citizens pursuing STEM studies and careers.

May and Chubin (2003) mention that the STEM workforce, despite years of efforts to diversify it, remains overwhelmingly white, male, and able-bodied, and the available pool of talented women, minorities, and persons with disabilities remains significantly underutilized. It is an interesting fact that, if individuals from these underrepresented groups were represented in the STEM workforce at the same percentage as their representation in the total workforce population, the shortage in the STEM labor supply would largely be filled. Also, the fact that currently underrepresented groups are projected to increase from about a quarter of the workforce to nearly half by 2050 suggests that the U.S. must cultivate the STEM talents of all of its citizens, not just those from groups that have traditionally worked in STEM fields.
Many efforts have been put forth to increase the percentage of underrepresented populations in the STEM workforce. These include summer and after-school programs for high school students and the integration of STEM applications into high school curricula. The unfortunate outcome of many of these efforts is that they are temporary and disappear once they no longer have funding. Their positive effects then dissipate over time, and the same problems with lack of diversity and shortage of human resources in STEM disciplines resurface.

The fact that women and minorities are underrepresented in the engineering labor force has led to the development of the following three questions:
• What are the psychological mediators of both gender and ethnic group differences in entry into and persistence in the engineering labor force?
• What are the family and school forces that underlie the gender and ethnic group differences in these psychological mediators?
• How do experiences in tertiary educational settings and in manufacturing work settings influence gender and ethnic group differences in entry into and persistence in the engineering labor force? (Eccles and Davis-Kean, 2003).

The state of Alabama is currently experiencing an expansion and enhancement of its manufacturing sector; consequently, the engineering opportunities associated with manufacturing systems are also growing. This is especially valid relative to the automotive industries. Mercedes Benz U.S. International, Inc. and Honda Manufacturing of Alabama, LLC already have operational manufacturing facilities in Alabama. Hyundai Motor Manufacturing Alabama, LLC is completing construction on its first U.S. manufacturing plant in Alabama. With them, several other high-tech businesses have begun operations in this state.

To ensure that the manufacturing sector continues to be a driving force for economic development, Alabama must have an adequate workforce that is not only knowledgeable in manufacturing principles but also technologically educated in advanced technological platforms. This is the main motivator for the development of a web-based tool that assists middle-school and high-school students in learning fundamental concepts in advanced manufacturing technologies.

2. The Tool

This section describes the software developed to help middle and high school students assimilate basic manufacturing engineering concepts. Images of the software screens are presented as figures.

2.1 Welcome Page

Figure 1 depicts the "Welcome" page of the software. This page briefly describes the Manufacturing Engineering Program of the University of Alabama's Department of Mechanical Engineering and the Alabama Partnership for Community Partnership (the funding source for this project), and provides links to a number of manufacturing engineering programs and professional societies. This page also has links to the five modules of the curriculum.

Figure 1. Welcome Page


2.2 Module 1: History

Figure 2 depicts the first page of module one. In this module, the student is introduced to the modern history of manufacturing, and emphasis is given to the role of state-of-the-art software supporting today's modern manufacturing machines.

Figure 2. Module One: History

2.3 Module 2: Computer Integrated Manufacturing

Module two covers computer integrated manufacturing (CIM). It introduces students to the following subjects: CAD/CAM, Computer-Aided Process Planning, Enterprise Resource Planning, CNC, Direct Numerical Control, Flexible Manufacturing Systems, Automated Storage and Retrieval Systems, Automated Guided Vehicles, Automated Conveyance Systems, Robotics, Computer-Aided Quality Assurance, and Lean Manufacturing (see Figure 3).

Figure 3. Module Two: CIM

2.4 Module 3: Programmable Logic Controllers

Module Three discusses the importance of programmable logic controllers (PLCs) in monitoring the automation of industrial processes in a manufacturing facility. The module explains to the students the difference between a PLC and a desktop computer, putting emphasis on the PLC's closed loop system, as well as its sensors to monitor industrial processes and determine changes to ongoing manufacturing processes (see Figure 4).

2.5 Module 4: Robotics

Module Four introduces the concept of industrial robotics. Students are taught about the principles governing robotic performance as well as the different ways in which a robotic system is programmed. In addition, the module devotes considerable time to the description of the main hardware components of a robotic arm (see Figure 5).


Figure 4. Module Three: PLC's

Figure 5. Module Four: Robotics

2.6 Module 5: Bar Codes

The last module (see Figure 6) of the web-based tool involves the study of bar codes. In it, students learn the importance of bar coding technologies in areas such as materials handling, inventory control, quality inspection, and materials dispatching, among others. The students also learn how to identify and generate labels of the most commonly used industrial bar code types.

Figure 6. Module Five: Bar Codes
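As a concrete example of what generating a label involves, the sketch below computes the check digit of a UPC-A bar code, one common industrial symbology a module like this might cover; the function and the sample number are illustrative and are not taken from the tool itself.

def upca_check_digit(first_eleven: str) -> int:
    """Check digit for a UPC-A code from its first 11 digits.

    Digits in odd positions (1st, 3rd, ...) are weighted by 3, digits in even
    positions by 1; the check digit brings the total up to a multiple of 10.
    """
    if len(first_eleven) != 11 or not first_eleven.isdigit():
        raise ValueError("expected exactly 11 digits")
    digits = [int(c) for c in first_eleven]
    total = 3 * sum(digits[0::2]) + sum(digits[1::2])
    return (10 - total % 10) % 10

# Example: the full printable code is the 11 data digits plus the check digit.
data = "03600029145"                       # illustrative 11-digit input
print(data + str(upca_check_digit(data)))  # -> 036000291452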


3. Conclusions

This paper presents internet-based software


for teaching an introductory curriculum in
manufacturing engineering to high school
students. Through the use of modern
technologies, the software is capable of the
following:
• Presenting material in a manner that
addresses all three learning styles (visual,
auditory, and kinesthetic)
• Presenting subject matter to high school
students that is not normally seen until mid-way
through a manufacturing engineering program
by reducing the level of learning to a
comprehension and pace acceptable to a high
school student

4. Acknowledgements
The authors want to thank the Alabama
Partnership for Community Partnership for
funding the development of this tool.

5. References

ACT, (2003), ACT Policy Report: Maintaining a


Strong Engineering Workforce, Information For
Life’s Transitions, Washington.

DeReamer, S.M. and Safai, N.M., (2004),


"Promoting Science and Engineering in Grades
K-12 By Means of a Summer Workshop – A
Universal Model," Proceedings of the American
Society for Engineering Education Annual
Conference and Exposition, Session #1460.

Eccles, J. and Davis-Kean P., (2003), Women,


Minority, and Technology, University of
Michigan, Michigan.

May, G.S. and Chubin, D.E., (2003), "A


Retrospective on Undergraduate Engineering
Success for Underrepresented Minority
Students," Journal of Engineering Education,
Vol. 92, pp. 27-39.


Minimizing Line of Duty Deaths (LODD) for Firefighters through Proposing a Tracking System to Track Fire Fighters at Fire Scenes
Nabeel Yousef and Tarig Ali
University of Central Florida
nyousef@mail.ucf.edu

Abstract

The real-time tracking of firefighters is an important part of firefighter safety at fire scenes. This article presents the conceptual design of a prototype system for tracking firefighters at fire scenes. The system is based on the triangulation technique, where we have a transmitter and two receivers to accurately detect the position of a firefighter at a fire scene. The base unit consists of two reader antennas, fire-scene geo-referencing and management (GeoMan) software, and a laptop computer. The mobile unit consists of an active radio frequency identification (RFID) tag, an electronic compass, and a Bluetooth transmitter. The GeoMan software in the base unit creates a local geo-referencing system at the fire scene, processes the received radio signals to infer the instantaneous locations of the mobile units (firefighters), and displays the real-time locations of these units in the locally defined reference system.

1. Introduction

"To extinguish a fire, it is necessary to remove one or more of the three components of combustion. Removing any of these will not allow combustion to continue. Firefighters work by
• Limiting exposure of fuel that may be ignited by nearby flames or radiant heat
• Containing and extinguishing the fire
• Removing debris and extinguishing all hidden fires to prevent rekindling" [2]

Firefighters' goals are to save life, property and the environment. A fire can rapidly spread and endanger many lives, including the firefighters themselves. Although firefighters have equipment and clothing that can aid in protecting them during a fire accident, fire departments still suffer from firefighter injuries and deaths.

The serious injury or death of a firefighter in the line of duty is a tragedy all members of the fire service dread. The family is disorganized by grief. The community and surviving fire department members are in mourning. The fire department can be thrown into shock. It must, however, continue to provide normal services as well as deal with the serious injury or death.

It is known that fires in enclosed spaces, such as buildings, behave differently to fires in the open air. The smoke rising from the fire gets trapped by the ceiling and then spreads in all directions to form an ever-deepening layer over the entire room or space. During this process, the smoke will pass through any holes or gaps in the walls, ceiling or floor. Smoke will trap the heat from the fire in the building, greatly increasing the temperature. The rising temperature, the reduced visibility, and falling debris endanger the lives of firefighters. Firefighter safety depends on successful real-time tracking of firefighters at fire scenes.

In December 2002 FEMA and the NFPA (National Fire Protection Association) published a study on "A Needs Assessment of the US Fire Service". Part of the study was to identify fire department shortfalls. The NFPA used 26,354 fire departments in the survey; 12,240 completed surveys were received back and 8,416 were analyzed for the assessment report. The purpose of the study was to assess department needs, not the individual firefighters' needs and concerns.

Based on statistics compiled by the U.S. Fire Administration, an average of 100 fire fighters die every year while on duty and tens of thousands are injured. In the last decade several incidents involving firefighter fatalities have brought national attention to the number of firefighters killed in the line of duty. The most recent was the wildfire in California in October of 2007. The following table gives an overview of the number of on-duty deaths and injuries that occurred from 1997 to 2006.


Table 1: On-Duty Deaths and Injuries that Occurred from 1997 to 2006 [2]⁵

Year   Deaths¹   Fireground Injuries²   Total Injuries²
1997   100       40,920                 85,400
1998   93        43,080                 87,500
1999   113       45,550                 88,500
2000   105       43,065                 84,550
2001   105³      41,395                 82,250
2002   100       37,860                 80,800
2003   113       38,045                 78,750
2004   119⁴      36,880                 75,840
2005   115       41,950                 80,100
2006   106       N/A yet                N/A yet

¹ This figure reflects the number of deaths as published in USFA's annual report on firefighter fatalities. All totals are provisional and subject to change as further information about individual fatality incidents is presented to USFA.
² This figure reflects the number of injuries as published in NFPA's annual report on firefighter injuries.
³ In 2001, an additional 341 FDNY firefighters, three fire safety directors, and two FDNY paramedics died in the line of duty at the World Trade Center on September 11.
⁴ The Hometown Heroes Survivors Benefit Act of 2003 has resulted in an approximate 10% increase to the total number of firefighter fatalities counted for the annual USFA report on Firefighter Fatalities in the United States beginning with CY2004.
⁵ Published by the U.S. Fire Administration, 2006.

Harvard researcher Stefanos N. Kales and colleagues analyzed data on all firefighter deaths between 1994 and 2004, except those linked to 9/11. The research found that a very high percentage of firefighter deaths are due to heart disease. Firefighters' jobs expose them to extreme exertion and to toxic, high-risk environments. The heavy gear needed for this type of job requires firefighters to be fit enough to carry such equipment. The environment and the heavy gear make the job more deadly for firefighters. Understanding the risks around firefighters and the reasons behind them is vital.

Our research interest is to minimize line-of-duty deaths, whether due to heart attacks or other incidents on the scene. Some research has been done to maximize safety and minimize deaths by tracking firefighters through the incident scene and monitoring their status. A company in Florida announced that they will introduce a firefighter tracking system called the Pak-Tracker Firefighter Locator, which integrates a motion sensor device with a transceiver to send and receive critical information between individual firefighters and a command station. The product was supposed to be introduced in April of this year at the Fire Department Instructor's Conference in Indianapolis. The prototype for the product shows that it has a large size that can add more weight to the firefighter's gear. Also, no price prediction was released with this prototype.

Researchers at Worcester Polytechnic Institute announced in August of 2007 that they have been able to reduce the size of the locator transmitter so that every firefighter can wear one. The researchers expect that the system will be available in two years. The technology used in the system was not released or published.

This article will look at the design criteria based on the previous research and products and then present the conceptual design of a prototype system for tracking firefighters at fire scenes based on a Geographic Information System (GIS).

The system consists of a base unit and mobile units. The base unit consists of two reader antennas, fire-scene geo-referencing and management (GeoMan) software, and a laptop computer. The mobile unit consists of an active radio frequency identification (RFID) tag, an electronic compass, and a Bluetooth transmitter. The GeoMan software in the base unit creates a local geo-referencing system at the fire scene, processes the received radio signals to infer the instantaneous locations of the mobile units (firefighters), and displays the real-time locations of these units in the locally defined reference system. The mobile units continuously broadcast signals, which are received by the reader antennas and processed by the GeoMan software to track firefighters at the fire scene. The electronic compass provides basic local orientation information for the firefighters, which is transmitted to the base unit via the Bluetooth transmitter installed in the mobile unit. Local orientation information is also valuable for inferring a firefighter's track in the event of losing radio signals.

4. Conceptual Model

In the proposed model we considered the following characteristics: light weight, small size, power efficiency, real-time tracking, and accurate positioning. The first three characteristics are essential from a firefighter perspective. The other characteristics are crucial to the success of any tracking system, since pinpointing firefighters' locations in real time at fire scenes is critically needed. This is especially important in situations when a firefighter is not responding or is instantaneously unaccounted for. Real-time tracking enables the fire scene command center not only to account for firefighters, but also to use their relative positions for making life-saving decisions. Accurate positioning is an important characteristic given the size of fire scenes and the large scale map needed to show the locations. To be a practical, usable tracking system, it needs to be power efficient as well.

In order to better understand the process of firefighting, we attended a "training burn" held by the Orange County Fire Rescue Department (OCFRD) in mid 2007. We have also met and interviewed a few firefighters with OCFRD. We believe that such a needs assessment study is fundamental not only from a customer needs perspective, but also from a conceptual design standpoint. From the training burn and the interviews we learned that weight, size, and power efficiency are really important for many reasons. First, firefighters are already loaded with devices of such weight and size that adding another device is a real burden. Figure 1 shows the QFD that explains the customer needs and their relation to the product characteristics.

Figure 1: QFD for Tracking System

4.1 System Description

The proposed system consists of (a) a light weight, small size mobile unit that has a Radio Frequency ID (RFID) tag and a digital compass, and (b) a base unit. The RFID tag will transmit firefighter information (a unique identifier for the firefighter) to the base unit. The direction and orientation information of the firefighter will be broadcast to the base unit by the electronic compass via Bluetooth. The base unit is connected to two RF reader antennas, which will be used for inferring firefighters' positions by triangulation. The positions and direction/orientation information will be processed by the proposed GeoMan software and a real-time map will be produced and displayed on the laptop computer of the fire scene command center. Figure 2 shows the architecture of the proposed system.
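The paper does not spell out the positioning mathematics, so the following is a minimal sketch of one way two reader antennas could be used for triangulation: each antenna reports a bearing (angle of arrival) toward the tag in the local fire-scene coordinate system, and the position is taken as the intersection of the two bearing lines. The antenna coordinates, bearings, and function names are assumptions made for illustration.

import math

def triangulate(antenna_a, bearing_a_deg, antenna_b, bearing_b_deg):
    """Intersect two bearing rays (angles measured from the +x axis, in degrees)
    cast from two reader antennas with known local coordinates."""
    ax, ay = antenna_a
    bx, by = antenna_b
    # Direction vectors of the two rays.
    dax, day = math.cos(math.radians(bearing_a_deg)), math.sin(math.radians(bearing_a_deg))
    dbx, dby = math.cos(math.radians(bearing_b_deg)), math.sin(math.radians(bearing_b_deg))
    denom = dax * dby - day * dbx
    if abs(denom) < 1e-9:
        raise ValueError("bearings are parallel; position is not observable")
    # Solve antenna_a + t*dA = antenna_b + s*dB for t (Cramer's rule).
    t = ((bx - ax) * dby - (by - ay) * dbx) / denom
    return ax + t * dax, ay + t * day

# Two antennas 30 m apart on the local x axis; both report bearings to the same tag.
x, y = triangulate((0.0, 0.0), 60.0, (30.0, 0.0), 120.0)
print("Estimated firefighter position: (%.1f m, %.1f m)" % (x, y))

If the readers instead provided only signal-strength ranges, two antennas would leave a mirror ambiguity about the baseline joining them, which the compass heading broadcast by the mobile unit could help resolve.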
Figure 2: The Positioning Tracking System as Proposed

Conclusion

Each year an average of 105 firefighters die in the line of duty; thousands more are injured. The injuries sustained in a firefighting scenario can often impact the life of the fire veteran for the remainder of his/her life. As we progress we clearly see many more examples arising which depict the intensity and overall difficulty of a firefighting career. Fires in these scenarios are unrelenting and tremendously difficult to contain. Although firefighters already have an array of rather intricate equipment which aids them, it is simply not enough. We are still losing many brave men and women, courageous fire soldiers who will never again be united with their families or the people they loved. All of them share one thing in common: they take on the risks associated with an exceedingly difficult career to ensure our safety and the safety of others in these situations. When they meet their demise we venerate them, and although we care deeply, we are still resilient and never take on the full effect of their tragic loss. We simply forget about the families of these astounding men and women who are left with nothing but despondency.

The main goal of designing the proposed prototype system is to provide an efficient real-time tracking device that satisfies firefighters' needs. Such a system will help save the lives of those great people through accurately tracking their positions in a fire scene. Based on the needs assessment study, the characteristics of light weight, small size, power efficiency, and reasonable cost have been identified as essential for designing a firefighter tracking system that satisfies customer needs and operates efficiently.

References

1. Jesse Beitel, Nestor Iwankiw, 2002, "Analysis of Needs and Existing Capabilities for Full-Scale Fire Resistance Testing", NIST GCR 02-843.
2. U.S. Fire Administration, Topical Fire Research Series: Firefighter Injuries, Volume 2, Issue 1, July 2001 (rev. March 2002).
3. Sheng Zhou and John K. Pollard, 2006, "Position Measurements Using Bluetooth", IEEE, 2006, pages 555-558.
4. G. Knapp, "Microwave Coupler Feeds Outdoor Antenna through Walls", High Frequency Electronics, 2006, Summit Technical Media, pages 58-62.
5. U.S. Fire Administration, Matching Assistance to Firefighters Grants to the Reported Needs of the US Fire Service, A Cooperative Study Authorized by U.S. Public Law 108-767, Title XXXVI, October 2006.
6. Hui Tong and Seyed Zekavat, "A Novel Wireless Local Positioning System Via a Merger of DS-CDMA and Beamforming: Probability-of-Detection Performance Analysis Under Array Perturbations", IEEE Transactions, Vol. 56, No. 3, May 2007.

7. Marius Marcu, Sebastian Fuicu and Anania Girbant, "Local Wireless Positioning System", International Symposium on Applied Computational Intelligence and Informatics, IEEE 2007.
8. The World RFID Authority, "The Basics of RFID Technology", RFID Journal, 2007. http://www.rfidjournal.com/article/articleprint/1337/-1/1
9. Hae Don Chon, Sibum Jun, Heejae Jung, Sang Won An, "Using RFID for Accurate Positioning", Journal of Global Positioning Systems (2004), Vol. 3, No. 1-2, pages 32-39, published Feb 2005.
10. Scott Electronic Management System, http://www.scotthealthsafety.com, accessed November 25, 2007.


Addressing the Engineering Needs of the Nuclear Power Industry


Wei Zhan and Jacob Schulz
Texas A&M University
wei.zhan@tamu.edu

John Crenshaw and Tim Hurst


South Texas Project Nuclear Operating Company

Abstract

The demand for electricity in the United States is increasing. Nuclear power plants have better capacity, price, and environmental impact than other types of power plants. In the last few years, the nuclear power industry has been revitalized. There is an urgent need to find new engineers to support the needs of the nuclear power industry. This paper discusses this issue and proposes some actions that can be taken quickly by educational institutions and industry to effectively address the engineering needs of the nuclear power industry.

1. A looming nuclear power renaissance

The demand for electricity in the United States is increasing. According to the Department of Energy, the demand for electricity in the United States will increase by 45% by 2030 [1].

Figure 1. Estimation for Electricity Needed by 2030

Production costs for nuclear power have declined by over 30% in the last ten years (production costs include fuel, operation and maintenance, and the fee paid into the Nuclear Waste Fund). Currently the cost is 1.72 cents/kilowatt-hour. The prices of power from oil and natural gas tend to fluctuate much more than nuclear and coal power. The prices for electricity generated by different power plants are shown in Figure 2.

Figure 2. U.S. Electricity Production Costs (cents per kilowatt-hour)

Coal power plants face a major challenge of reducing carbon dioxide emissions. Meeting these requirements will raise the price of energy generated by coal power plants.

Currently, the amount of green energy generated from wind and solar power accounts for less than 5% of the total energy consumption.

Nuclear power can provide a large amount of the energy needed. Currently, there are 439 nuclear power reactors in 31 countries supplying 15% of the world's electricity. In France, 59 nuclear reactors supply 78% of its electricity. In the U.S., there are 104 nuclear reactors supplying 15% of the electricity [1].

With limited total global energy sources, the rising demands for energy in the USA and other developing countries such as China and India have been driving up the prices of oil, natural gas, coal and other energy sources.

Nuclear power plants are fueled by uranium, which is relatively abundant and is available from many stable countries such as Canada and Australia.


Although the price of uranium has increased about 500 percent since 2002, including a doubling in price last year, the operating costs of nuclear power plants have changed very little. Moreover, the rise in price has prompted an exploration boom that will ultimately lead to more mines and a greater supply. Construction accounts for as much as ¾ of the cost of nuclear generation. The construction of nuclear power plants has transitioned to a manufacturing process. Large modules are shop-fabricated and then assembled on site. This practice significantly reduces the time and cost to build nuclear power plants (see [2]).

Figure 3. Fuel as a percentage of electric power production cost

The price of uranium will never be as much of a challenge for the nuclear industry as fuel costs are for other sources of electricity, as illustrated by Figure 3 [6]. For example, 78 percent of the cost of electricity from a coal plant is the cost of the coal. At combined cycle gas plants, fuel is 94 percent of the production cost. At nuclear plants, fuel costs are only 26 percent of production costs, and only one-half of that nuclear fuel cost is the cost of uranium. Out of the 1.72-cent-per-kilowatt-hour production cost, uranium represents only two-tenths of a cent.
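The fuel-cost breakdown above can be checked with a line of arithmetic; the short sketch below simply reproduces that calculation using the figures quoted in the text (1.72 cents/kWh total production cost, a 26% fuel share, and roughly half of the fuel cost being the uranium itself).

production_cost = 1.72          # total nuclear production cost, cents per kWh (from the text)
fuel_share = 0.26               # fuel as a fraction of nuclear production cost (from the text)
uranium_share_of_fuel = 0.5     # roughly half of the fuel cost is the uranium (from the text)

fuel_cost = fuel_share * production_cost                 # about 0.45 cents per kWh
uranium_cost = uranium_share_of_fuel * fuel_cost         # about 0.22 cents per kWh
print("fuel: %.2f c/kWh, uranium: %.2f c/kWh" % (fuel_cost, uranium_cost))
# Output: fuel: 0.45 c/kWh, uranium: 0.22 c/kWh -- i.e. about two-tenths of a cent.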
Nuclear power plants are safer, easier to operate, and less expensive due to the availability of new technology, new designs, and lessons learned during the past 50 years. Simpler designs cut maintenance and repair costs. Unplanned plant outages are less frequent (50% in the 70's to less than 10% now). The average capacity factor of 90% for nuclear power plants is very high compared to 70% for coal plants, 40% for natural gas plants, and 30% for wind power plants [1].

Nuclear reactors emit almost none of the greenhouse gases responsible for global warming. Without nuclear power, the U.S. electric sector's carbon dioxide emissions would be 27% larger.

In summary, nuclear power offers the possibility of large quantities of base load electricity that is cleaner than coal, cheaper and more secure than gas, and more reliable than wind.

In addition to the cost and environmental concerns, the United States would like to be less dependent on foreign oil.

The Energy Policy Act of 2005 provides an $18/MWh tax credit for the first 6,000 MW of new nuclear capacity [6]. The Energy Policy Act also created insurance against delays in commercial operation caused by licensing or litigation. It also authorized the Department of Energy to provide loan guarantees for up to 80 percent of the cost of projects, like nuclear plants, that reduce emissions. These policies partly relieve the pressure of finding large initial investments for new nuclear power plants. A new licensing process, COLA (Combined Operating License Application), speeds up the process of building new nuclear power plants.

Public support for nuclear power is growing. A poll conducted in 2006 shows that 68% would accept a new reactor at the nuclear power plant site nearest to where they live, and 81 percent said that nuclear energy will be important in meeting the nation's electricity needs in the future [1].

Geopolitics, technology, economics and environmental concerns are all changing in nuclear power's favor. This convergence of powerful economic and political forces is leading to a renaissance of nuclear power. It is no longer a question of "Do we need new nuclear plants in the United States?", but a question of "How many, where and when?"

2. Shortage in the nuclear engineering force

France, Japan, Korea, and some developing countries continue to build nuclear power plants. But in the US, Ukraine's Chernobyl accident in 1986, along with the accident at Three Mile Island in Pennsylvania in 1979, sent the industry into a decline.


The public got scared. The regulatory environment tightened, and as a result costs went up. Billions were spent bailing out companies making losses on nuclear power plants. For two decades, neither the government nor bankers wanted to touch nuclear power plants. The nuclear power industry in the US experienced almost no development during that period.

As a result, many of the existing nuclear power plants are using outdated technologies. This is partly the reason that students are not interested in a job in the nuclear power industry. Therefore, there are very few young engineers in the field of nuclear power. Very few young engineers even thought about entering the nuclear power industry.

On the other hand, over 25 new nuclear power plants have been announced recently [7] and efforts are in full swing to revitalize the industry. This new boom requires thousands of engineers to take on the expanding engineering tasks related to the design, operation, and maintenance of the plants. To make matters worse, many current engineers in the nuclear industry are at or near retirement age. Due to the nature of the nuclear power industry, it takes a few years for a new engineer to go through the rigorous training program in the nuclear industry. As a result, there is an urgent need to find new engineers to join the nuclear power industry workforce to support the existing and new nuclear power needs. Currently, the US education system lacks the capability to provide the nuclear industry with the volume of qualified engineers it needs.

Many nuclear engineering programs prepare their students for a career in academia and national research labs rather than the nuclear power industry, since there was almost no need for new engineers from the industry in the last two decades. The Navy has been one of the major sources of nuclear power engineers in recent history. As a result, students in the nuclear power engineering programs in many universities are getting more theoretical training and are encouraged to do fundamental research rather than gain practical, hands-on experience. They are not prepared for industrial jobs.

Other countries such as France, Japan, and Korea are moving ahead in terms of nuclear power plant building and educational programs in the field. The outsourcing of jobs in nuclear power engineering to other countries is not desirable for the US. If we do not take action immediately, what happened to the manufacturing industry may happen to the nuclear industry.

3. How to address the needs of the nuclear power industry?

There are needs for two types of engineering resources in the nuclear power industry: operators/maintenance workers and engineers. The needs for operators and maintenance workers can be addressed by the community colleges with a two-year degree relatively quickly. The community colleges are usually more flexible in changing their programs and curricula. They are also more responsive to the needs of industry. The need for engineers is more difficult to meet. There are very few four-year nuclear power engineering technology programs in the US; the University of North Texas has one of them [3]. The number of graduates from these programs falls far short of the needs of the nuclear power industry due to the latest boom in the industry. People in both industry and academia are realizing the seriousness of the problem faced by the nuclear power industry.

Faced with the challenge of meeting the increasing need for engineers in the nuclear power industry, universities and the nuclear power industry are making efforts to address this issue. For instance, at Texas A&M, a nuclear engineering technology program is being created within the Department of Engineering Technology and Industrial Distribution. However, major changes in the educational system face resistance in academia [4]. In addition, the effect of this type of change will not be seen by the industry in the next 4 to 5 years, but the sudden increase in need from the nuclear power industry is immediate. A quicker and more efficient way to address the immediate needs of the nuclear power industry is needed.

We propose the following steps to partly address the immediate needs of the nuclear power industry:


1). Curriculum modification:
Modifications to the curricula of several key courses will be made to help introduce students to the field of nuclear power engineering and electrical production. Modifications include additional material on power generation theory in an AC circuit analysis course, with examples of nuclear power generation to illustrate the basic concepts.

The control and instrumentation courses will be modified to emphasize process control systems. In the instrumentation course, the concepts of smart sensors and fieldbus technologies will be added, again with examples from the new nuclear power plants. The incorporation of process control concepts in the controls course includes exposing students to Distributed Control Systems. The use of this knowledge is not limited to the nuclear power industry; many oil and gas companies and chemical plants also need engineers with similar skill sets. The proposed curricula changes are welcomed by most students based on initial feedback.

Guest lecturers from the nuclear power industry will be invited to give classroom presentations to the students. This allows the students to learn from experienced nuclear power engineers about real-world problems. It is also another way to build a connection between the students and the nuclear power industry.

2). Senior Design Projects:
The limited amount of education in nuclear power plant operation will expose many students to the industry and allow them to be better prepared for work in the nuclear power industry. In addition, senior design projects in the field of nuclear power and plant controls will be incorporated. It would be helpful if the nuclear power industry could sponsor some of these projects. The students can go to the power plant to understand the problem that they choose to work on. They will get advice from engineers working in the nuclear power plant. The students working on senior design projects in related topics will spend many hours outside the classroom trying to learn the knowledge by themselves. In the end, they will have more in-depth knowledge and hands-on experience in nuclear power engineering technology. The senior project can lead to solutions to real problems of modest scale. The typical cost of sponsoring a senior design project is around $5000. This money is well spent considering the hands-on training students will receive while trying to solve a real problem for the company.

3). Summer Faculty Fellowship:
A summer faculty fellowship in the nuclear power industry is another effective way of making sure the students learn the skills needed by the nuclear power industry. If the faculty member teaching control and instrumentation courses does not know what kind of problems a nuclear power plant faces, chances are the material taught in the classroom will not be very relevant to the nuclear power industry. The idea is simple: the trainer needs to be trained first. The summer faculty fellowship program can be a way for faculty members to keep a close tie with the nuclear power industry. In addition to the educational benefits, faculty members can also work on some practical research problems for the nuclear power industry. Research and education can be seamlessly combined to create a win-win situation for academia, industry, and most importantly the students. The first author, Dr. Wei Zhan, spent three months in the summer of 2007 at South Texas Project Nuclear Operating Company on a summer faculty fellowship program. The benefit of that summer faculty fellowship experience is enormous. Many of the ideas in this paper were inspired by that experience.

4). Laboratory Development:
Laboratories can be established with a focus on the control of field devices using DCS and fieldbus technologies and wireless communication. This provides students with opportunities to conduct research work related to nuclear power plant control systems. The lab development effort can be tied together with the senior design projects and the curriculum modification discussed in items 1 and 2. The students involved in the initial lab development work get hands-on experience, and the results can be used to improve the control and instrumentation courses in terms of developing new lab classes.
5). Summer/Spring Student Internship having senior project design related to nuclear
Students interested in nuclear power industry power plants, student summer interns, faculty
will be encouraged to work as summer/spring summer fellowships, research projects supported
interns for the nuclear power plant companies. The by the nuclear industry, and a new nuclear power
internship provides an opportunity for both the engineering technology program are just some of
students and the company to know each other. the steps that we can take to address this issue.
During the internship, the students will learn the One of the objectives of this paper is to increase
basics of the nuclear power plant, some practical the awareness of this issue in academia so that
engineering experience, and better ideas about more universities will work on the solution to this
what they should be focusing on when they come problem.
back to school. The fourth author, Jacob Schulz,
worked at South Texas Project Nuclear Operating 5. References
Company as a student summer intern. His [1] Lighting the future with clean nuclear energy,
experience as an intern resulted in a white paper GP0066, Nuclear Energy Institute, Dec. 2006.
on the recruiting and retaining of young engineers
[5], and gave him a better idea of the coursework [2] T. Hurst, “Transfer ABWR construction techniques
to U.S. shores”, POWER, Vol. 151, No. 5, May 2007,
required to be successful in the industry.
pp. 43-46.
6). Recruiting [3] M. Plummer, J. Davis, and C. Bittle, Management
Again, because of the down turn in the nuclear changes as a threat to onsite delivery of nuclear
power industry in last two decades, very few engineering technology programs, 2007 ASEE
students know much about the industry. The Conference. AC 2007-3090.
nuclear power industry and universities need to
[4] Report of the Nuclear Energy Research Advisory
work together to increase the awareness of the
Committee Nuclear Power Engineering Curriculum
nuclear power industry among the students. This Task Force, Department of Nuclear Engineering &
can be achieved by teaching nuclear related Radiation Health Physics, Oregon State University,
material in different courses, nuclear power April 7, 2004.
industry participating in the career fairs, and
collaboration on research projects between nuclear [5] Ideas for recruiting/retaining young engineers, STP
power industry and universities. With a close tie white paper, J. Schulz, 2007
between the faculty members and nuclear power
[6] The changing climate for nuclear energy: Annual
industry established, the nuclear power industry briefing for the financial community, Nuclear Energy
would have a much better chance of finding the Institute, Feb. 22, 2007.
students they need. This can be done through
internships, sponsoring senior design projects, and [7] New Nuclear Plant Status, Nuclear Energy Institute,
funded research projects. August, 2007.

These intermediate steps should help Wei Zhan is currently an assistant professor in Department of
addressing the urgent needs of the nuclear power Engineering Technology and Industrial Distribution, Texas
industry as well as encourage students to enroll in A&M University, College Station, TX 77843-3367.
the 4 year nuclear power technology program. John Crenshaw is currently a vice president at South Texas
With the industry, universities and community Project Nuclear Operating Company in Bay City, TX 77414.
colleges working hand in hand, we are confident Tim Hurst is currently the principal I&C engineer at South
that the engineering needs for the nuclear power Texas Project Nuclear Operating Company in Bay City, TX
renaissance will be met. 77414. He is also the president of Hurst Technology in
Angleton, TX 77515

4. Conclusion Jacob Schulz is currently an undergraduate student in the


Department of Mechanical Engineering at Texas A&M
The shortage of engineers in the nuclear power
University, College Station, TX 77843.
industry is a serious problem. We need to find
different ways to address this issue as quickly as
possible. Change of curricula, building labs,

Partial Discharge (PD): PSPICE Simulations


Andrzej J Gapinski
Penn State University-Fayette
ajg2@psu.edu

ABSTRACT material irregularities or along the surface


Partial discharge that may occur in electrical between two different insulating materials.
systems has been investigated in the past to It may or may not exhibit visible symptoms,
provide better understanding of the which affects detect-ability and thus may create
phenomenon. The understanding of the particularly dangerous conditions for equipment
discharge process is critically important or personnel.
especially in hazardous environment of chemical A corona discharge2 on the other hand usually
plants, oil and gas, and mining industries. The displays almost steady glow discharge in air.
purpose of the article is to discuss partial PD may begin quite innocently within cracks,
discharge and its software simulations using voids, etc. within a solid dielectric, at conductor-
PSPICE software. dielectric surfaces within solid or liquid
dielectric or insulating materials. PD may be
Partial Discharge – Can Cause Catastrophic infrequent event causing intermittent
Damage malfunction or steady degradation of the system
performance. The final effects are usually quite
Partial discharge (PD), which may occur in devastating, causing complete failure of the
electrical apparatus, is vitally important for apparatus.
many industries especially those, which are Partial discharge dissipates energy in various
explosion prone. Partial discharge may not only forms:
degrade the operating equipment but may also • Electromagnetic (radio, light, heat)
cause a total breakdown of an apparatus and • Acoustic (audio, ultrasonic)
create unsafe and dangerous environment. In • Gases (ozone, Nitrous oxides).
addition to negative impact on reliability and PD measurements have become well-established
operational life span of the apparatus, partial with their own techniques and standards. 4,5
discharge may contribute to creation of
conditions prone to explosions. Namely, partial PD in Dielectric – Equivalent Electric Model
discharge may generate or ionize dangerous
gases, which may be highly combustible. The Dielectric with PD fault occurring in a cavity is
industries such as chemical, oil and gas, and usually modeled1 as a capacitive voltage divider
mining are especially vulnerable. Thus, in parallel with another capacitor, as in figure 1.
naturally, partial discharge measurements are A fault in the insulating material is represented
part of testing explosion protected electrical by a ’void’ with capacitance C1, which is in
equipment.1
series with the “good,” intact part of insulation
represented by C2. C3 represents the remaining,
Partial Discharge – What is it? intact dielectric.
The total capacitance, Ctot, for low voltage, of
A partial discharge is a localized dielectric the capacitor configuration is:
breakdown of a small portion of a solid or liquid
electrical insulation system under high voltage Ctot = C 3 + C1 * C 2 /(C1 + C 2) (1)
stress2. Partial discharge might occur in voids
within solid insulation, across the surface of
In the case of increased voltage, PD is initiated
insulating material due to contaminants or
in C1, resulting in a low voltage across C1.
Then, the total capacitance is:

including most often used medium voltage


Ctot = C3 + C2     (2) apparatus.
The difference between (1) and (2) is:
∆C = C2^2/(C1 + C2)     (3) Conclusion

The purpose of the article was to discuss partial


discharge (PD) and to provide software
An abrupt change in the capacitance is reflected
simulations of the phenomenon, which is
in a change of a compensating charge, which
experienced frequently in power and signal
causes spike in values of electric current.
transmissions. The performed simulations match
closely the field data reported by the technical
PSPICE Simulations
literature. The understanding of the phenomenon
and early detection is critically important for
The simulations were performed for two values
number of industries that include mining,
of AC input voltage: 120V(see figure 2 and 3),
chemical, telecommunication, power
and 2.4KV(see figure 4 and 5). The capacitor
transmission, utilities, etc. Often, the partial
C1, representing void, with increasing input
discharge is not easily detectable and may cause
voltage, will experience partial discharge, which
severe degradation of the insulating material
will cause a drop in voltage magnitude across it.
affecting apparatus performance either
“Firing” is simulated here with a switch coupled
temporarily or permanently. Undetected partial
with pulse voltage source, which closes at a peak
discharge may create unsafe and dangerous
value of capacitor voltage, C1, providing
environment for the equipment and personnel.
constant, low voltage across capacitor C1. The
switch is activated by a pulse voltage source. A
References
switch is essentially open and closes only for a
short duration of the discharge, .2ms. Due to
1. Heinrich Groch. 2004. Explosion
capacitive nature of the overall impedance, there
Protection: Electrical Apparatus and
is a phase angle shift between input voltage and
Systems for Chemical Plants, Oil and
capacitive current.
Gas Industry, Coal Mining. Elsevier
As expected, the simulations plots show spikes
Butterworth-Heinemann.
in electrical current experienced by capacitor
2. www.wikipedia.com Dec. 15, 2006.
configuration and consequently in electrical
3. E. Kuffel, W.S. Zaengl. 1992. High
voltage.
Voltage Engineering Fundamentals.
Pergamon Press. First Edition.
Partial Discharge Detection
4. IEC 60270:2000/BS EN
60270:2001”High-Voltage Test
There is a variety of methods employed in
Techniques – Partial Discharge
partial discharge detection. Most common non-
Measurements.” International Standards.
intrusive methods6 involve the detection of
5. IEEE 400-2001”IEEE Guide for Field
transient earth voltages (TEV) in the radio
Testing and Evaluation of the Insulation
frequency of spectrum and ultrasonic
of Shielded Power Cable Systems.”
frequencies of emitted emissions. The methods
6. www.aetechnology.com. March 8, 2008.
allow for comprehensive assessment of the
condition of insulation in many systems

C3 C2

C1

Where:
C1 – capacitance of the void
C2 – capacitance of the intact part of dielectric in series with C1
C3 – capacitance of the rest, intact part of the dielectric surrounding void

Figure 1. Equivalent electric circuit of a PD in a solid dielectric
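To make the three-capacitance model of Figure 1 concrete, the following Python sketch (illustrative only; the capacitance values are assumptions, not values from the article) evaluates equations (1)–(3) and shows the abrupt change in total capacitance when the void C1 is bridged by a discharge.

# Illustrative sketch only: evaluating equations (1)-(3) of the model in Figure 1.
# The capacitance values below are assumptions, not values from the article.

def c_series(a, b):
    """Series combination of two capacitances."""
    return a * b / (a + b)

C1 = 5.0    # assumed void capacitance (pF)
C2 = 50.0   # assumed capacitance of the intact dielectric in series with the void (pF)
C3 = 200.0  # assumed capacitance of the remaining intact dielectric (pF)

c_tot_low = C3 + c_series(C1, C2)   # equation (1): low voltage, no discharge in the void
c_tot_pd = C3 + C2                  # equation (2): the void is bridged by the discharge
delta_c = c_tot_pd - c_tot_low      # equation (3): equals C2**2 / (C1 + C2)

print(f"Ctot before discharge : {c_tot_low:.2f} pF")
print(f"Ctot during discharge : {c_tot_pd:.2f} pF")
print(f"Delta C               : {delta_c:.2f} pF")
print(f"C2^2/(C1 + C2) check  : {C2**2 / (C1 + C2):.2f} pF")

The jump from equation (1) to equation (2) is what produces the compensating-charge spike discussed in the text.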

Figure 2
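The simulation plots of Figures 2–5 are not reproduced in this extraction. As a rough stand-in for the switching experiment described in the PSPICE section, the sketch below is a simple numerical approximation, not the author's PSPICE netlist; the source amplitude, supply frequency, residual voltage, and capacitances are assumed values. It clamps the divider voltage across C1 to a low residual level for 0.2 ms at the waveform peak and reports the resulting current spikes i = C·dV/dt.

# Numerical stand-in (assumptions only) for the described switched capacitive divider.
import math

C1, C2 = 5e-12, 50e-12                # assumed void and series capacitances (F)
V_PEAK = 120 * math.sqrt(2)           # 120 V RMS source amplitude
FREQ = 60.0                           # assumed supply frequency (Hz)
V_RESIDUAL = 10.0                     # assumed residual voltage across C1 during the discharge (V)
DT = 1e-6                             # simulation time step (s)

def divider_voltage(t):
    # Voltage across C1 from the C1-C2 series divider; C3, in parallel, does not affect the division.
    return V_PEAK * math.sin(2 * math.pi * FREQ * t) * C2 / (C1 + C2)

fire_start = 1.0 / (4 * FREQ)         # the "firing" switch closes at the first positive peak
fire_end = fire_start + 0.2e-3        # and stays closed for 0.2 ms

prev_v = 0.0
for step in range(int(0.02 / DT)):    # simulate roughly one supply cycle (20 ms)
    t = step * DT
    v_c1 = V_RESIDUAL if fire_start <= t < fire_end else divider_voltage(t)
    i_c1 = C1 * (v_c1 - prev_v) / DT  # i = C dV/dt spikes when the switch closes or opens
    if abs(i_c1) > 1e-6:              # report only the large transient currents
        print(f"t = {t*1e3:7.3f} ms  V(C1) = {v_c1:8.2f} V  I(C1) = {i_c1*1e6:9.3f} uA")
    prev_v = v_c1

Only the two switching instants produce currents above the reporting threshold, mirroring the spikes described for the simulated plots.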

Figure 3

Figure 4

Figure 5

On Learning Performance Analogy Between Some Psycho-Learning


Experimental Work and Ant Colony System Optimization
H. M. Hassan and Saleh M. Al-Saleem
Arab Open University
(Kingdom of Saudi Arabia Branch, IT Department)
salehms@arabou.org.sa hassan_sasa_2004@yahoo.com

Abstract minimum error. Thus, cat's stored experience


seems to sequence improvement as number of
Simulation results of Thorndike's Psycho- trials increases. So, that experience modified
Learning Experimental Work using artificial well continuously following dynamical
neural network (ANN) modeling are synaptic adaptation (internal weights
presented. Moreover learning performance of dynamics), resulting in better learning
related to Thorndike's work is analogously performance. It is worthy to note that response
compared with ant colony systems Ant Colony speed in Thorndike's experimental work,
System (ACS) performance. Respectively the learning performance curves (for different
first model, presents Thorndike's cat individuals), agree well with a set of odd
behavioral learning to get out from a cage sigmoid functions. Similarly, this set obeys
for obtaining food. The second ACS model performance speed curves (for different
used for solving traveling salesman problem communication levels) concerned with (ACS)
(TSP) optimum by bringing food from different optimization processes while solving (TSP).
food sources to store (in cycles) at ants' nest. In natural world it is observed some diverse
Conclusively, both types of natural bio- learning aspects for the two suggested natural
systems (suggested Thorndike and ACS biological system (creature types for cats
models) investigated relations between and ants). However, both are shown to behave
adaptive behavior reinforcement learning and similar to each other, considering their
combinatorial optimization problem. learning performance curves. In details,
simulations of learning performance curves
1. Introduction behave as exponential decay function till
reaching stable minimum error value or
This work presents comparative analytical minimum time response. Conversely, this
study for some behavioral learning processes, minimum error implies maximum optimum
referring to simulated experimental results. speed for both models. That conversed
That is derived from adaptive behavior performance shown to agree well with odd
reinforcement learning and combinatorial sigmoid function behavior. Conclusively,
optimization observed for two types of natural maximum speed for cat to get out from cage at
bio-systems. Namely such experimental work Thorndike's work corresponds well to
based firstly on original animal learning optimum speed reaching solution of TSP
Thorndike's work, and secondly concerned solved by ACS model. Moreover, some other
with ant colonies solving the traveling models considering pattern reconstruction
salesman problem. Behavioral learning problem and genetic programming are also
processes for both are simulated by artificial given in comparison with two above presented
neural network and artificial ant modeling models.
respectively, [1], [2]. The paper is organized as follows. At next
However, commonly, both systems section revising of Thorndike's work is
observed to obey learning paradigm originated presented briefly. Description of modeling
from principles of learning without a teacher. process of Thorndike's work is illustrated at
Thorndike's cat behavioral learning observed section three. The fourth section is dedicated
to converge for optimization, through to illustrate ant colony system modeling.
sequential trial and error steps, [3] (without Finally, at last section, some conclusive
any supervision). The convergence process remarks and discussions are given.
obeys exponential decay function till reaching

2. Revising of Thorndike's work

Referring to learning behaviorism


approach, Thorndike had carried out – about
one century ago – some psycho-learning
experimental work on animals, [3], [4]. This
work adopts behavioral learning approach
through trials and errors repeated steps
(cycles), [5]. Initially, the cats performance
trials results in random outputs. By sequential Fig.2 Thorndike normalized results
trials, errors observed to become minimized as seem to be near as exponential decay
learning cycles’ number increases. Referring to
fig.1, Original Thorndike's results are shown
as relation between response time and number
of trials. The results shown at fig.1, represents
behavioral learning performance of
Thorndike's work. However, normalized
learning curve presenting performance is
approximately given at figure 2.
Conversely to above given presentation of
learning performance curve, response speed
curve (for Thorndike's work) introduced at
fig.3. Interestingly, this curve seems to behave Fig.3 Illustrates normalized
similarly to sigmoid function behavior, [6]. performance curve presenting
response speed rather than original
curve presenting response time at
figure 2.

[Figure 1 plot: response time (sec), 0 to 160, versus number of trials, 0 to 25.]

3. Description of Thorndike's work
modeling

On the basis of original observed
experimentally measured performance,
realistic modeling for Thorndike's work is
presented at this section. The behavioral
learning process of the original experiments is
considered to belong to learning by interaction
with environment principle. However, the
error detected during that process is corrected
Fig.1 the original results of Thorndike spontaneously depending upon stored
representing learning performance for experience inside cat's internal neural system
a cat to get out from the cage for i.e. cat's experience has to be modified
reaching food. dynamically by synaptic adaptation (plasticity)
of internal weights' connectivity.
Consequently, suggested model for
Thorndike's work considers dynamical
changes of learning rate of as number of
learning steps increases. The statistical
analysis of model results proved the
consistency of learning performance
distribution as shown at following table 1.

Table 1. Illustrates statistical analysis of model results representing Thorndike's work:

Learning rate   Average number of   Variance   Standard      Coefficient of
value η         training cycles     σ²         deviation σ   variation ρ
0.1             100.7               93.7889    9.6845        0.0962
0.2             51.3                22.6778    4.7621        0.0928
0.3             34.8                10.1778    3.1903        0.0917
0.4             26.5                6.2778     2.5055        0.0945
0.5             21.5                2.9444     1.7159        0.0798
0.6             18.2                2.4        1.5492        0.0851
0.7             15.8                1.5111     1.2293        0.0778
0.8             14.2                1.2889     1.1353        0.08
0.9             12.6                1.1556     1.0750        0.0853

description of original work performance results.
Additionally, it is interesting to observe that
suggested Thorndike's modeling performance
appears to be analogously similar to behavioral
performance of another ACS optimization
model. By applying the ANT-density algorithm for
solving some TSP, results therein, [7], show
that efficiency per ant (required to reach
optimum) well improved as the number of ants
increases; see figure 5. Consequently, it could be
stated that the number of trials for the given Thorndike modeling is
analogous to number of ants. That considering
ACS behavior adopting ANT- density
algorithm for solving TSP.
Finally, the presented model seems to take
into account the mixed learning paradigms
constituted of reinforcement learning, [8],
error correction, [9], and interaction with
environmental conditions, [10].

[Figure 4 plot: average number of training cycles versus learning rate value (0 to 1), decaying from a virtual initial infinite error value toward the minimum error limit.]

Fig.4 illustrates the relations between


error values and adaptive learning rate
values indicating the increase of stored
experience in biological neural
systems. Fig.5 Number of cycles required to
reach optimum rated to the total
The model performance learning curve is number of ants adapted from, [7].
shown at the above figure. This figure
illustrate, modeling similarity to the 4. Ant colony system modeling
approximated original learning performance
curve shown at Fig.2. The learning rate This section briefly introduces analysis of
parameter (η(m)) is changed as a function of behavioral learning process of ACS. that
training cycles number (m) in accord with the analysis considers solving TSP on the basis of
following formula: natural ecological observations in an ant
η(m) = η0 (1 − e^(−m/τ)) / (1 + e^(−m/τ))     (1)
colony model, [11]. Additionally, interesting
analysis of some recently published work is
taken into account, [1].
Referring to Fig.3 shown at section 2 this
where m is number of training cycles figure illustrates the speed response for
during learning process simulating Thorndike's Thorndike's experimental work. Obviously for
experimental work. η0 is maximum learning different individuals (cats) the maximum
rate for our case equals 0.9, and τ is the search reached response speed may differ from some
time for the cat to get out from its cage to individual to another. This different response
obtain food it is suggested to equals unity. speed is analogous to different communication
Referring to this formula, it is noticed that levels among agents (artificial ants) as shown
maximum value of learning rate parameter at the following figure in blow. It is worthy to
converges to 0.9, as number of learning cycles note that communication among agents of
increases. So, this model provides realistic artificial ants model develops different speed
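As a quick numerical illustration of the adaptive learning-rate rule in equation (1) (a sketch, not part of the original model code), the Python lines below evaluate η(m) with η0 = 0.9 and τ = 1 as stated in the text and show the convergence toward 0.9.

# Sketch only: evaluating the adaptive learning rate of equation (1).
import math

def learning_rate(m, eta0=0.9, tau=1.0):
    """Adaptive learning rate after m training cycles, per equation (1)."""
    return eta0 * (1.0 - math.exp(-m / tau)) / (1.0 + math.exp(-m / tau))

for m in (1, 2, 5, 10, 20):
    print(f"m = {m:2d}  eta(m) = {learning_rate(m):.4f}")
# eta(m) approaches eta0 = 0.9 as the number of training cycles grows.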

values to obtain an optimum solution


considering variable number of agents (ants).

Fig.7 Graphical representation of


learning performance of ACS model
with different communication levels (λ).

Fig. 6: Communication determines a The above set of sigmoid functions could be


synergistic effect with different conversely presented as a set of normalized
communication levels among agents decay exponential curves given by following
leads to different values of average formula where suggested (ηi) as individual
speed. differences factor. These differences are
suggested to be as one of factors (ηi). These
Consequently as this set of curves reaches factors could be presented in case of
different normalized optimum speed to get considering different learning performance
TSP solution (either virtually or actually) the curves (when normalized) as follows:
solution is obtained by different number of y(n)= exp(-ηi(n-1)) (4)
ants, so this set could be mathematically Where (n) is the number of training cycles.
formulated by following formula: That set of curves is illustrated graphically at
figure 8 given below.
f(n) = α (1 − e^(−λn)) / (1 + e^(−λn))     (2)
where α is an amplification factors
representing asymptotic value for maximum
average speed to get optimized solutions and λ
in the gain factor changing in accords with
communication between ants. However by this
mathematical formulation of that model
normalized behavior it is shown that by
changing of communication levels
(represented by λ) that causes changing of the Fig.8 illustrates different learning
speeds for reaching optimum solutions. In performance curves for the examples
figure 6 given below, it is illustrated that
given considering normalization of
normalized model behavior according to
output response values adapted from,
following equation.
[13].
y(n) = (1 − exp(−λi(n−1))) / (1 + exp(−λi(n−1)))     (3)
5. Conclusions
where λi represents one of gain factors
(slopes) for sigmoid function.
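The following Python sketch (illustrative only; the λ and ηi values are assumptions) evaluates the two curve families discussed above: the sigmoid speed family of equation (2), of which equation (3) is the normalized case, and the exponential-decay learning curves of equation (4).

# Sketch only: the sigmoid speed curves and the decay learning curves, assumed parameters.
import math

def speed_curve(n, lam, alpha=1.0):
    """Equation (2): normalized average speed after n cycles for gain/communication level lam."""
    return alpha * (1.0 - math.exp(-lam * n)) / (1.0 + math.exp(-lam * n))

def decay_curve(n, eta_i):
    """Equation (4): normalized learning (error) curve for individual-differences factor eta_i."""
    return math.exp(-eta_i * (n - 1))

for lam in (0.2, 0.5, 1.0):                # assumed communication levels
    speeds = [round(speed_curve(n, lam), 3) for n in range(1, 6)]
    print(f"lambda = {lam}: speed {speeds}")

for eta_i in (0.2, 0.5, 1.0):              # assumed individual-differences factors
    errors = [round(decay_curve(n, eta_i), 3) for n in range(1, 6)]
    print(f"eta_i  = {eta_i}: error {errors}")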
The above animal learning experimental
work (Thorndike's cat), together with its
analysis and evaluation by ANN modeling,
agrees well with the ACS optimization process.
Also, the performance of both (ants and cats)
is similar in that latency time is minimized by
increasing the number of trials, [4].

Referring to, [6], therein, shown that both [9] Haykin, S. 1999, Neural Networks, Englewood
work for Thorndike and Pavlov are supporting Cliffs, NJ: Prentice-Hall pp 50-60.
each other for learning performance, [3],[4]. [10] Fukaya, M., et al., 1988: Two level Neural
So, it is obvious that both obeys generalize Networks: Learning by Interaction with
Environment, 1st ICNN, San Diego.
(LMS) algorithm for error minimization by
[11]Dorigo, M. et al, 1997: Ant colonies for the
learning convergence, [4]. Also, that algorithm traveling salesman problem, at web site
agrees with the behavior of brainier mouse www.iridia.ulb.ac.be/dorigo/aco/ aco.html
behavior (that is genetically reformed) as [12] Hassan, H. M., On principles of biological
shown at, [13]. Additionally; adaptation information processing concerned with learning
equations for all of presented systems in the convergence mechanism in neural and non neural
above are running in agreement with dynamic bio-systems" Published at CIMCA 05 ,28-30
behavior of each other. Moreover, the learning Nov.2005,Vienna ,Austria.
algorithms for suggested systems are close to [13] Hassan, H. M., 2005: On quantitative
each other with similar iterative steps (either mathematical evaluation of long term Potentiation
and depression phenomena, using neural network
explicitly or implicitly). Finally, it is precious
modeling, published at SimMod 2005, pp.237 –
to note that the rate of increase of speed of 241.
response is analogous to rate for reaching [14] H. M. Hassan "On Learning Performance
optimum average speed in ACS optimization Evaluation for Some Psycho-Learning
process. Moreover, the increase on number of Experimental Work versus an Optimal Swarm
artificial ants is analogous to number of trials Intelligent System." Published at IEEE -
in Pavlov’s and Thorndike's work, as shown at Symposium on Signal Processing and Information
[1],[6],[14]. On the basis of above learning Technology ISSPIT 2005 Fifth Symposium held
algorithmic models, the issue of optimal during 18-20 Dec. 2005, Athens – Greece.
methodology for teaching how to read has [15]H. M. Hassan, et al, "Towards Evaluation of
Phonics Method for Teaching of Reading Using
been recently investigated, [15].
Artificial Neural Networks (A Cognitive Modeling
Approach)",to be published at IEEE Symposium on
References Signal Processing and Information Technology
ISSPIT 2007 , Seventh Symposium , 15-18 Dec.
[1]H. M. Hassan, 2004.” On evolutional study of 2007,Cairo-Egypt.
comparative analogy between swarm smarts and
neural network systems (part 1), published at NICA
2004, Natural inspired computer and applications,
Hiefe, China,
[2]Simon Garnier, Jacques Gautrais, and
Guy Theraulaz, 2007 “The biological principles of
swarm intelligence "Swarm Intelligence Journal
Volume 1, Number 1 / June, pp.3-31
[3] Thorndike E.L., 1911: Animal Intelligence,
Darien, Ct. Hafner. October 24-29
[4] Pavlov, I.P., 1927: Conditional Reflex, An
Investigation of The Psychological Activity of the
Cerebral Cortex, New York, Oxford University
press.
[5] Hampson, S.E., 1990: Connectionistic Problem
Solving, Computational Aspects of Biological
Learning, Berlin, Birkhouser..
[6] Hassan H. and Watany M.,2003: on
comparative evaluation and analogy for Pavlovian
and Throndikian psycho-learning experimental
processes using bioinformatics modeling, AUEJ,
6,3, 424-432, July.
[7] Alberto C., et al., 1991: Distributed
optimization by ant colonies. Proceeding of
ECAL91, Elsevier Publishing pp 134-142.
[8] Moriarty, E.D. et al, 1999: Evolutionary
algorithms for reinforcement learning, journal of
AI research 11:241-276.

A Multivariate Statistical Approach to Analyze Nursing Errors

Xiaochun Jiang, BaaSheba Rice, Gerald Watson, and Eui Park


North Carolina A & T State University
xjiang@ncat.edu

Abstract test and laboratory results, and medication


instructions. The growth in the number of and the
Healthcare has been a contributing factor for wages earned by employed Registered Nurses (RN)
the increasing lifespan in the United States and mirror the growth of the healthcare industry. For
around the world. However, the healthcare instance, employment by registered nurses
industry has also seen medical errors that have increased by 5.4 percent during a 3 year period
serious consequences. Nursing errors are one of from 2003 to 2005. This growth rate was 84 percent
the problems. As the U.S. faces shortage of greater than the overall growth rate of total number
nurses, it is more important to identify the of employees which grew by only 2.9 percent
causes for such errors and provide suggestions to during that same 3 year period [1].
improve the situation. This research aims to Unfortunately, along with the rapid growth of
investigate the relationship between some key the healthcare industry comes the startling
factors that contribute to various nursing errors. number of medical errors. It has been estimated
Data from a voluntary medication error that medical errors result in the deaths of
reporting system at a local hospital were between 44,000 and 98,000 hospital patients
collected and analyzed. A multiway frequency annually [2]. Furthermore, medical errors do not
analysis was applied to study the relationship occur in just hospitals but in all settings, such as
among the variables. Results indicated that there nursing facilities, individual homes, pharmacies,
was no significant fifth order or fourth order physician’s offices and urgent care facilities. As
association among the five variables that were a result of the increasing nature of diagnostic
identified. However, two third order, five second tests and the more complex work environment,
order, and five first order associations were the potential for these errors has become more
found to be significant. These findings will help and more serious and life threatening [3, 4, 5, 6,
researchers to focus on those critical 7, 8]
associations and better design interventions to Over the years, studies revealed that a large
reduce nursing errors in the future. number of medical errors were preventable. For
instance, the results of one study indicated that
1. Introduction more than two-thirds of the adverse events were
preventable [4]. Another study estimated that
The healthcare industry plays an important almost half of the annual cost of 37.6 billion
role in the world. Over the years, the healthcare dollars due to medical errors was associated with
industry has achieved some success as errors that were preventable [2].
evidenced from the fact that the average lifespan Given the important role nurses play in the
worldwide during the second half of the 20th healthcare industry, more attention needs to be
century has increased by 20 years. Healthcare given to nursing errors. For instance, the factors
industry also provides employment opportunities that contribute to nursing errors need to be
for a large number of people with a variety of investigated. The causes of the nursing error
skill and educational levels. vary and usually consist of many factors. This
Among all the employees in the healthcare nature of the nursing error indicates that
industry, nurses have the largest number. Nurses multivariate statistical analysis instead of
play a crucial function in healthcare. They provide univariate statistics is needed to study nursing
patients with guidance and interpretation of orders, errors [9].

This research aims to study the relationship The types of nursing error relate to the 5 R’s --
among some key variables that were identified to right patient, right dose, right medication, right
be factors that contribute to nursing errors and route and right time – of medication
identify the key associations that need to be administration. The values that this variable
prioritized. Data from a local hospital were could have include, Medication incorrect,
collected and analyzed in this study. Omitted medication, Dose/Rate incorrect, Other,
Extra medication, Medication given to wrong
2. Methodology patient, Time that medication was administered
was incorrect, Route incorrect, Label incorrect,
2.1 Sampling and data collection and Order manipulation.
Variance categories of the error (i.e.
Nursing error data in this study were collected dispensing variance, prescribing variance, and
from a large hospital located in the southeast medication administration variance), and reasons
region. A voluntary reporting system for and contribution factors (i.e. dosage or rate
reporting medication errors was used in the incorrect, wrong medication, extra medication,
hospital. The form used in this hospital is similar label incorrect, wrong patient and etc.) for such
to the U.S. Pharmacopeia (USP) Medication variance to occur, and severity of the variance
Error Reporting Form which is commonly used were recorded in the form by the nurses. To
to report medication errors to the USP. better understand the variances defined in the
A random sample of 6000 medical variances form, Table 1 provides more detailed description
(errors) that spanned a 4 year period was of the categories of the variances. Variances are
collected from the hospital. Based on the classified into categories from A to I.
literature review and the availability of Categories A and B are used to identify process
information, the following variables were improvement issues and represent near misses.
identified and selected in the sample: (1) day of Categories E through I usually require the
the week, (2) the appearance of the medication immediate notification of risk management.
on the list of high risk medication or confused
names, (3) type of error, and (4) variance 2.2 Statistical Analysis
category. Initially reports had a time that the
error was committed but it was thought that this All the variables identified in this study were
information was incorrect so this information is discrete in nature.
no longer reported. It is assumed that the
medication error occurred on the same day that 2.2.1 Descriptive statistics
was reported.
Day of the week provides information of on Descriptive statistics for the five discrete
what day of the week the nursing error occurred. variables can be seen in Tables 2 through 6.
Its values include Sunday, Monday, Tuesday, Results indicated that the occurrence of errors is
Wednesday, Thursday, Friday, and Saturday. not evenly distributed across all the days of the
High-risk medications, as defined by the week. Wednesdays have the most number of
Institute of Safe Medication practices, are drugs errors and Sundays have the least as shown in
that bear a heightened risk of causing significant Table 2.
patient harm when it is used in error. Examples
on the list of High Risk Medications include
insulin and oxycotin, if given intravenously.
Confused drug names are those that have look-
alike and sound-alike name pairs and consists
only of those name pairs that have been involved
in medication errors published in the ISMP
Medication Safety Alert. An example of a pair
of drugs on this list includes Cedex and Cidex.

Table 1. Categories of the variances

Category A: Circumstances that have the capacity to cause error.
Category B: An error occurred but did not reach the patient.
Category C: An error occurred that reached the patient but did not cause patient harm.
Category D: An error occurred that reached the patient and required monitoring to confirm that it resulted in no harm to the patient and/or required intervention to preclude harm.
Category E: An error occurred that may have contributed to or resulted in temporary harm to the patient and required intervention.
Category F: An error occurred that may have contributed to or resulted in temporary harm to the patient and required initial or prolonged hospitalization.
Category G: An error occurred that may have contributed to or resulted in permanent patient harm.
Category H: An error occurred that required intervention necessary to sustain life.
Category I: An error occurred that may have contributed to or resulted in the patient’s death.

Table 2 Descriptive statistics for day of the week

Day of the week   Frequency   Proportion
Sunday            544         9.1%
Monday            932         15.5%
Tuesday           1013        16.9%
Wednesday         1073        17.9%
Thursday          965         16.1%
Friday            845         14.1%
Saturday          633         10.5%
Total             6005        100%

Table 3 Descriptive statistics for type of error

Type of error              Frequency   Proportion
1 = Medication Incorrect   879         14.6%
2 = Omitted Med            1345        22.4%
3 = Dose/Rate Incorrect    1456        24.2%
4 = Other                  448         7.5%
5 = Extra Med              521         8.7%
6 = Wrong Patient          642         10.7%
7 = Time Incorrect         257         4.3%
8 = Route Incorrect        167         2.8%
9 = Label Incorrect        117         1.9%
10 = Order Manipulation    173         2.9%
Total                      6005        100%

Regarding the type of errors, three error types
contributed more than three-fifths of the total
errors (61.2 percent) as indicated in Table 3.
These three error types were incorrect
medication, omitted medication and incorrect
dose or rate.

As reported in Table 4, less than one eighth of
the errors involved high risk medications (11.7
percent).
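As a small worked example (not part of the study's analysis), the proportion column of Table 2 can be recomputed directly from the reported frequencies:

# Sketch only: recomputing the Table 2 proportions from the reported frequencies.
day_counts = {
    "Sunday": 544, "Monday": 932, "Tuesday": 1013, "Wednesday": 1073,
    "Thursday": 965, "Friday": 845, "Saturday": 633,
}
total = sum(day_counts.values())          # 6005 reported variances
for day, count in day_counts.items():
    print(f"{day:9s} {count:4d}  {100 * count / total:4.1f}%")
print(f"Total     {total}  100%")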

In terms of variance, category C has the
largest frequency as shown in Table 6.
Table 5 showed that about three fourth of the
errors involved confused name medication.

Table 4 Descriptive statistics for high risk medication

High Risk Medication   Frequency   Proportion
0 = Low                5301        88.3%
1 = High               704         11.7%
Total                  6005        100%

Table 5 Descriptive statistics for variance category

Variance category   Frequency   Proportion
1 = A               306         5.1%
2 = B               2026        33.7%
3 = C               2567        42.7%
4 = D               976         16.3%
5 = E               85          1.4%
6 = F               35          0.6%
7 = G               6           0.1%
8 = H               4           0.1%
Total               6005        100.00%

Table 6 Descriptive statistics for confused names

Confused name medication   Frequency   Proportion
0 = Yes                    4348        72.4%
1 = No                     1657        27.6%
Total                      6005        100.00%

2.2.2 Inferential statistics

A five way exploratory multi-way frequency
analysis was performed to develop a hierarchical
log-linear model that explains the relationship
between five variables: day of the week, high
risk medication, confused name medication, type
of error, and variance.
Multiway frequency analysis is a multivariate
statistical technique that is used to discover if
there is an association among the discrete
variables and to develop a model of expected
cell frequencies that best predicts the observed
frequencies with the minimum number of
associations.
Since there are five variables in this research,
thirty one (1 fifth order association, 5 fourth
order association, 10 third order association, 10
second order association, and 5 first order
association) hypotheses were tested. For
instance, the following hypotheses were used to
test the fifth order relationship:
H0: There is no relationship among day of the
week, high risk medication, confused name
medication, type of error, and variance category.
Ha: There exists a relationship among these
variables.

3. Results

A multiway frequency analysis using the
maximum likelihood ratio statistic (G2) was
used to analyze the data. Results showed that
there was neither significant fifth order
association among the five variables nor any
fourth order association among them.
However, there were significant associations
among day of the week, high risk medication,
and type of errors (G2=287.85, p<0.001), and
among confused name, high risk medication,
and variance category (G2=217.34, p<0.001).
At the 0.05 level of significance, the second
order effect tests revealed that there were
significant associations between high risk
medication and type of error, confused name
medication and high risk medication, day of the
week and type of error, high risk medication and
confused name medication, and high risk
medication and variance category.
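For readers unfamiliar with the likelihood-ratio statistic G2 used above, the following Python sketch computes G2 = 2·Σ O·ln(O/E) for a simple test of independence between two discrete variables; the two-way table counts are invented for illustration and are not the study's data.

# Sketch only: likelihood-ratio (G^2) test of independence for a two-way table.
import math

observed = [[1300, 4000],   # low-risk medication: dose/rate errors, other errors (invented counts)
            [160, 540]]     # high-risk medication: dose/rate errors, other errors (invented counts)

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

g2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        g2 += 2.0 * obs * math.log(obs / expected)

df = (len(observed) - 1) * (len(observed[0]) - 1)
print(f"G^2 = {g2:.2f} with {df} degree(s) of freedom")
# Compare G^2 against the chi-square critical value for df degrees of freedom
# (for example, 3.84 at the 0.05 level for df = 1) to judge significance.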

In addition, all the first order effects were medication safety system – preventing high risk
medication errors at the point of care,”
found to be significant. Journal of Nursing Administration, 34, pp. 437-439.

4. Discussion [6] J. Narumi, S. Miyazawa, H. Miyata, A. Suzuki,A.


Kohsaka, and H. Kosugi, (1999) “Analysis of human
error in nursing care,” Accident Analysis and
One of the important findings of this research Prevention 31, pp. 625-629.
is that there was no significant fifth order or
[7] M.C., Balas, L. Scott, and A.E. Rogers, (2004)
fourth order association among the five variables “The Prevalence and Nature of Errors and Near
studied. However, significant effects were found Errors Reported by Hospital Nurses,” pp. 224-230.
for two significant third order associations, five
second order associations and five first order [8] J. Osborne, K. Blais, and J. Hayes, (1999)
“Nurses’ perceptions: When is it a medication error?”
associations. These findings will help Journal of Nursing Administration, 29(4), pp. 33-38.
researchers to focus on those related variables in
order to develop interventions to reduce nursing [9] B. Tabachnick, and L. Fidell, (2001), “Using
Multivariate Statistics,” 4th edition. Allyn & Bacon,
errors. Needham Heights, MA.
Another thing that needs to be pointed out is
that multiway frequency analysis can also
provide insight to the nature of the relationship.
For instance, results from the multiway
frequency analysis can help predict which
category of a variable a case of nursing error
will fall into if a category of another variable
this case falls into is given. Such information,
then, will provide assistance and guide the
researchers to conduct further analysis.

5. References

[1] Bureau of Labor Statistics, U.S. Department of


Labor, “Census of Employment and Wages,” http://0-
www.statisticalwarehouse.com.sheba.ncat.edu/
Last retrieved on 5/19/2007.

[2] L. Kohn, J. Corrigan, and M. Donaldson, eds,


(2000) “To err is human: building a safer health
system,” Institute of Medicine.

[3] J., Ioannidis, and J. Lau, (2001) “Evidence on


interventions to reduce medical errors,” Journal of
General Internal Medicine, 16, pp. 325-334.

[4] L., Leape, A., Lawthers, T. Brennan, and W.


Johnson, (1993) “Preventing Medical Injury,”
Quality Review Bulletin, 19, pp.144-149.

[5] E., Lambert, (1978), “Modern Medical Mistakes,”


University Press, Bloomington, Indiana.
[6] J. Reinertsen, (2000) “Lets talk about error,”
British Medical Journal, 320, pp. 730-735.

[7] I., Hatcher, M. Sullivan, J. Hutchinson, S.


Thurman, and F. Gaffney, (2006) “An intravenous

Implementing Innovative Performance Improvement Practices:


Effects of Workplace Deception
Jerry W. Koehler Thomas W. Philippe
University of South Florida University of Houston, Clear Lake
jkoehler@coba.usf.edu philippet@uhcl.edu

Abstract overcome organization cultures that foster


deception in the workplace.
Successfully implementing The purpose of this article is to develop
innovative performance improvement practices a theory to explain why deception is a
in the United States has been a difficult journey. significant organization behavior variable that is
Significant resistance to quality initiatives have organized in US organization cultures that
been reported by researchers and practitioners prevents innovative performance improvement
alike. They have attempted to identify causes of practices from being successful, we will first
resistance, such as the lack of top management define workplace deceptions. Second, identify
commitment. This article posits that the the problem. Third, advance a hypothesis.
primary cause for the lack of success in Fourth, provide a discussion supporting the
American companies is due to cultures that are hypothesis, followed by a summary and
embedded with workplace deception. And the suggestions for future research.
first priority for companies in the United States
is to develop cultures that remove the presence 2. Defining of Workplace Deception
of workplace deception.
Workplace deception is a broad term
1. Introduction that signifies the intentional misleading of
another causing them to believe something that
Although companies in the United is not true it is (Grover, 1993, Bok, 1978). In the
States have adopted innovative performance workplace, an employee intentionally deceives
improvement practices, why have practices such management and/or another worker for the
as, quality circles, total quality management, six purpose of deflecting the truth in order to protect
sigma, Lean, etc., received mixed results and themselves from embarrassment and/or the
even failure? Surveys conducted of Gittleman cause of an error. Such behavior is quite
et.a.l. (1998) and Osterman (2000) found that common in the workplace primarily to protect
while organization’s innovative work practices, employees from being blamed for a mistake.
they are “still far from being widely accepted,” Deception is sometimes associated with
(Erickson and Jacoby; 2003). From their a more harsh definition that includes “lying”
research and theoretical perspective, Erickson (BOK, 1978). In the workplace however,
and Jacoby (2000), “employee networks - - deception is not likely to associate to be
social ties to individual outside the associated with plain lying. For example, we
establishment - - play an important role in (Phillipe and Koehler, 2005) many subjects were
innovation diffusion and organizational comfortable with not coming forth with the
learning”. We do not take issue with the whole truth. As one manager stated, “you would
Erickson and Jacoby findings. Establishing solid have to be fool to tell them everything.” Another
networks both outside and inside the company employee stated, “What management doesn’t
can be very helpful in creating successful work know won’t hurt them.”
practices. However, we maintain that the real
challenge to implementing successful innovative
performance improvement practices is to

3. Issues disclose detrimental information. It appears that


deception in the workplace is acceptable in some
Innovative performance improvement situations, but not in others. This attitude creates
practices require no intentional deceptions by a culture that often makes it extremely difficult
management and employees. Success is not for organizations to be successful in achieving a
achieved with smoke and mirrors, half truth and quality culture where any deception is contrary
acts that cause another to be mislead. In fact, it to implementing a quality Philosophy.
is just the opposite. For quality improvement to This article advances that one of the
be successful, truth at all levels of the most significant causes of failing to implement
organization must be achieved. There can be no successful innovative performance improvement
inconsistency between one’s professed standards practices is workplace deception. We
and the operating standard’s. Newman (1986) hypothesize that creating a culture absent of
asserts that such inconsistency between exposed workplace deception is a required for successful
commitment not reflected by corresponding implementation of innovative performance
actions creates disconnects that causes improvement practices.
organization members to question their
credibility. “Disconnects” in performance 5. Discussion
improvement initiatives are absolutely deadly.
The effects of disconnects create a culture of Organization culture is defined as a
mistrust and is likely to prevent quality shared set of beliefs, values, assumptions and
initiatives from being successful (Philippe and patterns of behavior (Ouchi, 1981, p.150). Deal
Koehler 2005). and Kennedy (1982, p.4) regard corporate
In a nutshell, the problem facing culture as the cohesion of values, myths, heroes,
companies in the United States from rituals, and symbols, that have come to mean the
successfully implementing innovative essence of the company. Strong organizational
performance practices is that many organization culture reduces organizational uncertainties and
continue to foster a culture of workplace deception. rituals, and symbols, that have come to mean the
Workplace deception cultures emerged early in essence of the company. Strong organizational
the past century, when organization cultures culture reduces organizational uncertainties and
were manifested by management versus labor. facilitates a common explanation of the system
among the organization’s employees. It creates a
common social order in which each person has a
Also, the culture clear expectation of his duties. Further it
in the United States is very individual oriented implementing a quality initiative, a strong
and tends to be more tolerable of deception culture of common workplace deception will
methods as opposed to a collective orientation become an obstacle and will lead employees to
that prevail in other culture. (XXXX) perceive a disconnection between what
management says and what they actually do.
4. Hypothesis Cameron and Quinn (1999) report there
are several “scientific studies that suggest a
When making a hotel reservation, positive relationship between dimensions of
obtaining a credit card, securing a bank loan, organizational culture and organizational
purchasing a car, etc., should the organization’s effectiveness (1999, p.5). The effects of the
representative disclose hidden chargers? In all organizational culture on individuals are well
likelihood, the policy of the organization is: “If a documented (Kozloski, Chao, Smith and
person doesn’t specifically ask, don’t disclose Hedlund, 1993). When individual and the
“hidden” information.” In fact, an employee organization values are harmonious, a
would likely be reprimanded for volunteering tremendous amount of energy is created. On the
information. Management creates a double other hand, when values such a quality are not
standard by requiring employees to fully harmonious, commitment and productivity are
threatened. Inconsistency or incongruence over

quality values results in conflict, false However, management by fact is not often
expectations and diminished capacity (Kouzes achieved. It is more likely that employees in a
and Pozner, 1993, p.121). Kotter and Heskett deceptive workplace culture will ignore the facts
(1992) argue that organizations with adaptive, or provide deceptive explanations to mitigate
performance enhancing cultures out performed negative reactions. Bies, Shapiro, and
non-adaptive, unhealthy ones precisely because Cummings (1998) and Shapirom Buttner and
of the emphasis on attending to their Barry (1994) found that Lies, excuses and
constituencies. These constituencies can be stories are used to reduce conflict and mitigate
described as the organization’s customers, negative effects of a decision or action. When
stockholders and employees. They found, by failures or harmful outcomes occur, those
contrast, that in organizations with non-adaptive involved may lose a great deal of status, power,
and unhealthy cultures, most managers are credibility and their competence may come in to
motivated by self interest and deception question (Sitkin & Bies, 1993; Sutton &
becomes a common organization behavior. Callahan, 1987). Deceptive explanations can
In our studies on hypocrisy (Phillipe and serve to plausibly deny responsibility for the
Koehler, 2005) we found promoting self problem (Sitkin & Bies, 1993; Browning &
interests tend to be common and perceived to be Folger, 1993). However, such explanations only
acceptable. Our findings are supported by prevent success in implementing quality goals.
agency theory which assumes that individual The seduction of misleading excuses or
behavior is guided by self-interest (Eisenhardt, conceiving stories to protect one’s status can
1989). Conflict emerges when the agent cause serious harm to the organization.
interprets the situation differently than the advance that change initiatives must be led by
principal and feels it is acceptable to protect advance that change initiatives must be led to
their self-interest. Further, our finding is top management. Top management actions have
supported by ethical ambivalence theory (Jansen to be consistent with their exposed views. We
and Gilnow, 1985). This theory assumes that argue that even when top management is on
individual ethical behavior is a function of self- aboard; it does not create a culture absent of
interest. Deception, although unethical, appears deception in the workplace. Top management
to be acceptable if it is in your self-interest, e.g., actions are pivotal, but do not overcome a
ENRON. culture that foster deception. For top
Kouzes & Posner (1993) Cite a survey management to expose quality and to ignore
conducted for the American Society of Quality fundamental assumptions made by employees is
Control (ASQC) by the Gallup organization. a mistake. The first step in initiating innovative
Fifty-five percent of employees surveyed performance improvement practices is to create
reported that their companies stated that it was a culture where employee deception is not a
extremely important to show their customers common organization behavior. When cultures
that they are committed to quality. However, are able to cultivate an environment of openness
only 36 percent of the respondents stated that the and honesty where removal of blame and
company followed through extremely well on retribution are absent. Organizations that
this commitment. When asked if their companies remove workplace deception are more likely to
considered making quality everyone’s top be successful in creating cultures that achieve
priority as extremely important, 53 percent innovative performance improvement practices.
stated yes. But only 35 percent said the company
was actually doing very well in that arena (1993, 6. Summary and Future Research
p.39). When the perception of leadership action
falls behind leadership promises, the credibility This article hypothesizes that creating a
gap widens. This gap induces perceptions of culture that minimizes workplace deception is a
workplace deception. requirement for successfully implementing
Quality initiatives are likely to be innovative performance improvement practices.
successful when employees manage by facts. It provides reasoning and research to support our
This principle is often cited by organizations. hypothesis. It suggests the companies in the

United States must meet the challenge of Erickson, C.L. and Jacoby, S.M. (2003). The effects
removing workplace deception in their culture if of Employee Networks on Workplace Innovation and
they are going to be successful in implementing Training. Industrial and Labor Relations Review, Vol
performance improvement initiatives such as 56. No. 2
Total Quality Management, Six Sigma, and
Erickson, C.L. and Jacoby, S.M. with Copeland, R.,
Lean.
Goldschmidt and Tropher, J. (1998) “Training and
Work Organization Practices of Private Employers in
Future research should focus on the California.” Policy Research Report, California
development of a workplace deception Policy Seminar, Berkeley.
measurement instrument once the instrument is
validated, a test of the proposed hypothesis Grover, S.L. 1993, Lying Deceit and Subterfuge: A
could be tested by administering the instrument model of dishonesty in the workplace. Organization
in companies that have been successful and Science, 4, 478-795.
unsuccessful in implementing innovative
performance improvement practices. If the Gittleman, M. Horrigan, M. & Joyce, M. (1998).
proposed theory is correct companies that are “’Flexible’ Workplace Practices: Evidence from a
successful should have cultures that are Nationally Representative Survey.” Industrial and
significantly less in workplace deception. Labor Relations Review, Vol. 52, No.1 (October), pp.
99-115.

7 References Jansen, E. & Von Gilnow, M.A. 1985. Ethical


ambivalence and organizational reward systems.
Bies, R.J., Shapiro, D.L., & Cummings, L.L. 1988. Academy of Management Review. 10, 814-822.
Causal accounts and managing organizational
conflict. Communication Research, 15, 381-399. Osterman, P. 2000. “Work Reorganization in an Era
of Restructuring: Trends in Diffusion and Effects on
Bies, R.J. & Shapiro, D.L. 1998. Voiced and Employee Welfare.” Industrial and Labor Relations
Justification: Their influence on procedural fairness Review, Vol. 53, No. 2 (January), pp. 179-96.
judgments. Academy of Management Journal, 31,
676-685. Phillippe, T.W. and Koehler, J.W. 2005. A Factor
Analytical Study of Perceived Organizational
Bok, S. 1978. Lying: Moral choice in public and Hypocrisy. SAM Advanced Management Journal.
private life, New York: Random House. 70(2): 13-20.

Eisenhardt, K.M. 1989. Agency theory: As Shapiro, D.L. 1991. The effects of explanations on
assessment and review. Academy of Management negative reactions to deceit. Administrative Science
Review. 14, 57-74 Quarterly. 36, 614-630.

Developing an Information Strategy for Civil Aviation Competitiveness: The Joint Technical Data Integration (JTDI) Project at Morgan State University

Cynthia Brown-LaVeist, S.K. Hargrove, J. Doswell, Deborah Ihezie
Morgan State University, Juxtopia
LaVeistinc@netzero.com

Abstract

The objective of the Joint Technical Data Integration (JTDI) project at Morgan State University is to demonstrate specific Department of Defense advances in aviation maintenance and aircraft availability in ways that would benefit the civil aviation community. This paper describes a research strategy to configure an information and system architecture to assist the civil aviation industry through a collaborative project of academia, industry, and government.

1. Introduction

The aviation industry utilizes vast amounts of paper aircraft maintenance documents. Conversion to a paperless environment provides maintenance data that is more organized and easily accessible. The Joint Technical Data Integration (JTDI) system is a three-tiered, web-enabled Delivery Management System that provides updated maintenance information to individuals in the field of aviation. The United States military utilizes JTDI to deliver Interactive Electronic Technical Manuals (IETMs) to its pilots. These electronic manuals are accessed on demand through Portable Electronic Display Devices (PEDDs) rather than pulled from stacks of paper manuals [1].

2. Background

The Joint Technical Data Integration (JTDI) system is a three-tiered, web-enabled Delivery Management System that provides updated maintenance information to individuals in the field of aviation. The JTDI system was developed by the Naval Aviation Systems Command in 1998 to reduce reliance on paper-based aviation maintenance processes. The military wanted a system to improve communication and data reliability among support groups. These groups consisted of weapon systems program offices and soldiers [1].

The key components of the JTDI system are:

• Deployable caching servers. These servers provide automatic technical manual data updates.

• Deployable test systems. These systems receive technical manual updates from a caching server and supply the manual information to the Portable Electronic Display Devices (PEDDs) at the worksite.

• Portable Electronic Display Devices (PEDDs), also known as Portable Electronic Maintenance Aids (PEMAs), which display the technical manual information to the maintainer at the worksite.

• TechCam capabilities. Real-time collaboration tools to communicate with technical representatives and subject matter experts.

The JTDI system design utilizes Commercial Off the Shelf (COTS) hardware and software products, which can be easily upgraded over time.

Through the efforts of the Morgan State University School of Engineering, the Intergraph Corporation, and the NASA-funded Chesapeake Information Based Aeronautics Consortium (CIBAC), the JTDI system was implemented at Morgan State University in December 2007. A second Morgan State JTDI system was implemented at the Intergraph Corporation in Patuxent River, Maryland.

3. Objective

The research objective is to determine what roles the JTDI technology can play in civil aviation. JTDI can impact flight safety when integrated with civil aviation maintenance. Examples are:

• Increase the effectiveness of information through Dynamic Personalized Data Delivery. JTDI can provide the right data to the right user at the right time.

• Enable continuous access to aviation/aircraft/safety data by aircraft operators and maintainers.

• Provide aircraft operators and maintainers with powerful search tools so they can effectively execute their mission.

Through research of the civil aviation maintenance process with the FAA and civil airline companies, a determination of the best adaptation of the JTDI system for civil aviation will be obtained.

4. Morgan State JTDI Implementation

Morgan State University has two JTDI systems. One system is located at the Morgan State University School of Engineering in Baltimore, MD. The second JTDI system is located at the Intergraph Corporation in Patuxent River, Maryland. These redundant systems are implemented in three tiers. The top, middle, and PEMA level tiers synchronize to provide the maintenance data.

Figure 1. The architecture of the JTDI system

Figure 1 displays three levels: the Data Owners, JTDI (Top Tier and Mid Tier), and JTDI Users. The Data Owners provide the data for the system at the top tier. The Subject Matter Experts (SMEs) provide supplemental knowledge for the system data via video conferencing at the top tier. The top tier also provides a system help desk. The JTDI Users are the on-site employees who are available to provide the updated data to the war fighters [2].

The top tier capabilities are:

• JTDI Top-Tiers are located at Morgan State University and Intergraph at Patuxent River Naval Station, MD

• JTDI Top-Tiers have tech data caching capacity

• Top-Tiers are directly accessible via web-based interfaces

• Provides TechCam remote support capability

• Provides search capability or direct synchronization via PEMA or work station

• Same standard look and feel on the Mid-Tier/Top-Tier for each weapon system

• Weapon system oriented Web sites with standard look and feel for each weapon system

Figure 2. JTDI website [4]

The JTDI mid-tier is optional in a JTDI implementation. The mid-tier capabilities are:

• JTDI Top-Tier and Mid-Tier synchronize data
  – Data delivered per site needs
  – Tech data library subscriptions and weapon systems at a given site

• Mid-Tier is accessible via a local LAN

The PEMA level provides access to the JTDI system IETMs via the Deployable Test System. Deployable Test Systems are configured to contain a wireless antenna, a router, a local caching server, and PEMAs (ruggedized laptops). The PEMA level capabilities are:

• Provide wireless communication of data to the PEMA.

• Provide caching of data from the top and mid tiers.

• Provide TechCam support to maintainers.

Figure 3. JTDI Deployable Test System (PEMA level)

Currently under research is an addition to the JTDI PEMA level which would provide maintenance and TechCam information to wireless, wearable head-mounted displays. These goggles would provide hands-free maintenance information to the maintainer.

The data flow process of the JTDI system proceeds in the following steps. The data owner transfers the data to the top tier server via e-mail, CD, or maintenance websites. The data owner gains access to the top tier through a web-based user interface. The top tier then caches the data from the data owner so it can be transported. Once the data is stored on the top tier, JK Update software from the Joint Knowledge Caching Server (JKCS) pulls the information down from the top tier and transfers it to the mid tier. The mid tier server then synchronizes the data from the top tier in order for the PEMAs to access the data. The mid tier has an Internet Information Services web service that acts as a host server for all the data files. The Coveo Enterprise Search™ engine allows access to the data on the mid tier. The JK Update software transfers the data from the mid tier to the lower tier. The PEMAs gain access to the updated data via the Coveo Search™ engine. Users can detach the device from the network and use it for their on-site needs [2].
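The tiered pull-and-cache pattern described above can be summarized in a short sketch. The snippet below is illustrative only: the class and method names (TopTier, MidTier, Pema, pull_updates, and so on) are hypothetical stand-ins and do not reflect the actual JK Update or Coveo interfaces.

```python
# Minimal illustrative sketch of the three-tier pull/cache flow described above.
# All names here are hypothetical; they are not the real JTDI/JK Update APIs.

class TopTier:
    def __init__(self):
        self.cache = {}                      # technical manuals cached from data owners

    def receive_from_owner(self, manual_id, content):
        self.cache[manual_id] = content      # data owner uploads via the web interface


class MidTier:
    def __init__(self, top_tier):
        self.top_tier = top_tier
        self.cache = {}

    def pull_updates(self):
        # "JK Update"-style step: pull updated content from the tier above
        self.cache.update(self.top_tier.cache)


class Pema:
    """Portable Electronic Maintenance Aid (ruggedized laptop) at the worksite."""
    def __init__(self, mid_tier):
        self.mid_tier = mid_tier
        self.local_copy = {}

    def sync(self):
        self.local_copy.update(self.mid_tier.cache)

    def search(self, keyword):
        # stand-in for the search step; the real system uses a commercial engine
        return [m for m, text in self.local_copy.items() if keyword in text]


top = TopTier()
top.receive_from_owner("engine-manual", "turbine inspection procedure ...")
mid = MidTier(top)
mid.pull_updates()
pema = Pema(mid)
pema.sync()                                  # the device can now be detached and used offline
print(pema.search("turbine"))
```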

5. Methodology

Utilize a systems engineering approach.

Figure 4. System Development Lifecycle [5]

To research the civil aviation maintenance process and determine the best adaptation of the JTDI system for civil aviation, a systems engineering approach will be used (see Figure 4). This approach requires the standard phases of analysis, design, development, and implementation, and incorporates review steps at the end of each phase.

Documentation of the current civil aviation maintenance process will be obtained from the FAA and civil aviation companies. The FAA approves the maintenance document procedures for all civil aviation companies.

A requirements analysis of an adapted JTDI system will be performed. The requirements document will be used for the design of an adapted JTDI system.

Lean Six Sigma tools will be incorporated at each phase to enhance the research.

6. Preliminary Results

After initial research with the FAA, it was determined that the FAA approves all civil aircraft maintenance programs for carriers. The Federal Aviation Regulations (FARs) for civil aircraft maintenance are:

• Part 121 – Operating Requirements: Domestic, Flag, and Supplemental Operations.

• Part 125 – Certification and Operations: Airplanes Having a Seating Capacity of 20 or More Passengers or a Payload Capacity of 6,000 Pounds or More.

• Part 135 – Operating Requirements: Commuter and On Demand Operations.

The FAA states that maintenance document procedures for civil aviation vary from carrier to carrier:

• Some carriers are fully automated for maintenance documents.

• Some carriers are still using paper methods.

• Many carriers are in transition from paper methods to automated methods.

The cost of a "civil" JTDI system will be a major consideration. The cost of deployment of the JTDI system was approximately $1 million, and it is expected that many carriers cannot afford this cost.

During the requirements and design phase, two paths will be considered:

• "Right Sizing JTDI" – The JTDI software, software processes, and hardware can be right-sized to accommodate a more reasonable cost.

• "JTDI Service" – The current JTDI system can provide maintenance data to carriers as a service. The Morgan State JTDI systems would be approved at the appropriate security levels, and maintenance data would be provided to all approved requestors.

The maintainer's equipment is another consideration. Currently JTDI provides maintenance manuals on PEMA (laptop) devices. Although PEMAs provide a paperless method of obtaining information, the maintainer can experience mobility limitations with PEMAs when servicing parts of an aircraft, such as the wing area. Hands-free devices for supplying data to maintainers, such as Head Mounted Displays (HMDs), will be researched.

REFERENCES

[1] Nathan Jones, "The Investigation of the JTDI's 3-Tiered Hardware Architecture for the Utilization of Data Acquisition and Abstraction of Aviation Information", 2007 CIBAC Summer Institute Research Reports, August 2007.

[2] Eddie Boyd, "The Investigation of Three-Tiered COTS Software Technology for Utilization of Data Acquisition and Abstraction of Aviation Data", 2007 CIBAC Summer Institute Research Reports, August 2007.

[3] S. Cummings, "Joint Technical Data Integration Knowledge as a Force Multiplier", The DISAM Journal, Summer 2000, pp. 57-59.

[4] Joint Technical Data Integration, https://jatdi.redstone.army.mil/

[5] The System Development Lifecycle, http://ww2.cis.temple.edu/barampublic/sdlc.jpg

Dimensions of Service Quality for TV Satellite Channel Programs: A Framework for Quality Improvement

Ashraf H. Galal, Qatar University, a.galal@qu.edu.qa
Tamer A. Mohamed, British University in Egypt, tamer.mohamed@bue.edu.eg

Abstract

There is no doubt that nowadays we are living in the era of TV satellite channels. Mass media has been transformed from a communication medium to an enormous service industry that needs to be monitored, controlled, and improved. It is taken for granted that these channels, whether governmental or commercial, are driven by customer (viewer) orientation. All these channels are seeking to develop and improve their quality, but what type of quality? To what extent? And how? This paper seeks to explore and develop a deeper understanding of the service quality dimensions of satellite channel programs. These include Attractiveness; Clarity; Diversity of Contents; Freedom; Transparency; Balance; Convenience for Values and Traditions; Accuracy; Respect; Responsibility and Accountability; Usefulness; and Persuasiveness. These dimensions can then be utilized by satellite channels' top management to enhance the quality of the broadcast programs and thus achieve better customer (viewer) satisfaction. Utilizing these dimensions, this paper also presents a framework that can be used to improve the service quality of TV satellite channel programs.

1. Introduction

Satellite channel service is becoming increasingly crucial to people and organizations, both for entertainment and for information retrieval, as one of the basic forms of mass media. Few authors have explained TV satellite channel service in a process framework where it can be monitored, controlled, and thus improved. This makes it difficult and sometimes impossible to manage.

A number of authors have explained different quality systems in the form of quality dimensions which constitute the overall perception of the word quality. These include service systems as well as product manufacturing systems; however, this paper focuses on service systems.

Garvin [3, 4, and 5] was among the most significant researchers in the area of defining dimensions for quality. He identified eight competitive dimensions of quality that could guide companies and institutions in tackling their own quality: performance, features, reliability, conformance, durability, serviceability, aesthetics, and perceived quality. Garvin also declared that by focusing on these dimensions, any company or institution can differentiate itself from its competitors. Other scholars and researchers framed the word quality in the form of a number of dimensions based on their perception of the expression, as follows.

Minjoon et al. [9] understood the importance of in-depth investigation of what customers perceive to be the key dimensions of service quality and what impacts the identified dimensions have on customer satisfaction. The statistical analysis in their study identified a total of four key online banking service quality dimensions: courtesy, attentiveness/attractiveness, reliable/prompt service, and content.

Sebastianelli et al. [12] pointed out that the factors (dimensions) which affect the quality of online retailing include: homepage, online product catalogue, order form, and customer service and support.

Other researchers pointed out dimensions such as reliability, convenience, diversity, availability, responsiveness, empathy, post-service, and security, which affect the attraction of potential consumers to online retailing [15].

Hernon and Calvert [7] explored the features and dimensions of library e-service quality among university students. These dimensions include: ease of use; web site aesthetics; linkage; collections; reliability; support; security/privacy/trust; ease of access; flexibility; and customization/personalization.

Ma et al. [8] identified the dimensions of service quality for the application service provider industry through both qualitative and quantitative approaches. Seven dimensions were identified, namely features; availability; reliability; assurance; empathy; conformance; and security.

Srikatanyoo and Gnoth [13] studied the quality dimensions that drive students' choice of international tertiary education providers among prospective Thai students. These dimensions were: academic and supporting facilities, academic staff performance, environmental conditions, entry requirements, academic reputation of a country, and academic reputation of domestic institutions.

Bedi and Gaur [2] defined a quality model that addresses various dimensions of quality and proposes a method to evaluate the quality of multi-agent distributed software systems. This was based on a quality factor-criteria relationship.

Harroff and Valentine [6] explored and developed a thorough understanding of the dimensions of Web-based adult education. Through their study, dimensions of program quality were found to be quality of instruction, quality of administrative recognition, quality of advisement, quality of technical support, quality of advance information, and quality of course evaluation.

Agus et al. [1] defined the dimensions of service quality in the Malaysian public service sector. Nine of the ten original service dimensions identified by Parasuraman et al. [10] were utilized to relate to distinctive features of public-service quality. These dimensions are tangibles; reliability; responsiveness; competence; courtesy; credibility; access; communication; and understanding the customer. Defining the above dimensions made it easy to establish relationships among service quality, quality performance, and customer satisfaction. This helps to highlight the need for customer-oriented approaches that focus on improving the delivery of public-service quality.

Rao and Osei-Bryson [11] investigated dimensions that can be used to measure the quality of a knowledge management system and to compare knowledge management system quality across systems. The dimensions were grouped into ontology dimensions, knowledge item dimensions, knowledge retainer dimensions, and knowledge usage dimensions. Under each of these groups, a number of dimensions were presented with a definition and a way of assessment or measurement. These sets of quality dimensions appear to offer the possibility of many benefits that will help to improve the quality of service for such systems.

From the above literature, it is clear that defining service quality in the form of dimensions can help achieve better quality management, improve quality, and thus attain customer satisfaction.

The purpose of this study is to identify and define the basic dimensions of service quality in TV satellite channel programs, i.e., to explore the different dimensions that are critical to service quality in satellite channel programs, together with an appropriate way of assessment or measurement. Defining the quality dimensions is the first step in a framework for quality improvement of satellite channel program service. This framework is proposed in the context of the paper to assist in monitoring, controlling, and improving satellite channel program service quality. The following section presents these quality dimensions. The quality improvement framework for satellite channel programs is shown in Section 3, followed by conclusions and future research in Section 4 and references in Section 5.

2. Multiple Dimensions of Service Quality for TV Satellite Channel Programs

In the literature, there are no specific definitions that match all of the TV satellite channel program service quality dimensions. A review of related literature in the area of service quality dimensions, together with interviews with mass media experts, identifies the following dimensions that define the word quality in the context of TV satellite channel programs: Attractiveness; Clarity; Diversity of Contents; Freedom; Transparency; Balance; Convenience for Values and Traditions; Accuracy; Respect; Responsibility and Accountability; Usefulness; and Persuasiveness. The authors of this paper try to state (or restate) the quality dimension definitions to match the service quality of satellite channels. In what follows, these dimensions are explained in more detail.

2.1. Attractiveness

This includes the screen uniqueness and creativeness in terms of structuring, organizing, lighting, quality of reception, language, way of presentation, usage of special sounds and music/visual effects, and editing techniques (linear or non-linear). This should be fulfilled in the channel program, in the promo about the programs, or about the channel itself.

2.2. Clarity

This is a dual concept. One aspect relates to the format, which has already been discussed in the previous dimension (attractiveness). The other is discussed in terms of: clarity of the idea; clarity of questions (simple/complicated/sophisticated); clarity of answers; announcer probing of the guest's answers; and clarity of the conclusion.

2.3. Diversity of content

This dimension tries to measure the extent to which the content of TV satellite channel programs is diversified both demographically and psychographically.

Demographic diversity includes: gender (M/F); age (children/teenager/adult/mature/elderly); area (rural/urban/cosmopolitan); income (low/moderate/high); occupation (professional/non-professional); education (illiterate/moderately educated/educated/highly educated); and SES (low profile/moderate/high).

Psychographic diversity includes leisure time; life style; wishes; hobbies; traditions; values; motives; drives; incentives; and instincts.

2.4. Freedom

This dimension tries to assess the many other items that contribute to the end result of freedom, including:
- Areas, topics, and issues of discussion which are allowed/not allowed;
- Guests or speakers who are allowed/not allowed;
- Types of guests in terms of their position (official/governmental guests, current/former guests, opposition figures, public figures, artists, stars, etc.);
- On-air or recorded programs;
- The extent of cutting content from recorded programs;
- Allowance of receiving telephone calls/e-mails/faxes on air.

2.5. Transparency

This is one of the most important dimensions due to the difficulty of assessment because of its intangibility. This item can be measured in terms of: freedom from interest or benefit on the part of the program crew in covering one issue more than another; honesty in selecting and covering issues in terms of professional criteria; freedom from any favoritism in selecting figures/guest speakers who participate in the programs; and, finally, willingness to announce facts, figures, evidence, proofs, etc. as they are.

2.6. Balance

To what extent is the channel balanced in dealing with the following areas: rights of reply; rights of correction; representation of different intellectual schools (capitalism, communism, socialism, Islamic, etc.); and representation of minorities.

2.7. Convenience for values and traditions

This dimension is very difficult to assess because it is a highly relative item. What is accepted in a certain time, environment, or culture may be rejected in another. The authors of this paper depend on measuring this item from the customer/viewer's personal point of view; for example, the extent to which the customer considers a specific channel to be for or against his/her values and traditions.

2.8. Accuracy

This dimension reflects the extent to which the satellite channel respects the viewer/customer in the following items: the accuracy of the time and duration of the programs; the commitment to the program plan; the interventions of advertisements, which may interrupt or add noise to the viewing process; and the accuracy in covering topics and issues.

2.9. Respect

This dimension measures how much the channel respects the audience's mentality in mentioning figures and numbers, receiving real or artificial mail or calls, utilizing artificial or real guests, and forcing speakers to state pre-specified opinions.

2.10. Responsibility and accountability

It is not enough for the satellite channel to be responsible for its programs; it must also be accountable for all the content of the broadcast programs. Responsibility and accountability are the other face of the coin of freedom because, if the channel is targeting freedom, it must be responsible enough for all of its broadcast programs. Intensive effort should be made to assess the following aspects: the types of discussed topics/issues in terms of professional criteria; commitment to the channel's vision and mission; achievement of the strategic goals of the channel; bias towards audience interest; keenness of the channel to achieve significant solutions for different customer problems; and monitoring and evaluating its performance.

2.11. Usefulness

If all of the above dimensions are fulfilled, the channel will be useful for all viewers, but some items could be elaborated as a measure of usefulness. This includes the extent to which the channel programs achieve the communication functions, starting from the functional model [14] to the modern functional school of communication, which covers the following items: the audience should be informed about different important issues; the audience should learn and be educated from the broadcast programs; they should acquire social heritage; be entertained; and should be able to solve their problems using program outcomes.

2.12. Persuasiveness

This is not actually a dimension but rather an outcome. If all of the above dimensions are highly fulfilled, the channel programs will persuade the viewer.

3. Quality Improvement Framework

Defining quality dimensions is the first step in any service improvement model. It is required to define the critical dimensions for such a service, assess the quality performance, point out areas of malfunction, and perform improvement actions in these areas to improve quality and thus achieve better customer satisfaction. A framework for improving TV satellite channel program service quality is presented in Figure 1 below. This framework is divided into four parts:

(a) Dimension defining and ranking
(b) Measurement of service quality
(c) Ranking TV satellite channels
(d) Improvement taking place

The framework starts by interviewing media experts to recommend quality dimensions for TV satellite channel programs. In parallel to this stage, a literature review of the quality dimensions in different service areas is performed. These two steps help achieve a comprehensive list of dimensions that define quality in TV satellite channel programs. Upon achieving the list of dimensions, satellite channels' potential viewers should be surveyed to prioritize/rank these dimensions with respect to their perspectives. High priority dimensions should receive high top-management attention.

Following this, each channel's performance should be assessed using customer surveying and content analysis. Customer surveying will reveal the quality performance of the channel programs from the customer's point of view, whereas content analysis will explain the quality performance from the expert's perspective. Using these two prospects should in turn help to improve the reliability of the collected data.

Knowing the performance of each channel in each dimension and the importance of each quality dimension, it is easy to rank the channels based on an overall score that depends on both the importance of each dimension and the quality performance in that dimension, as sketched below. This will help produce a list of ranked channels with respect to customer and mass media expert opinions.

Malfunctioning dimensions (low score in performance and high score in importance) should receive top management focus to carry out a list of recommended improvement actions that will help enhance quality and achieve better customer satisfaction.

The above improvement process is not a one-shot process but rather a continuous process that can repeat itself as necessary to achieve not only customer satisfaction but also delight.
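The overall score described above can be computed as a simple importance-weighted average. The snippet below only illustrates that idea: the dimension names, weights, and scores are made-up values, and the paper itself does not prescribe a specific aggregation formula.

```python
# Illustrative weighted-score ranking of channels, assuming an importance-weighted
# average of per-dimension performance scores (hypothetical data, not from the paper).

importance = {"Attractiveness": 0.9, "Clarity": 0.7, "Accuracy": 1.0}     # from viewer survey

performance = {                                                           # from survey + content analysis
    "Channel A": {"Attractiveness": 4.2, "Clarity": 3.8, "Accuracy": 2.9},
    "Channel B": {"Attractiveness": 3.5, "Clarity": 4.4, "Accuracy": 4.1},
}

def overall_score(scores):
    """Importance-weighted average of a channel's per-dimension scores."""
    total_weight = sum(importance.values())
    return sum(importance[d] * scores[d] for d in importance) / total_weight

ranking = sorted(performance, key=lambda ch: overall_score(performance[ch]), reverse=True)
for ch in ranking:
    print(ch, round(overall_score(performance[ch]), 2))

# "Malfunctioning" dimensions: high importance but low performance for a given channel.
def malfunctioning(ch, importance_cut=0.8, performance_cut=3.5):
    return [d for d in importance
            if importance[d] >= importance_cut and performance[ch][d] < performance_cut]

print(malfunctioning("Channel A"))   # e.g. ['Accuracy']
```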

4. Conclusions and Future Research

The authors were highly interested in answering the question: what is a "good quality" television program for the audience? Viewers differ in the ways in which they assess the quality of a program, and they are the best judges of what is considered suitable. However, the dimensions presented in the context of this paper might help in selecting programs to view. They also facilitate the quality improvement process for satellite channel programs.

There is an enormous need to define and quantify the quality dimensions in a way that can be measured precisely. Further research needs to be done via quantitative and qualitative content analysis, as well as measuring audiences in terms of demographic, personal, and psychographic analysis.

There is no way to force satellite channels to adopt these criteria, but in the very competitive era of communication, a satellite channel has to deal with the audience as a customer and has to fulfill customer needs and expectations even if it is not scrambled (i.e., even if it is free-to-air).

The quality model would be the same in logic for different countries, although significant differences due to region, environment, and the communication system might exist. These differences are worth identifying in future research.

Figure (1) Framework for Service Quality Improvement of TV Satellite Channel Programs
(The framework flows from the literature review and media expert interviews, through dimension defining and ranking, measurement of service quality via customer surveying and channel program content analysis, and ranking of TV satellite channels, to recommended improvement actions in malfunctioning dimensions, improved quality, and better customer satisfaction.)

5. References.

[1] A. Agus, S. Barker, and J. Kandampully, "An exploratory study of service quality in the Malaysian public service sector", International Journal of Quality and Reliability Management, Vol. 24, No. 2, 2007, pp. 177-190.

[2] P. Bedi, and V. Gaur, "Multi dimension quality model of MAS", Proceedings of the 2006 International Conference on Software Engineering Research and Practice and Conference on Programming Languages and Compilers SERP'06, Vol. 1, No. 1, 2006, pp. 130-135.

[3] D. A. Garvin, "What does product quality really mean?", Sloan Management Review, Vol. 26, No. 1, 1984, pp. 25-43.

[4] D. A. Garvin, "Competing on the eight dimensions of quality", Harvard Business Review, Vol. 65, No. 6, 1987, pp. 101-108.

[5] D. A. Garvin, Managing Quality, The Free Press, New York, NY, 1988.

[6] P. Harroff, and T. Valentine, "Dimensions of Program Quality in Web-Based Adult Education", American Journal of Distance Education, Vol. 20, No. 1, 2006, pp. 7-22.

[7] P. Hernon, and P. Calvert, "E-service quality in libraries: Exploring its features and dimensions", Library and Information Science Research, Vol. 27, 2005, pp. 377-404.

[8] Q. Ma, J.M. Pearson, and S. Tadisina, "An exploratory study into factors of service quality for application service providers", Information and Management, Vol. 42, 2005, pp. 1067-1080.

[9] J. Minjoon, Shaohan Cai, and Daesoo Kim, "Linkages of Online Banking Service Quality Dimensions to Customer Satisfaction", Decision Sciences Institute Proceedings, 2002, pp. 2125-2130.

[10] A. Parasuraman, V. Zeithaml, and L. Berry, "A conceptual model of service quality and its implications for future research", Journal of Marketing, Vol. 49, 1985, pp. 41-50.

[11] L. Rao, and K. Osei-Bryson, "Towards defining dimensions of knowledge systems quality", Expert Systems with Applications, Vol. 33, 2007, pp. 368-378.

[12] R. Sebastianelli, N. Tamimi, and M. Rajan, "Consumers' perceptions of the factors affecting e-quality", Decision Sciences Institute Proceedings, 2003, pp. 1855-1860.

[13] N. Srikatanyoo, and J. Gnoth, "Quality dimensions in international tertiary education: a Thai prospective students' perspective", Quality Management Journal, Vol. 12, No. 1, 2005, pp. 30-40.

[14] W. R. Wright, "Functional analysis and mass communication", Public Opinion Quarterly, Vol. 24, 1960, pp. 610-613.

[15] N. Ye, and J. Jia, "Customers' perceived service quality of internet retailing", Proceedings of the International Conference on Service Systems and Services Management, 2005, pp. 514-519.

An Intelligent Expert System for Those Who Wish to Become Billionaires

Robert L. Mullen
Southern Connecticut State University

ABSTRACT

This paper describes the conceptual development of an intelligent-based expert system built on the knowledge of several well-known billionaires. The top ten billionaires, including Bill Gates and Warren Buffett, were researched on the Internet to develop a knowledge base of advice from their collective philosophies as a guide for others who might wish to attain similar status in their work lives.

INTRODUCTION

Since the early 1980s, expert systems have been developed for the purpose of passing expertise from those with the knowledge to those who need the knowledge in a convenient and easily accessed manner. The idea is to tap the mind of the expert such that a person with less expertise can solve problems normally handled by the expert without the expert having to be present. In the early 1980s, expert systems were crude, self-developed by only a few firms that saw value in them. By the late 1980s, special languages such as VP-Expert and CLIPS emerged, and software shells were developed to increase the ease of development of these systems. The first decade of the 21st century has witnessed the emergence of intelligent expert systems which provide access to multiple experts within a narrow field of interest.

In the 1990s, with the focus on downsizing and the need to cut costs in all areas, managers of large corporations turned to expert systems to provide advice when the cost of hiring an expert (consultant) was too high or access to a previously downsized expert employee was gone. The recent interest, in the first decade of the 21st century, in knowledge management has caused a renewed interest in expert systems as a means to manage knowledge for easier and wider distribution. The software industry has matured to build "intelligence" into software, sometimes known as "learning systems," which can adapt themselves based on how they are used. Expert systems continue to become more complex but require constant upkeep.

This paper addresses the area of advising non-experts who may be interested in learning from experts who are currently billionaires how they might achieve that status as well. It provides an opportunity for linking several independently developed expert systems, all with the common thread of being on a list of current billionaires, with an intelligent front-end to consider the nature of the user's interest and select only those experts whose advice is appropriate for that particular system user. This paper will be used in an undergraduate course in Expert Systems to demonstrate the current focus of expert system development to support decision making in management and engineering, thereby extending the value of expert systems to support such decision making as the technology of expert systems evolves from a class of information systems to a member of the new knowledge management systems.

ADDITIONAL BACKGROUND TO PAPER

Since the early 1990s, I have taught an elective course in the MIS program at SCSU titled "Expert Systems". In MIS work, expert systems have long been considered the most difficult and complex of all information system types. They are included in the family of applications known as "artificial intelligence". They represent an application attempting to mimic the human brain. These applications capture the experience of an "expert" in a narrow area of expertise in computer form such

that this vast "knowledge base" can be used to produce advice for a non-expert, allowing them to solve a problem for which they have not gained the appropriate experience as if they were equal to the expert. This is quite a challenge. Students were having difficulty understanding the nature of these systems and how they differ from traditional information systems from a design standpoint. Figure 1 outlines the components of an expert system.

In support of this course, which was my most challenging to teach, I developed a model of an expert system to demonstrate to the students how such a system is developed using a language, CLIPS. The model involved picking two individuals a year who were experts in their field and who were living at the time, so theoretically I could have interviewed them personally to capture their expertise. However, these chosen individuals were well known enough in their field to have had books, and later Internet articles, published about them. This was the source of information for these systems. I began presenting my work at various conferences for peer comments to improve the process. A list of organizations where these papers have been presented and published is shown as Figure 2.

I realized recently that I had built a collection of individually handled experts focused on two areas of expertise: management and the computer industry. This year I focused attention on adding new experts to those I had researched in the past who, for this paper, are also billionaires but not from the computer industry. I will demonstrate how, through the use of an intelligent front-end, I could build a larger system as an example of extending the generality of my 2006 system for my MIS undergraduate students as an application of intelligence to expert system technology. This system is modeled after the 2006 IEMS paper titled "An Intelligent Expert System for Those Entering the Computer Industry".

Figure 3 denotes the changes required from the traditional single-expert system development components to a development that would add front-end intelligence and the ability to combine a group of experts into a single system. This would extend the value of this technology from providing information from a single expert to a knowledge-based system providing appropriate advice from an ever-growing number of experts.

FIGURE 1
COMPONENTS OF AN EXPERT SYSTEM
Dialog Screen – Determine the nature of the problem
Knowledge Base – A collection of "If ... Then" statements from experts
Inference Engine – Software to search the knowledge base
Presentation – Determine how to display advice to the system user

FIGURE 2
CONFERENCES WHERE EXPERT SYSTEM PAPERS WERE PRESENTED
Association of Management
Decision Science Institute
Industry, Engineering, and Management Systems
Information Resource Management Association

FIGURE 3
COMPONENTS OF AN INTELLIGENT-BASED EXPERT SYSTEM
Dialog Screen – Determine the nature of the advice being sought
Knowledge Base – Knowledge of multiple experts in a focused field of expertise
Inference Engine – Modified to select only those experts that are appropriate
Presentation – Modified to prioritize advice
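To make the contrast between Figures 1 and 3 concrete, the sketch below shows, in Python rather than CLIPS, what a toy knowledge base of "If ... Then" statements with an intelligent front-end might look like. The rules, expert names, and advice strings are invented for illustration only and are not drawn from the actual systems described in this paper.

```python
# Toy sketch of an intelligent-based expert system: a front-end first selects the
# experts relevant to the user's stated interest, and only their "If ... Then"
# rules are then searched. All rules and advice below are invented examples.

knowledge_base = [
    # (expert, domain, condition on the user's profile, advice)
    ("Expert A", "retail",    lambda u: u["wants_to_found_company"], "Start with the customer and work backwards."),
    ("Expert B", "investing", lambda u: u["capital"] > 10000,        "Invest in businesses you understand."),
    ("Expert C", "software",  lambda u: u["wants_to_found_company"], "Hire people smarter than yourself."),
]

def dialog_screen():
    """Determine the nature of the advice being sought (hard-coded here)."""
    return {"interest": "software", "wants_to_found_company": True, "capital": 500}

def inference_engine(user):
    """Select appropriate experts, then fire matching If...Then rules."""
    selected = [r for r in knowledge_base if r[1] == user["interest"]]   # front-end filter
    return [(expert, advice) for expert, _, condition, advice in selected if condition(user)]

def presentation(results):
    for expert, advice in results:            # a real system would prioritize the advice here
        print(f"{expert}: {advice}")

presentation(inference_engine(dialog_screen()))
```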

FIGURE 4
LIST OF INDIVIDUAL EXPERT SYSTEMS INCORPORATED IN THIS SYSTEM
Jeff Bezos, Founder of Amazon.com
Michael Dell, Founder of Dell Computers
Bill Gates, CEO and co-founder of Microsoft
Paul Allen, co-founder of Microsoft
Lawrence Ellison, CEO of Oracle
Eric Schmidt, CEO of Google
Larry Page, Google co-founder
Sergey Brin, Google co-founder
Warren Buffett, Investor
Micky Arison, CEO of Carnival
Ralph Lauren, Chairman and CEO
Rupert Murdoch, CEO of News Corporation
Fred Smith, CEO of FedEx
Willard Marriott, CEO of Marriott International

CONCLUSION

This paper describes the conceptual development of an intelligent-based expert system applied to capture the knowledge of several well-known experts who happen to have become billionaires from their success. A list of the experts included is identified in Figure 4. It provides a learning tool for use in a course on expert systems so that students in that course can select some other group of experts in a narrow domain of interest and experience the development of a mid-2000s version of expert system technology with intelligence added.

REFERENCES:
1. Beerel, Annabel (1998): Expert Systems in Business: Real World Applications; Ellis Horwood Publishing; New York.
2. Chandler, John S. (1998): Developing Expert Systems for Business Applications; Merrill Publishing; Columbus, Ohio.
3. Durkin, John (2000): Expert System Design and Development; Maxwell MacMillan Publishing; New York.
4. Harmon, Paul (1999): Creating Expert Systems for Business/Industry; Wiley and Sons, NY.
5. Mullen, Robert (2006): An Intelligent Expert System for Those Entering the Computer Industry; Decision Making in Management and Engineering Track; 2006 Industry, Engineering and Management Systems Association (IEMS) Annual Conference; March 13-15, 2006; Cocoa Beach, Florida, page 47.
6. Mullen, Robert (2006): An Intelligent Expert System for MBA Students on Management Principles; Decision Making in Management and Engineering Track; 2006 Industry, Engineering and Management Systems Association (IEMS) Annual Conference; March 13-15, 2006; Cocoa Beach, Florida, page 50.

An Intelligent Expert System for Those Entering the Internet/Web Industry

Robert L. Mullen
Southern Connecticut State University

ABSTRACT

This paper describes the conceptual development of an intelligent-based expert system applied to capture the knowledge of several well-known experts who were either founders of the new technology or developers of successful businesses in the Internet/Web industry. The list includes such experts as Sir Tim Berners-Lee (who developed HTML for the Web) and Jeff Bezos, founder of Amazon.com. By applying intelligence to the knowledge base of these experts, common advice from the combined expertise is developed to advise others attempting to enter this fast-growing industry.

INTRODUCTION

Since the early 1980s, expert systems have been developed for the purpose of passing expertise from those with the knowledge to those who need the knowledge in a convenient and easily accessed manner. The idea is to tap the mind of the expert such that a person with less expertise can solve problems normally handled by the expert without the expert having to be present. In the early 1980s, expert systems were crude, self-developed by only a few firms that saw value in them. By the late 1980s, special languages such as VP-Expert and CLIPS emerged, and software shells were developed to increase the ease of development of these systems.

In the 1990s, with the focus on downsizing and the need to cut costs in all areas, managers of large corporations turned to expert systems to provide advice when the cost of hiring an expert (consultant) was too high or access to a previously downsized expert employee was gone. The recent interest, in the first decade of the 21st century, in knowledge management has caused a renewed interest in expert systems as a means to manage knowledge for easier and wider distribution. The software industry has matured to build "intelligence" into software, sometimes known as "learning systems," which can adapt themselves based on how they are used.

This paper addresses the area of advising non-experts who may be considering entering the highly competitive computer industry segment involving the Internet or the Web. It provides an opportunity for linking several independently developed expert systems from within this segment of the computer industry with an intelligent front-end to consider the nature of the user's interest and select only those experts whose advice is appropriate for that particular system user. This paper will be used in an undergraduate course in Expert Systems to demonstrate the current focus of expert system development to support decision making in management and engineering, thereby extending the value of expert systems to support such decision making as the technology of expert systems evolves from a class of information systems to a member of the new knowledge management systems.

ADDITIONAL BACKGROUND TO PAPER

Since the early 1990s, I have taught an elective course in the MIS program at SCSU titled "Expert Systems". In MIS work, expert systems have long been considered the most difficult and complex of all information system types. They are included in the family of applications known as "artificial intelligence". They represent an application attempting to mimic the human brain. These applications capture the experience of an "expert" in a narrow area of expertise in computer form such that this vast "knowledge base" can be used to

produce advice for a non-expert, allowing them to solve a problem for which they have not gained the appropriate experience as if they were equal to the expert. This is quite a challenge. Students were having difficulty understanding the nature of these systems and how they differ from traditional information systems from a design standpoint. Figure 1 outlines the components of an expert system.

In support of this course, which was my most challenging to teach, I developed a model of an expert system to demonstrate to the students how such a system is developed using a language, CLIPS. The model involved picking two individuals a year who were experts in their field and who were living at the time, so theoretically I could have interviewed them personally to capture their expertise. However, these chosen individuals were well known enough in their field to have had books, and later Internet articles, published about them. This was the source of information for these systems. I began presenting my work at various conferences for peer comments to improve the process. A list of organizations where these papers have been presented and published is shown as Figure 2.

I realized recently that I had built a collection of individually handled experts focused on two areas of expertise: management and the computer industry. This year I focused attention on adding new experts to those I had researched in the past who, for this paper, are also billionaires but not from the computer industry. I will demonstrate how, through the use of an intelligent front-end, I could build a larger system as an example of extending the generality of my 2006 system for my MIS undergraduate students as an application of intelligence to expert system technology. This system is modeled after the 2006 IEMS paper titled "An Intelligent Expert System for Those Entering the Computer Industry".

Figure 3 denotes the changes required from the traditional single-expert system development components to a development that would add front-end intelligence and the ability to combine a group of experts into a single system. This would extend the value of this technology from providing information from a single expert to a knowledge-based system providing appropriate advice from an ever-growing number of experts.

FIGURE 1
COMPONENTS OF AN EXPERT SYSTEM
Dialog Screen – Determine the nature of the problem
Knowledge Base – A collection of "If ... Then" statements from experts
Inference Engine – Software to search the knowledge base
Presentation – Determine how to display advice to the system user

FIGURE 2
CONFERENCES WHERE EXPERT SYSTEM PAPERS WERE PRESENTED
Association of Management
Decision Science Institute
Industry, Engineering, and Management Systems
Information Resource Management Association

FIGURE 3
COMPONENTS OF AN INTELLIGENT-BASED EXPERT SYSTEM
Dialog Screen – Determine the nature of the advice being sought
Knowledge Base – Knowledge of multiple experts in a focused field of expertise
Inference Engine – Modified to select only those experts that are appropriate
Presentation – Modified to prioritize advice

FIGURE 4
LIST OF INDIVIDUAL EXPERT SYSTEMS INCORPORATED IN THIS SYSTEM
Lawrence Roberts of MIT, writes paper on ARPANET plan (1966)
Ray Tomlinson, invents e-mail program (1971)
Vinton Cerf and Bob Kahn, develop a protocol for packet switching (1974)
Sir Tim Berners-Lee, program to link nodes (1980), coins hypertext (1989), develops hypertext browser known as the "World Wide Web" (1990)
Marc Andreessen, forms Mosaic, later Netscape (1994)
Jeff Bezos, founder and CEO of AMAZON.COM (1997)
Jerry Yang and David Filo, co-founders of YAHOO (2000)
Larry Page and Sergey Brin, co-founders of GOOGLE (2003)

CONCLUSION

This paper describes the conceptual development of an intelligent-based expert system applied to capture the knowledge of several well-known experts in the Internet/Web industry. A list of the experts included is identified in Figure 4. It provides a learning tool for use in a course on expert systems so that students in that course can select some other group of experts in a narrow domain of interest and experience the development of a 2008 version of expert system technology with intelligence added.

REFERENCES:
1. Beerel, Annabel (1998): Expert Systems in Business: Real World Applications; Ellis Horwood Publishing; New York.
2. Chandler, John S. (1998): Developing Expert Systems for Business Applications; Merrill Publishing; Columbus, Ohio.
3. Durkin, John (2000): Expert System Design and Development; Maxwell MacMillan Publishing; New York.
4. Guida, Giovanni (2001): Design and Development of Knowledge Based Systems; Wiley and Sons, New York.
5. Harmon, Paul (1999): Creating Expert Systems for Business/Industry; Wiley and Sons, NY.
6. Mullen, Robert (2006): An Intelligent Expert System for Those Entering the Computer Industry; Decision Making in Management and Engineering Track; 2006 Industry, Engineering and Management Systems Association (IEMS) Annual Conference; March 13-15, 2006; Cocoa Beach, Florida, page 47.
7. Mullen, Robert (2006): An Intelligent Expert System for MBA Students on Management Principles; Decision Making in Management and Engineering Track; 2006 Industry, Engineering and Management Systems Association (IEMS) Annual Conference; March 13-15, 2006; Cocoa Beach, Florida, page 50.
8. Wiig, Karl (1999): Expert Systems: A Manager's Guide; International Labour Office; Geneva.

Assessing Six Sigma Project Improvements – A Statistical Perspective

Ali Ahmad
Design Interactive, Inc.
ali@designinteractive.net

Isabelina Nahmens
Louisiana State University
nahmens@lsu.edu

Abstract

Six Sigma is a well-known business strategy that has helped many organizations globally, across all industries, achieve large business benefits through the implementation of successful projects. These projects are identified, planned, and executed by employees from different levels of the organization. To deliver those business benefits, employees first measure the baseline performance of the key process output, establish a performance target, and then seek to close the gap between the baseline and the target by applying the systematic and structured Six Sigma methodology, DMAIC. Comparing process performance before and after completing a Six Sigma project is not a trivial task. The comparison depends on 1) the type of measure (discrete or continuous), and 2) the availability of data (complete, limited, or no data). Adopting the right comparison technique is imperative to assess the results of a Six Sigma improvement project. This paper discusses comparison techniques including t-tests, Bayes estimation, and Monte Carlo simulation. Guidelines to facilitate the comparison of process performance before and after improvement, under different circumstances, are then identified. The adoption of the methodologies outlined in this paper by companies executing Six Sigma projects under those circumstances will enable them to compare their before and after metrics regardless of their type of measures and data availability.

Introduction

In today's rapidly changing markets, companies need to deliver their products and services to customers in increasingly faster times, and these products and services need to meet and exceed customer expectations. Applying Six Sigma methodologies has allowed companies to focus on their customers and deliver products and services with exceptionally higher quality, thus enabling bottom-line benefits in terms of dollars and cost savings.

Six Sigma is both a quality management philosophy and a problem solving methodology that is project and team-based. The Six Sigma approach was pioneered at Motorola and then witnessed its glorious application in General Electric (GE) and IBM (Breyfogle, 1999). It has applications in both service and manufacturing. The principles, concepts, and tools of Six Sigma provide a measurable goal, a framework for problem solving, and the foundations of a corporate culture designed to reduce costs by responding to the voice of the customer and "doing it right the first time." Six Sigma methodologies are flexible and can be used to address the various facets of the business; they can be applied to the financial, planning, service, and execution functions of the organization. The principle of Six Sigma is simple: control the variation in the process so that there are only 3.4 defects per million opportunities.

Six Sigma utilizes a scientific problem solving approach to address real-world problems. Typically, a Six Sigma project is executed in five consecutive phases: Define, Measure, Analyze, Improve, and Control (Breyfogle, 1999; Rath & Strong, 2002), or DMAIC for short.

The Define phase starts by creating a charter that clearly states project goals, scope and boundaries, the project team, the business case, and the stakeholders affected by the project. It aims to set the rhythm for the project and can be thought of as a contract between the process owner and the project team. Several tools can be employed in the Define phase; these include SIPOC (Suppliers, Inputs, Process, Outputs, and Customers), the VOC (Voice of the Customer), the Kano model, and the critical-to-quality driver tree. The Define phase results include a charter, an overview of the process, and critical-to-quality information.

The second phase of a Six Sigma project is Measure. It aims to pinpoint where the problem lies in a clear and precise way. The tools commonly used in the Measure phase include the data collection plan, control charts, gage R&R (Repeatability and Reproducibility), Pareto charts, and FMEA (Failure Modes and Effects Analysis). The Measure phase establishes a baseline for the process performance in terms of calculating a process sigma or Defects per Unit (DPU), as illustrated in the sketch below. It results in collecting the data that feeds the Analyze phase of the project.
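As a concrete illustration of the baseline metrics mentioned above, the short snippet below computes DPU, defects per million opportunities (DPMO), and the corresponding sigma level using the conventional 1.5-sigma shift. The numbers are made up, and the snippet is only a sketch of the standard textbook calculation, not something taken from this paper.

```python
# Baseline process-performance metrics for the Measure phase (illustrative numbers).
from scipy.stats import norm

defects = 38          # defects observed
units = 500           # units inspected
opportunities = 4     # defect opportunities per unit

dpu = defects / units                                   # Defects per Unit
dpmo = defects / (units * opportunities) * 1_000_000    # Defects per Million Opportunities

# Conventional long-term-to-short-term conversion with a 1.5-sigma shift.
sigma_level = norm.ppf(1 - dpmo / 1_000_000) + 1.5

print(f"DPU = {dpu:.3f}, DPMO = {dpmo:.0f}, process sigma = {sigma_level:.2f}")
```

At 3.4 DPMO the same formula returns approximately 6.0, which is where the "3.4 defects per million opportunities" figure quoted in the introduction comes from.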
short-term paybacks and most obvious results of
The Six Sigma process proceeds by the Analyze the improvement. A major difference between
phase, which aims to identify the root cause(s) Six Sigma and failed programs of the past is the
of the problem under consideration. The tools emphasis on tangible, measurable results
commonly used in the analyze phase include (Pyzdek 2003). Six Sigma advocates argues that
root cause analysis and statistical tools such as projects are selected to provide a mixture of
regression analysis, hypothesis testing, scatter short and long term results, that justify the
plots, control charts, and Pareto charts, among investment and the effort. It is vital that
many others. The 5-why technique can be used information regarding results be documented
to identify the root cause of the problem. The and statistically compared with the process
results of the analysis phase are the root causes baseline pre-improvement, for the project to be a
of a particular problem, with a set of potential success and properly design an effective control
solutions. plan.

The fourth phase of a Six Sigma project starts Selection of Comparison Technique
upon identifying the root causes of a particular
problem. The Improve phase deals with Comparing process performance before and after
developing, implementing, and verifying completing a six sigma project is not a trivial
solutions. The tools commonly used in this task. This comparison depends on 1) types of
phase include brainstorming, creative problem measures (discrete or continuous), and 2)
solving techniques such as Delphi method, and availability of data (complete, limited, or no
planning tools. To assess the degree of data). Adopting the right comparison technique
improvement, a new process sigma or DPU is imperative to assess the results of a six sigma
value is calculated for the improved process. improvement project. This section explores the


This section explores the use of comparison techniques such as t-tests, Bayes estimation, and Monte Carlo simulation to assess the process performance of a completed Six Sigma project. Figure 1 exhibits a flow chart of the selection of a comparison technique. The decision about which comparison test to use for a particular analysis is of vital importance to making unbiased and correct decisions about improvement results.

Statistical Analysis

Pyzdek, a noted quality consultant, states that more than 400 tools are now available for Six Sigma (Pyzdek 2003). However, most firms rarely go beyond the basic improvement tools and fail to recognize the benefits of more sophisticated statistical tools (Evans and Lindsay 2008). The DMAIC methodology was designed to be used with advanced statistical methods. Hence, statistical analysis is essential in implementing a continuous improvement philosophy.

The Six Sigma DMAIC methodology uses statistical analysis to interpret statistical information in order to understand and predict process performance. A roadmap for selecting one of the available statistical testing procedures is illustrated in Figure 2. The selection of a statistical comparison depends on the type of data, whether continuous or discrete. A common way of statistically comparing the process performance of a specific continuous metric before and after the improvements of a completed Six Sigma project is a t-test. A t-test is a statistical tool used to determine whether a significant difference exists between the means of two distributions, or between the mean of one distribution and a target value (Evans and Lindsay 2008). The two most widely used statistical techniques for comparing two (normally distributed) groups are the independent group t-test and the paired t-test. The independent group t-test is used to compare means between two groups where there are different subjects in each group, whereas the paired t-test compares subjects for the two groups by matched pairs (e.g., the same subjects are observed twice, often with some intervention taking place between measures). For example, the researcher may observe defect levels of the output of a process before and after improvements are implemented. Then the mean difference between the two repeated observations (e.g., before and after) is documented and compared. If the difference is significant, then there is evidence that the implemented process improvement caused some change in the observed metric (e.g., the defect level).

One of the benefits of t-tests is that they are usually easy to understand and generally easy to perform. However, the fact that these tests are so widely used does not make them the correct analysis for all comparisons. T-tests are used to perform comparisons on continuous variables and assume normally distributed data. When dealing with non-continuous data, other statistical tests, such as Chi-squared analysis and proportions tests, can be applied. In cases where normally distributed data cannot be assumed (the Anderson-Darling test of normality can be used to check the data distribution), non-parametric tests should be used.
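For the discrete (attribute) case mentioned above, a short sketch of how such a before/after comparison might look in Python is shown below. It is an illustration only, not part of the original study: the defect counts are hypothetical placeholders, and the chi-squared test on the 2x2 before/after table is one of several valid choices (a two-proportion test would be similar).

    # Hypothetical before/after comparison for discrete data: defective vs. good
    # units inspected before and after an improvement. The counts are invented
    # for illustration only.
    from scipy.stats import chi2_contingency

    #               defective  good
    before_counts = [32,       468]   # 500 units inspected before improvement
    after_counts  = [14,       486]   # 500 units inspected after improvement

    table = [before_counts, after_counts]
    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi-squared = {chi2:.2f}, p-value = {p_value:.4f}")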
Bayesian Estimation

Traditional statistical techniques such as t-tests and ANOVA comparisons are applied when sufficient data exist for the process after improvement. Nevertheless, in many instances there exist only a few (1-5) data points for the new process, especially when the process performance measure under consideration is cycle time, which makes the application of traditional statistics a challenge. To conduct such a comparison, an approach adapted from reliability engineering is used to demonstrate Six Sigma project improvements.

Weibull and Weibayes analyses have been used extensively in reliability engineering (Abernethy, 2000). Traditional reliability theory deals with product/part lifetimes. Reliability is the probability that a product will perform its intended function for a specified period of time, under stated operating conditions. Weibull analysis is based on using the Weibull distribution to model the lifetime of a product or a part. The Weibull distribution has two parameters: shape and scale.


The shape parameter, Beta, designates whether the Weibull hazard rate is decreasing (Beta < 1), constant (Beta = 1), or increasing (Beta > 1). The scale parameter, Eta, also known as the characteristic life, represents the time or age at which 63.2% of the parts have failed. If Beta is equal to 1, the Weibull distribution is identical to the exponential distribution with mean Eta. In addition, a shift parameter can be applied to the Weibull distribution to take into consideration the parts' minimum life. Weibayes analysis is a Weibull type of analysis that assumes a known Beta value; this assumption can improve the estimation of Eta when there are few failure data obtained for a new product design.

The Weibull analysis can be extended to model different types of times that are not necessarily product/part failure related, for example, modeling cycle time in situations where few data points are available after process improvement. Weibayes analysis can be used to estimate a scale parameter and a confidence interval for the new data. The approach starts by using Weibull analysis to obtain the shape and scale parameters for the original data. After that, Weibayes analysis is used to estimate the new scale parameter by assuming the same original shape, or Beta (thus requiring fewer data points for comparison). The Weibayes analysis will result in drawing parallel lines (same slope, or Beta) on the Weibull probability plot. A key assumption in applying Weibayes analysis is that the resulting improvements did not change the cycle time distribution shape parameter (Beta) to a significant degree. The confidence intervals for the new and original data can then be used to assess whether a statistical difference exists or not.

Case Study: Statistical Analysis vs. Bayesian Estimation

A Six Sigma project in a manufacturing organization was conducted to reduce the cycle time of the proposal development process. Before improvement, the original cycle times were 15, 18, 19, 20, 20, 21, 21, 21, 21, 22, 22, 25, 26, 28, and 29. For this case study, two cases will be used:

Case 1: Complete data is available after the process improvement project: 13, 14, 15, 15, 16, 18, 18, 20, 20, and 21.

In this case, since sufficient data exists for the application of traditional statistical analysis techniques, a t-test comparison is used. The listing below indicates that the new process has a statistically significantly lower cycle time, as indicated by the p-value of 0.001 (running the Anderson-Darling test of normality shows that both the original and the new data can be assumed to be normally distributed, p-value > 0.1).
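As an illustration (not the authors' code), the Case 1 comparison can be re-created with standard statistical software. The sketch below uses Python with SciPy on the cycle times quoted in the case study; the unequal-variance (Welch) form of the two-sample t-test is an assumption, chosen because it is consistent with the DF = 22 shown in the listing of Figure 4.

    # Illustrative re-creation of the Case 1 comparison (not the authors' code).
    # Cycle times are those quoted in the case study.
    from scipy import stats

    original = [15, 18, 19, 20, 20, 21, 21, 21, 21, 22, 22, 25, 26, 28, 29]
    new = [13, 14, 15, 15, 16, 18, 18, 20, 20, 21]

    # Normality check for each sample (Anderson-Darling).
    for label, sample in (("original", original), ("new", new)):
        result = stats.anderson(sample, dist="norm")
        print(label, "Anderson-Darling statistic:", round(result.statistic, 3))

    # Two-sample t-test on the mean cycle times (Welch form assumed).
    t_stat, p_value = stats.ttest_ind(original, new, equal_var=False)
    print("t =", round(t_stat, 2), " two-sided p =", round(p_value, 4))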
Case 2: Only four new cycle times were obtained: 13, 14, 18, and 20.

In this case, only a limited set of cycle time data is available after process improvement; therefore, Weibayes analysis will be used. First, a Weibull distribution is fitted to the original data. The resulting distribution has a shape parameter of 7.03 and a scale parameter of 23.28 (the p-value from the goodness-of-fit test is greater than 0.1). After that, and assuming that the shape parameter stays the same, the new scale parameter is 17.57 (the p-value from the goodness-of-fit test is greater than 0.5). To conduct the statistical comparison using the confidence interval approach, a Weibull probability plot such as that shown in Figure 4 is used. Since the confidence intervals do not overlap, this indicates that the new cycle time shows a reduction from the original cycle time data.
Monte Carlo Simulation

Typically, Monte Carlo simulations are used in the Analyze or the Improve phase of a Six Sigma project. For instance, this approach can be used to improve the capability of processes and to quantify the expected variation of the improvement. However, simulations are not only very powerful when utilized in the earlier phases of a Six Sigma project; they are also a powerful tool in statistical process control (Jordan 2007, Perry 2006). Monte Carlo is a powerful tool because it allows the use of a statistical model to account for realistic variation and to display the range and shape of the process output. In addition, it can help detect hidden factors that are not directly controlled and measured but may be indirectly linked to the process inputs (Jordan 2007).


Another application of Monte Carlo simulation is for testing process improvements when no data exist upon completion of a Six Sigma project. A common example of this application is comparing the cycle time distributions of an original and an improved process that are each composed of a set of sequential steps, where probability distributions are used to model the time distribution for each process step and the overall cycle time is computed by summing the cycle times associated with the individual process steps.

When using Monte Carlo simulation to test cycle time reduction in cases where no improvement data is available, the first step is to break down the original and improved processes into their steps. Then, using available historical data, probability distributions are fitted to the times associated with each step of the original process. The Monte Carlo-based cycle time is then computed by summing up the individual process step cycle times. Once the modeled cycle times for the original process are computed, they are validated against those obtained in the real world. After that, hypothetical distributions are assumed for each step in the improved process. It is typical to use triangular distributions in cases where no data are available (Law and Kelton, 2000). To obtain a triangular distribution, experts are consulted to estimate a minimum, a maximum, and a most likely time value for each process step. After that, using the hypothetical triangular distributions associated with each step in the improved process, the improved process cycle time is computed using Monte Carlo simulation. Finally, the modeled original and improved process cycle times are compared and conclusions are made based on the simulation data, as illustrated in the example in Figure 5.
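The sketch below illustrates this procedure in Python. It is not the model from the paper: the step definitions (minimum, most likely, maximum, in days) are hypothetical placeholders, whereas in practice the original-process steps would be fitted from historical data and the improved-process steps elicited from experts, as described above.

    # Illustrative Monte Carlo comparison of original vs. improved process
    # cycle times built from per-step triangular distributions. All step
    # parameters are assumed values for demonstration only.
    import numpy as np

    rng = np.random.default_rng(seed=1)
    N = 100_000  # number of simulated cycles

    # (min, most likely, max) for each sequential process step
    original_steps = [(2, 4, 8), (1, 3, 6), (5, 7, 12)]
    improved_steps = [(1, 3, 6), (1, 2, 4), (4, 6, 9)]

    def simulate(steps):
        """Total cycle time = sum of one triangular draw per step."""
        return sum(rng.triangular(lo, mode, hi, size=N) for lo, mode, hi in steps)

    original_ct = simulate(original_steps)
    improved_ct = simulate(improved_steps)

    print("original mean cycle time:", round(original_ct.mean(), 2))
    print("improved mean cycle time:", round(improved_ct.mean(), 2))
    print("P(improved < original):", np.mean(improved_ct < original_ct))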
Figure 1 - Monte-Carlo simulation for process cycle time

Guidelines for Comparison of Process Performance Before and After

The findings from the literature review and the examples given in the paper are summarized in the following set of guidelines for successfully comparing process performance before and after a Six Sigma project is completed.

1. If complete and sufficient data are available for the process after improvement, the Six Sigma practitioner should apply an appropriate statistical analysis technique based on the type of the measure (i.e., discrete or continuous), while making sure that the assumptions associated with each test are maintained (such as normality in the case of the t-test).
2. If only a limited set of data is available for the process after improvement, and there is sufficient evidence that there are no drastic changes to the process shape, Weibayes estimation can be used to verify whether the new process cycle times are less than the original process cycle times.
3. If there are no data associated with the process after improvement, the Six Sigma practitioner can use Monte Carlo simulation, using triangular distributions fitted based on expert inputs for each step of the improved process, to assess whether the Six Sigma project resulted in cycle time reductions.


Conclusions

This paper has focused on providing guidance for Six Sigma practitioners when comparing process performance before and after a Six Sigma project. Adopting the right comparison technique is imperative. Based on the availability of post-improvement data, either traditional statistical analysis, Bayesian estimation, or Monte Carlo simulation is used. The type of data (discrete or continuous) is a key factor in selecting the appropriate statistical analysis procedure. Six Sigma practitioners should always check the assumptions associated with the procedures they apply, and they should validate the models prior to their use. The paper concludes with a set of guidelines, derived from the literature and practice, for comparing process performance.

References

Abernethy, R.B., 2000, The New Weibull Handbook: Reliability & Statistical Analysis for Predicting Life, Safety, Survivability, Risk, Cost and Warranty Claims, 4th Edition.
Breyfogle, F.W. III, 1999, Implementing Six Sigma: Smarter Solutions Using Statistical Methods, John Wiley, New York, NY.
Brue, G., 2002, Six Sigma for Managers, McGraw-Hill.
Evans, J.R. and Lindsay, W.K., 2008, Managing for Quality & Performance Excellence, 7th Edition, Thomson South-Western, Mason.
Jordan, D., 2007, Using Monte Carlo Simulation as Process Control Aid, iSixSigma.com, retrieved March 2008, http://www.isixsigma.com/library/content/c070402a.asp.
Law, A., and Kelton, W.D., 2000, Simulation Modeling and Analysis, 3rd Edition, McGraw-Hill.
Perry, R., 2006, Monte Carlo Simulation in Design for Six Sigma, Proceedings of the 2006 Crystal Ball User Conference.
Rath & Strong, 2002, Six Sigma Pocket Guide, Rath & Strong, a subdivision of AON Management Consultants.


Figure 2 - Selection of Comparison Technique (flow chart based on the availability of performance data after improvement: complete data available -> statistical analysis; limited data available -> Bayesian estimation; no data available -> Monte Carlo simulation)

Figure 3 - Statistical Comparison Roadmap


Two-sample T for Original vs New

              N    Mean   StDev   SE Mean
  Original   15   21.87    3.74      0.97
  New        10   17.00    2.79      0.88

  Difference = mu (Original) - mu (New)
  Estimate for difference: 4.86667
  95% CI for difference: (2.15485, 7.57848)
  T-Test of difference = 0 (vs not =): T-Value = 3.72  P-Value = 0.001  DF = 22

Figure 4 - Listing of two-sample t-test results

Figure 5 - Weibayes comparison of new vs. original cycle time data


A Novel Method to Investigate the Possible Locations of


Human Body Injuries During a Vertical Fall

Imshaan Somani Ha Van Vo R. Radharamanan


Mercer University Mercer University Mercer University
Somani_i2@mercer.edu Vo_hv@mercer.edu Radharaman_r@mercer.edu

Abstract

Falling is one of the most common causes of accidental injury in construction work. This is an important issue in worker compensation cases. Falling accident injuries can range from minor bone fractures to life-threatening injuries. Some people testify incorrectly about the injury in which they are involved in order to get medical benefits from their insurance companies. The objective of this study is to use the Finite Element Method (FEM) software Solidworks to simulate injuries sustained during a vertical fall from different heights. Adjustable geometries were used and could be customized to any body position desired, especially for simulating different scenarios involving accidental injury cases. The model has the flexibility to assign mass and material properties to different segmental bones based on a person's specific biomechanical structure and anthropometrical data. The simulations were carried out for four different falling heights: 10 ft, 20 ft, 25 ft, and 40 ft. The results revealed comminuted type fractures at falling heights of 20 ft and above.

1. Introduction

People have been falling for many centuries and have suffered everything from minor fractures to life-threatening injuries. A common application for falling and getting injured is seen in the field of injury mechanics. This is very common in worker compensation cases. People can fall while they are at work and could sustain serious injuries. However, this might not be true for certain cases. During the last three years, impact biomechanicians have been trying to develop an understanding of how bones respond to loads that cause fractures. This can help in understanding how these forces can cause damage. It also helps in drawing a correct accident reconstruction model along with the possible locations of bone fractures. Some of the major causes of bone fractures include falling from a height, car accidents, and sporting activities such as football, wrestling, and soccer. Each of these activities that cause injuries results in types of fractures that are very predictable because of the way loads are applied at the surface of the bone.

2. Functional Anatomy

The human skeletal system is made up of bones, ligaments, tendons, and the associated tissues of joints, including cartilage. Bones provide a rigid structure and aid in supporting most of the soft tissues of the body. The bones act as levers and struts to move parts of the body and provide a place for skeletal muscles to attach, thereby permitting movement of the body [1]. The bones are connected to each other at joints, and they are separated by cartilage. The bones at joints are stabilized by tough but flexible ligaments. The skeletal muscles attach to the bone by strong and flexible, yet inextensible, tendons which are inserted into the bones. This anatomical assembly forms the musculoskeletal system. This study helps to predict the locations of injuries in the human joints, focusing on the lower extremities during a vertical fall.

2.1. Functional anatomy: Lower extremity

The bones by which the lower limbs are attached to the trunk constitute the pelvic girdle.


The hip bone is a large, flattened, irregularly shaped bone, constricted in the center and expanded above and below. It consists of three parts: the ilium, ischium, and pubis. These three fused bones unite around a large cup-shaped articular cavity called the acetabulum, which is located on the middle lateral side of the bone on each side [1].

The femur, the longest and strongest bone in the skeleton, is almost perfectly cylindrical in the greater part of its extent. Its head articulates with the acetabulum to form the hip joint. The hip joint is a ball-and-socket joint and contains four major ligaments: the iliofemoral ligament, the ischiofemoral ligament, the pubofemoral ligament, and the ligamentum capitis femoris. Figure 1 shows these ligaments [2].

Figure 1. Ligaments of the hip [2]

The distal end of the femur contains condyles that articulate with the tibia to form the knee joint, which is a hinge joint. The tibia is located at the medial side of the leg and is the second longest bone of the skeleton after the femur. The fibula is located on the lateral side of the tibia. There are four major ligaments that make up the knee joint: the anterior cruciate ligament (ACL), the posterior cruciate ligament (PCL), the lateral collateral ligament, and the medial collateral ligament. Figure 2 shows the anatomical structure of the knee joint.

Figure 2. Anatomical structure of the knee joint [2]

The ankle joint is a ginglymus, or hinge joint. It is formed by the lower end of the tibia and its malleolus, the malleolus of the fibula, and the transverse ligament. The bones are connected by four major ligaments: the anterior talofibular ligament (ATF), the deltoid ligament, the posterior talofibular (PTF), and the calcaneofibular (CF). Figure 3 shows the anatomical structure of the ankle joint.

Figure 3. Anatomical structure of the ankle joint [3]

3. Materials and Methods

To perform the simulations and determine the locations of bone fracture, a FEM model must be created. However, due to the complexity of the human anatomy, a simplified model as shown in Figure 4 was used to predict the locations of bone fracture in a fall.


Figure 4. Simplified model of the human skeleton [4]

The geometry was simulated in SolidWorks and the dimensions were assigned from anthropometrical data. It comprised a total of 11 parts of human skeletal structures that were movable and adjustable to any motion or position desired. The parts were attached by using the mating feature of SolidWorks. The mass of the person was assigned to be 150 lbs, and each of the 11 parts was assigned its mass properties based on anthropometrical data. The material properties of cortical bone, as shown in Table 1, were assigned to determine the fracture behavior. Simulations were performed to predict the locations and characteristics of bone fracture for different heights of vertical fall.

Table 1. Material properties of cortical bone [5]
  Physical Properties             Metric
  Density                         1.70 g/cc
  Mechanical Properties           Metric
  Tensile Strength, Ultimate      90.0 MPa
  Modulus of Elasticity           12.4 GPa
  Compressive Yield Strength      120 MPa
  Compressive Modulus             7.60 GPa

4. Results and Discussion

FE simulations were performed by positioning the model such that the person lands on his feet. This was used to predict injuries and fracture behavior in the lower extremities. It is assumed that the muscles and ligaments already tear before the joints are affected. The simulations were performed for heights of 10 ft, 20 ft, 25 ft, and 40 ft. Because of the limitations of the software, the results were calculated only at the time of impact; therefore, the model did not bounce or experience any change in momentum as time progressed.

Figures 5-8 show the stress contours for a person landing vertically on the feet, and Figures 9-12 show the displacements, in order to determine the possible dislocations at the joints.

Figure 5. Stress contour for 10 ft vertical fall (anterior and posterior views)

From Table 2 it can be inferred that there will be a fracture of the bone if the maximum stress exceeds the yield strength value of 120 MPa. For most of these cases, the yield strength value is not reached. However, since the ankle supports five times the body weight during the standing position, there is a high probability of developing a calcaneus fracture. During an impact type of fracture, the maximum fracture is located at the joint which absorbs the maximum energy at the time of impact, and the fracture is gradually transferred to the connecting joints as the height is increased. As the height is increased, the fracture is located at the ankle joint but also transfers to other anatomical joints such as the knee, hip, lumbar (L4-L5), thoracic, cervical, and atlanto-occipital joints.

Greater heights will result in complete shattering of the ankle joint along with injuries to other lower extremities and the cervical spine. Spine injuries generally affect the spinal cord and could result in severe life-threatening injuries, including paralysis of all four limbs (quadriplegia) or of the lower half of the body (paraplegia).


Figure 6. Stress contour for 20 ft vertical fall (anterior and posterior views)

Figure 7. Stress contour for 25 ft vertical fall (anterior and posterior views)

Figure 8. Stress contour for 40 ft vertical fall (anterior and posterior views)

Figure 9. Displacement during 10 ft vertical fall


Figure 10. Displacement during 20 ft vertical fall

Figure 11. Displacement during 25 ft vertical fall

Figure 12. Displacement during 40 ft vertical fall

Clinical data suggest that when the ligament tear is between 3 mm and 5 mm, there is a partial tear or subluxation. However, any ligament tear greater than 5 mm results in a complete tear of the ligament. Table 3 shows that at 10 feet there is a partial tear of the ATF and CF ligaments. This is because the ankle is subjected to five times the body load and therefore has to perform many load-bearing activities. These ligaments are a lot stronger compared to the ligaments at the other joints. As the height increases, the ligaments that make up the knee (the LCL and the MCL), the hip, and the spine (lumbar, thoracic, and cervical) also tear, ranging from a partial tear to a complete tear at different heights. Greater heights also result in severe biomechanical deformity. At heights greater than 20 ft, varus knee deformity can be seen. In the varus knee, the distal segment, represented by the foot, deviates medially (towards the center line of the body) relative to normal. This leads to damage of the lateral collateral ligament. Partial tearing of the ligament is generally followed by swelling and bruising, and use of the joint is painful and difficult. A complete tear of the ligament results in swelling and sometimes bleeding under the skin. As a result, the joint is unstable and unable to carry out any weight-bearing activities. The ligaments that are commonly affected by spine injuries are the anterior longitudinal ligaments, the posterior longitudinal ligaments, and the ligamentum flavum. Spine injuries will generally affect the spinal cord and are generally classified as complete and partial [6].

Table 2. Injury behavior of joints at impact
  Height [ft]   Max. Stress [MPa]   Bone Fracture   Joints Affected
  10            113.4               Possible        Ankle (1), Knee (2), Lumbar (3), Cervical (4)
  20            115.1               Yes             Ankle (1), Knee (2), Lumbar (3), Cervical (4)
  25            116                 Yes             Ankle (1), Knee (2), Lumbar (3), Cervical (4), Thoracic (5)
  40            117                 Yes             Ankle (1), Knee (2), Lumbar (3), Cervical (4), Thoracic (5), Atlanto-Occipital (6)


Table 3. Ligament tears due to impact
  FH [ft]   MD [mm]   Ligaments Affected (LA)                     Joints Affected (JA)                Type of Tear (TT)
  10        3.36      ATF, CF, LCL, MCL                           (1), (2)                            A
  20        4.6       ATF, CF, LCL, MCL, IFL, ILL, LF, PLL, ALL   (1), (2), (3), (4), (5)             B
  25        11        ATF, CF, LCL, MCL, IFL, ILL, LF, PLL, ALL   (1), (2), (3), (4), (5), (6)        C
  40        14        ATF, CF, LCL, MCL, IFL, ILL, LF, PLL, ALL   (1), (2), (3), (4), (5), (6), (7)   D

  Where: FH = fall height in ft; MD = maximum displacement in mm; LA = ligaments affected; JA = joints affected; TT = type of tear; LCL = lateral collateral ligament; MCL = medial collateral ligament; IFL = iliofemoral ligament; ILL = iliolumbar ligament; LF = ligamentum flavum; PLL = posterior longitudinal ligament; ALL = anterior longitudinal ligament. Joints: ankle (1), knee (2), hip (3), lumbar (4), cervical (5), shoulder (6), thoracic (7). Type of tear: A = subluxation (1 and 2); B = subluxation (1, 2, 3, 4, and 5); C = complete (1, 2, 3, 4, 5, and 6); D = complete (1, 2, 3, 4, 5, 6, and 7).

Figure 13 evaluates the relationship between the stress and strain of bone during the vertical fall from a 40 ft height. It shows the similarity of the stress-strain curve of bones in a joint undergoing a compressive force. It also reveals the fractures of the ankle bones when the yield stress is close to 120 MPa.

Figure 13. Stress-strain plot at 40 ft (stress [MPa] vs. strain [mm/mm] for a 40 ft vertical landing)

5. Conclusion and Future Studies

The results revealed that comminuted type fractures occur at a vertical fall of 20 ft or higher because at these heights the joints were subjected to high impact force or stress. As the height was increased in the simulations, the stress-strain plot reached the yield strength value of cortical bone. It starts showing failure when the strain is greater than 0.7 mm/mm. Also, greater heights resulted in partial to complete tears of the ligaments surrounding the cervical, thoracic, lumbar, shoulder, elbow, hip, knee, and ankle joints, causing severe joint fractures and requiring immediate medical attention.

References

[1] Gray, H., Gray's Anatomy, New York: Elsevier, 2005.

[2] Netter, F., Atlas of Human Anatomy, Teterboro, New Jersey: ICON Learning Systems, 2004.

[3] My Ankle Replacement.com, DePuy, 2007. <http://www.jointreplacement.com/DePuy/index.html>.

[4] Hawley, N., Free 3D Models, Free CAD Models, Solidworks-Dassault Systems, 2007. <http://3dcontentcentral.com/3DContentCentral/Search.aspx?arg=manikin>.

[5] Sawbones Third-Generation Simulated Cortical Bone, Material Web: Material Property Data, 11 Oct.-Nov. 2007.

[6] The Spinal Cord and Injury, The Cleveland Clinic Department of Patient Education and Health Information, 24 Nov. 2007.


Product and Process Improvement in Electronic Manufacturing

Angela P. Ansuj, Cassandra Rodrigues dos Santos
Federal University of Santa Maria
Santa Maria (RS), Brazil
angelaansuj@yahoo.com

R. Radharamanan
Mercer University
Macon, GA 31207
radharaman_r@mercer.edu

Abstract

At present, every manufacturing industry that is involved in making complex products must work towards constantly reducing product costs, shortening the time to place the product in the market, and continuously improving product quality. Today, the first acceptable design must be close to optimum and rapidly made with little or no need for quality-induced modifications. The concurrent engineering design process provides a stable, repeatable process by which increased accuracy is achieved in a shorter time with less variation. In this paper, the concurrent engineering design concepts, the tools that are used to achieve the concept of design for manufacturability, quality standards (ISO-9000), and quality function deployment (QFD) are applied for product and process improvement in an electronic manufacturing industry. The results are presented and discussed.

1. Introduction

Concurrent Engineering (CE) is defined as the earliest possible integration of the overall company's knowledge, resources, and experience in design, development, marketing, manufacturing, and sales into creating successful new products with high quality and low cost, while meeting customer expectations (Fig. 1). The most important result of applying CE is the shortening of the product concept, design, and development process from a serial to a parallel one. The general concepts, when applied in an integrated manner, are very powerful and can be applied in any type of organization [1]. The basic purpose of CE is to be more effective by means of cooperation among all departments involved in the creation of a product [2].

Figure 1. CE Model (market analysis, product design, production system design, manufacturing, and sales and distribution)

The CE design process [3, 4] in its simplest form is the integrated execution of basic principles, such as process management, design, manufacturability, and automated infrastructure support. Process management is probably the most important of the four principles, since it coordinates and facilitates the CE design process [5, 6]. The design phase is the execution of the design process for developing a product and its manufacture, and many different people involved in the project execute this phase. Manufacturability involves the interaction of the people involved in the actual production of the product with its designers, so that different parameters of the product can be reviewed and discussed and any problems quickly identified and solved.


Finally, automated infrastructure support provides all the processes that facilitate all the other processes, such as computers, data transfer and retrieval systems, and the like. A very basic and important ingredient of CE is Design for Manufacture (DFM). DFM is the practice of designing products with manufacturing in mind so that they can be designed in the least time and with the least cost. DFM also allows a smoother transition from the design of a product into its production, as well as minimizing the cost of assembling and testing the product. Quality and reliability are also affected by DFM in a positive way; therefore, the needs and satisfaction of the customers are met and the product automatically becomes more competitive in the market [7-9].

This paper provides an overview of concurrent engineering design concepts and quality tools. These tools are applied for product and process improvement in a small electronic manufacturing company located in the State of Rio Grande do Sul, Brazil, to reduce the rate at which defective parts are produced, increase the quality of manufactured products, increase productivity, minimize product cost, supply products on time to customers, provide customer satisfaction, and keep the company competitive in the marketplace.

2. CE/DFM Tools

A simple definition of DFM is the comprehension and optimization of the interactions between the different facets of complex manufacturing systems for effective quality, cost, and delivery, with the ultimate aims of producing products with better quality, lower cost, and a reduced time-to-market [2, 10]. The design process is the heart and foundation of DFM. It is an iterative, decision-making activity involving the use of scientific and technological information to produce a system, device, or process intended to meet specific needs [2, 7]. The DFM design process should immediately and accurately identify the problem at hand, namely, the production of a product within certain specifications, usually design constraints and criteria set by the engineering teams, marketing, and consumer surveys, for example. Once the problem has been identified, iteration takes place where different ideas are exchanged and methods tried in order to completely define and design the product. This part of the DFM design process is very important, since at this stage major decisions will be made regarding the production and marketing of the product. This iteration period can vary in length, and the length is usually subject to a deadline in order to meet an optimal time-to-market for the product. In addition, the iteration period allows the design to be continuously improved and optimized over time as better and more complete design information becomes available and all the teams involved in the design process communicate their discoveries and ideas. Comprehensive planning, research, and development reduce the amount of iteration and make any engineering change possible at a reduced cost in the event that a product is being revised and redesigned. As a result, the quality, cost, and delivery of the product are greatly improved thanks to early design decisions and increased communication among all members involved in the design, production, and marketing of the product [11, 12]. A typical product design cycle for manufacture is shown in Fig. 2.

Figure 2. Product design cycle (concept design, physical design, prototype test, release to manufacturing; manufacturability, maintainability, reliability, testability, quality)

2.1. DFM tools

There are two axioms that can be applied when using DFM tools to improve a product or process. This approach operates under the belief that fundamental principles (axioms) exist for any product to have a good design


and that the use of those principles will guide and evaluate design decisions that will ultimately lead to a good design. The axioms must be applicable to the full range of design decisions and to all stages, phases, and levels of the design process. Axiom 1: in good design, the independence of functional requirements is maintained. Axiom 2: among the designs that satisfy axiom 1, the best design is the one that has the minimum information content. In order to apply both axioms, the team must first identify the functional requirements and constraints to be used when designing the product. Once those requirements and constraints have been found, the axioms should be applied to each individual design decision to ensure the success of the project under the DFM guidelines [8, 13]. A condensed listing of the DFM guidelines is found in [13, 14].

2.2. Quality tools: ISO-9000 standards

Quality is a competitive advantage in the marketplace. This means that a company must have the same or better quality than its competitors. The quality of competitors, however, is improving continuously. This situation keeps a company improving quality continuously to remain a viable force in the marketplace. Companies can no longer tolerate adverse gaps in product quality between their products and those of their competitors. To prevent this, a viable program of continuous improvement must be developed and put into practice [15].

ISO-9000 standards exist principally to facilitate international trade. The driving forces that have resulted in widespread implementation of ISO-9000 standards can be summed up in one phrase: "the globalization of business". Expressions such as "post-industrial economy" and "the global village" reflect profound changes during recent decades. These changes have led to increased economic competition, increased customer expectations for quality, and increased demands upon organizations to meet more stringent requirements for the quality of their products [16].

The Vision 2000 report proposed four goals that relate to maintaining the ISO-9000 standards so that they continually meet the needs of the marketplace. These goals are universal acceptance, being adopted and used worldwide; current compatibility, combined use without conflicting requirements; forward compatibility, with successive revisions being accepted by users; and forward flexibility, using an architecture that allows new features to be incorporated readily [16, 17].

ISO-9000 certification occurs when a neutral and independent "registrar" uses one of the ISO-9000 standards to certify a supplier. This results in an official designation of the supplier as an "ISO-9001 or ISO-9002 or ISO-9003" certified supplier. A supplier registered in this manner is in the enviable position of being seen as a reliable worldwide supplier of quality products and services; its customers can reduce or eliminate inspection of purchased parts, thereby resulting in an efficient system of global trade [18].

The ISO quality standards do not refer directly to the products or services delivered, but rather to the production and administrative processes that produce them. They are generic enough to be applicable to any industry type. Specifically, the standards focus on the need for organizational structure, well-documented procedures, and management's commitment of resources to implement quality management [15]. The 20 clauses of the ISO-9001 standard, the most comprehensive of the three standards, are found in [16, 18].

2.3. Quality function deployment

Measurement and evaluation of product quality is not difficult, since one can establish conformance standards, inspect and test products, identify defect rates, correct errors, and impose performance levels. Quality function deployment (QFD) permits the assembly of a considerable amount of information in a concise form, in a small number of documents - the QFD diagrams. The graphical format is more effective in simplifying complex information [19, 20]. QFD aims at the customer. It is necessary to hear the "voice of the customer" during the development of the products and processes, identify and focus on the details,


and decide what is important. This tool simplifies a set of information through the use of graphs based on matrices. QFD is a system that translates the requirements of the customer, identifies necessary modifications before major expenses take place, reduces risks during the development phase, creates basic knowledge of the product, and introduces the "voice of the customer" into the development process [6, 20]. This method permits the collection of a large quantity of information in the form of graphs. Each customer differs in his or her expectations and necessities and, consequently, in the requirements. QFD identifies customers in the various stages of the development life cycle and employs techniques to compile their requirements. QFD demands a list of "WHATs" and "HOWs", where the "WHATs" are the necessities of the customers and the "HOWs" are the measurable requisites that can affect the understanding of the requisites of the customers. Quality function deployment – specifically, the house of quality – is an effective management tool in which customer expectations are used to drive the process under consideration [19, 20]. Some of the advantages and benefits of implementing QFD are: an orderly way of obtaining and presenting information; a shorter development cycle; a considerable reduction in costs; fewer process changes; a reduced chance of oversights during process execution; an environment of teamwork; consensus decisions; and preserving everything in writing. QFD implementation results in a satisfied customer [13, 19].

3. Electronic Manufacturing

A study has been conducted to implement some of the CE/DFM tools in a small electronic manufacturing industry located in the southern part of Brazil. The main products of this industry are sirens and alarms for cars. These products are distributed to car manufacturers and retail auto shops located in several states. The initial study with the management, design, and manufacturing teams indicated the following: 1. Types of problems with respect to the product: defective plastic parts from the supplier, visual external finish, defective raw material, defective circuit board, and improper use of the product. 2. Factors affecting the product cost: transportation, quality program, rework, idle time in the assembly line, marketing, taxes, and product warranty. 3. Problems related to the production process: burnt resistances and motors, obsolete machines and equipment, lack of preventive maintenance, and insufficient tools. 4. Complaints from the consumer: delay in delivery, high cost of the product, better products from the competitors, and defective products.

4. Results and Discussions

Over a period of three months, 625 units installed and sold through two car manufacturers have been monitored. Out of these 625 units installed, 248 units (39.7% of total units sold) were returned from the customers to the manufacturer for repair and rework. The units were inspected before they were sent for repair and/or rework. Table 1 shows the inspection results on the 248 returned units. It is observed from Table 1 and Fig. 3 that burnt resistance/motor is responsible for 41% of the repair and rework. About 44% of the repair/rework is due to defective plastic parts and raw materials.

Table 1. Inspection results
  Type of repair/rework        No. of units   Percentage
  1. Burnt resistance/motor    102            41.129
  2. Defective plastic part    67             27.016
  3. Defective raw material    42             16.935
  4. Improper use              21             8.468
  5. Visual external finish    16             6.452
  Total                        248            100.00
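As a quick illustration of how the inspection counts translate into the percentages and Pareto ordering discussed above, the short sketch below (not part of the original study) recomputes the figures in Table 1 from the raw counts.

    # Recompute the Table 1 percentages and cumulative (Pareto) shares from the
    # raw inspection counts reported in Table 1.
    counts = {
        "Burnt resistance/motor": 102,
        "Defective plastic part": 67,
        "Defective raw material": 42,
        "Improper use": 21,
        "Visual external finish": 16,
    }

    total = sum(counts.values())           # 248 returned units
    cumulative = 0.0
    for defect, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        share = 100.0 * n / total
        cumulative += share
        print(f"{defect:<26} {n:>4}  {share:6.3f}%  cumulative {cumulative:6.2f}%")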
The customer complaints received along with the returned 248 units were analyzed, and 344 responses were registered, as shown in Table 2. It is clear from Table 2 and Fig. 4 that defective units and low quality (68%) are the major causes for the return of the products from the customers for repair and rework.

Based on the preliminary study results, the following actions were taken to minimize the number of defective units reaching the customers, increase product quality, minimize the unit cost of the product, and deliver the products on time to the customers.


Figure 3. Inspection results (no. of units and percentage by type of defect)

Table 2. Customer complaints
  Customer complaint        No. of complaints   Percentage
  1. Defective units        154                 44.767
  2. Low quality            82                  23.837
  3. High cost              67                  19.477
  4. Delay in delivery      41                  11.919
  Total                     344                 100.00

Figure 4. Customer complaints (no. of complaints and percentage by type of complaint)

Defective Units: Some of the concurrent engineering tools have been adapted and applied to this industry. First, the DFM guidelines were used for redesigning the existing products. The assembly line has been redesigned, incorporating new tools and equipment. An intensive training program has been arranged for everyone on the shop floor on the new tools and equipment. The existing raw material and plastic part suppliers have been changed to ISO-9000 certified suppliers. The above changes significantly reduced repair and rework and increased productivity.

Product Quality: Final product testing procedures have been modified. Everyone in the enterprise, from the shop floor employees to top management personnel, has been trained in total quality management concepts. Product quality is improved through ISO-9002 certification.

Product Cost: The selling price per unit is reduced to that of the competitors in the market. This was made possible due to reduced repair and rework and increased productivity.

Delivery Time: Product delivery time is significantly reduced due to the incorporation of DFM guidelines and the increase in productivity.

Customer Satisfaction: Quality function deployment (QFD) concepts have been incorporated to meet the customer needs and requirements. This also provided continuous feedback from the customers.

Before the incorporation of the concurrent engineering concepts, almost 40% of the items manufactured and sold were sent back by the customers for repair and rework within three months of purchase. This has been significantly reduced after the incorporation of quality standards, design for assembly, and QFD concepts. The customer complaints have been significantly reduced, and the repair and rework dropped to less than 10%. At present, the company is selling its products in several states of Brazil and planning to export its products to neighboring Latin American countries.

5. Conclusions

In this paper, many concepts have been listed that, when properly applied, will lead a design team to an optimal design of any product or service. The guidelines and methods explained here have been proved to


be highly effective when applied in a small electronic manufacturing industry. Any industry that is not currently applying CE/DFM in its design and production phases is urged to do so. The benefits that can be obtained by correctly applying and implementing CE/DFM, QFD, and ISO-9000 are great when compared to other methods of design and production. From this study, it is very clear that only by the skillful integration of design, manufacturing, and marketing can an industry survive today in this highly competitive world, where cost, time-to-market, and good quality are imperative for survival.

Acknowledgment

This project was funded by FAPERGS (Process No.: 01/1218.7), Rio Grande do Sul, Brazil. The authors wish to thank FAPERGS for providing equipment, scholarship, and travel funds.

References

[1] Allen, W. C., Simultaneous Engineering: Manufacturing and Design, SME, Dearborn, MI, 1990.

[2] Anderson, D. M., Design for Manufacturability: Optimizing Cost, Quality, and Time-to-Market, CIM Press, Lafayette, CA, 1990.

[3] Susman, G. I., Integrating Design and Manufacturing for Competitive Advantage, Oxford University Press, New York, NY, 1992.

[4] Turtle, Q. C., Implementing Concurrent Project Management, Prentice Hall, Englewood Cliffs, NJ, 1994.

[5] Miller, L. C. G., Concurrent Engineering Design: Integrating the Best Practices for Process Improvement, SME, Dearborn, MI, 1993.

[6] Radharamanan, R., "Concurrent Engineering and Design for Manufacture", Keynote Speech, XIII Encontro Nacional de Engenharia de Produção, Florianópolis, Brazil, 1993.

[7] Corbett, J., et al., Design for Manufacture: Strategies, Principles and Techniques, Addison-Wesley Publishing Company, New York, NY, 1991.

[8] Helander, M., and M. Nagamachi, Design for Manufacturability: A Systems Approach to Concurrent Engineering and Ergonomics, Taylor & Francis, Washington, DC, 1992.

[9] Tanner, J. P., "Product Manufacturability", Journal of Automation, 1989.

[10] Stoll, H. W., "Design for Manufacture", Journal of Manufacturing Engineering, 1998.

[11] Boothroyd, G., Product Design for Manufacture and Assembly, Marcel Dekker, New York, NY, 1994.

[12] Norman, R., Concurrent Product and Process Development (CP/PD) - A Current Design Methodology: Making it Happen, International TechneGroup Incorporated, Revised Edition, 1990.

[13] Shina, S. G., Concurrent Engineering and Design for Manufacture of Electronics Products, Van Nostrand Reinhold, New York, NY, 1991.

[14] Nevins, J. L., and E. W. Daniel, Concurrent Design of Products and Processes: A Strategy for the Next Generation in Manufacturing, McGraw-Hill, New York, NY, 1989.

[15] Winchell, W., Continuous Quality Improvement: A Manufacturing Professional's Guide, SME, Dearborn, MI, 1991.

[16] Peach, R. W., The ISO 9000 Handbook, Richard D. Irwin, Chicago, IL, 1997.

[17] Juran, J. M., and A. B. Godfrey, Juran's Quality Handbook, Fifth Edition, McGraw-Hill, New York, NY, 1999.

[18] Lamprecht, J. L., ISO 9000: Preparing for Registration, ASQC Quality Press, Milwaukee, WI, 1992.

[19] Akao, Y., Quality Function Deployment: Integrating Customer Requirements into Product Design, Productivity Press, Cambridge, MA, 1990.

[20] Besterfield, D. H., et al., Total Quality Management, Second Edition, Prentice Hall, Englewood Cliffs, NJ, 1999.


Determining the Most Effective Midsole Material Thickness in


Preventing Calcaneal Fractures

Jonathan B. Ksor Ha Van Vo R. Radharamanan


Mercer University Mercer University Mercer University
Jonathn_jb@mercer.edu Vo_hv@mercer.edu Radharaman_r@mercer.edu

Abstract

The objective of this research is to use a patient-specific study to determine the optimal midsole thickness of polyurethane for shoes to reduce the probability of a calcaneal fracture when facing a sudden fall from a height of approximately six feet. The authors applied the governing energy equations to calculate the energy absorbed by the midsole material, and Solidworks software to model the midsole and analyze the stress and displacement, in order to determine the optimal midsole thickness for a six-foot, 200 lb white male. Results showed that a midsole thickness of about 30 mm provided the most energy absorption. Thus the optimal midsole thickness of polyurethane for this individual patient, who falls suddenly from a height of six feet, is in the range of 20 mm to 30 mm.

1. Introduction

Calcaneal fractures account for approximately one to two percent of all fractures and, in addition, the calcaneus is considered the most frequently fractured tarsal bone in the foot [1-3]. Calcaneal fractures are subdivided into two groups: intra-articular and extra-articular [2, 3]. Intra-articular fractures are the most common (70-80%) and are the hardest to treat, giving the poorest long-term results [2, 4]. On the other hand, extra-articular fractures are less common and have a more favorable outcome with a lower morbidity rate. Treatment options range from conservative reduction and casting to a more invasive open reduction and internal fixation [5].

Intra-articular fractures often lead to severe sequelae such as painful subtalar joint arthritis, for which fusion is usually the treatment of choice. This can lead to detrimental biomechanical changes resulting in damage to other joints such as the ankle and knee. A study done by Patricia Kolodziej and James Nunley [4] showed that patients who underwent subtalar joint arthrodesis as a treatment for painful arthritis showed improvement in pain scores (not complete resolution) but suffered in the areas of physical conditioning and functioning.

The calcaneus is the largest tarsal bone in the foot. It articulates superiorly with the talus and anteriorly with the cuboid. It is a major load-bearing surface during gait, and its muscular attachment to the posterior compartment muscles of the lower leg allows the foot to forcefully plantarflex. A common cause of calcaneal fractures is falls from a height of at least 6 feet [5]. Approximately 10 percent of these injuries are associated with lumbar spine fractures [2, 5]. Therefore, the severity of the injury is often unpredictable, and precautions should be taken to prevent this type of injury from happening. This can be achieved through shoe gear, which is an important factor in providing the necessary protection from all foot injuries. All shoes are made up of four basic components: the outsole, midsole, insole, and upper. The outsole is the outermost portion of the shoe, which functions to provide traction. The insole lies in direct contact with the foot and functions to provide support for the foot. Lastly, the midsole is situated between the outsole and insole and functions to provide shock


absorption during gait. This is considered the most important part of the shoe for reducing impact-type injuries to the foot, more specifically calcaneal fractures. One of the main materials used for the midsole is polyurethane, and the average midsole thickness is about 20 mm [6]. Controlling the midsole thickness will influence the amount of energy absorbed by the shoe during walking, running, or jumping. Therefore, it was the objective of this study to determine the optimal range of midsole thickness for a specific patient that will best reduce the probability of a calcaneal fracture when falling from a height. This range will provide a reference for fitting or building shoes for individuals working in risk environments.

2. Materials and Methods

A six-foot, 200 lb white male presented with a painful, swollen left foot secondary to falling from a height of approximately six feet while painting the exterior of a residential home. X-rays and Computerized Tomography (CT) scans were taken and revealed an intra-articular calcaneal fracture with unremarkable lumbar radiographs, as shown in Figures 1 and 2.

Simpleware software was used to convert the two-dimensional (2D) CT scan of the left foot and ankle to a 3D model to analyze the stress distribution in the calcaneus, as shown in Figure 3. Multiple fractures of the calcaneus of the left foot (red is the calcaneus, yellow is the fibula, dark blue is the tibia, and light blue is the talus) are noted.

Therefore, in order to determine the optimal midsole thickness for this patient, three governing energy equations were used:

  E_H = PE + translational KE + rotational KE
  E_H = mgh + (1/2)mv^2 + (1/2)Iω^2          (1)

  E_M = F * d                                 (2)

  E% = ((E_M - E_o) / E_o) * 100              (3)

The first equation represents the energy absorbed directly by the heel, E_H, when the patient fell to the ground. This is important because this is the amount of energy, above the yield stress of bone, needed to fracture the calcaneus of this specific patient. Yield stress is a material property that quantifies the highest point before the material goes from the elastic region into the plastic region, as shown in Figure 4 [2, 8, 9].

Figure 1. Calcaneal fracture: A - left rear foot (medial view X-ray); B - right foot (calcaneal axial view X-ray)


Figure 2. CT scan of left foot (A. transverse view; B. sagittal view)

The yield stress of bone is approximately 120 MPa [7]. This represents the amount of stress that is exerted on the bone as the patient ambulates. Therefore, the total amount of energy needed to fracture this patient's bone is equal to (yield stress energy) + E_H.

Figure 3. 3D model of the ankle complex

Figure 4. Stress-strain curve of the bone

The second equation measures the amount of energy, E_M, absorbed by the material polyurethane. F is the force and d is the displacement of the material. The displacement was measured using the Solidworks software, which was used to simulate the displacement of the material when placed in the shoe of this particular patient, as seen in Figure 5. The block above was given the material properties of bone. The block below was given the material properties of polyurethane. Previous studies have shown that the reaction force experienced by the lower leg when jumping is approximately 25 times the individual's body weight [7]. Therefore, the reaction force applied to the bottom material was 22,240 N. This simulation was done for four different polyurethane thicknesses: 15 mm, 20 mm, 30 mm, and 40 mm. Five samples of each thickness were used in the simulation to determine the validity of the simulation results by comparing them


with what previous studies have shown. For instance, polyurethane that is 7 mm thick can absorb approximately 83 J of energy. Therefore, for a material that is 20 mm thick, the energy that can be absorbed at impact, E_o, is 273 J. This was considered the optimal thickness, since the average midsole thickness for shoes is about 20 mm.

Figure 5. The model for the Solidworks simulation

The third equation calculates the percentage of extra energy absorbed by different material thicknesses relative to the energy absorbed by the optimal thickness of 20 mm.

3. Results

The total amount of energy absorbed by the heel was calculated to be 1812.97 J. According to the simulation, the amount of energy absorbed for the midsole thickness of 20 mm was approximately 266.88 J. This result is about the same as that calculated previously (273 J) using the energy absorption index reported in previous studies. This result therefore helps to validate the simulation results for the other material thicknesses. Table 1 shows the remaining results of the simulations and calculations of the displacement, the energy absorbed by the different material thicknesses, E_M, and the energy percent difference from optimal, E%. Figures 6-9 show the results of the Solidworks simulations. The results obtained from these simulations will be used for predicting the range of material thickness. Clinical studies are needed to confirm the optimal results.

4. Discussions

In this study, the results show that the thickness of 30 mm had the best E_M and E% values. It had the only positive E% value, whereas the values for 15 mm and 40 mm were negative. This means that this thickness had the best absorption result compared to the others, even when compared to the optimal thickness of 20 mm. It can also be seen that the 40 mm thickness absorbed less than the optimal thickness because at this thickness the material started to develop a bending moment, which can be seen in Fig. 9.

Table 1. The results of displacement, material energy absorption (E_M), and percentage energy (E%)
  Material thickness   Displacement   E_M (J)    E%
  15 mm                 9 mm          201.06    -26.35
  20 mm                12 mm          266.88     -2.24
  30 mm                18 mm          401.12     46.93
  40 mm                 8 mm          177.92    -34.83
remaining results of the simulations and
calculations of displacement, energy
absorbed by different material thicknesses,
EM and the energy percent difference from
optimal, E%. Figures 6-9 show the results
of the Solidworks simulations. The
results obtained from these simulations
will be used for predicting the range of
material thickness. Clinical studies are
needed to confirm the optimal results.
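As a cross-check (not part of the original analysis), the EM and E% values in Table 1 can be re-derived directly from Eq. (2) and the stated 22,240 N reaction force, with E% taken as the percentage difference from the 20mm reference value Eo = 273 J. The short Python sketch below uses the displacements reported in Table 1; it reproduces the tabulated values exactly for 20mm and 40mm and to within rounding of the reported displacements for the other two. Variable names are ours, not the authors'.

    # Illustrative re-derivation of Table 1 (not the authors' code).
    # EM = F * d (Eq. 2), with the 22,240 N reaction force (25 x body weight of a
    # 200 lb, ~890 N patient); E% is the percentage difference from Eo = 273 J,
    # the energy absorbed by the 20 mm reference thickness.
    reaction_force_n = 22240
    e_optimal_j = 273.0

    # Simulated midsole displacements from Table 1, in metres.
    displacements_m = {"15mm": 0.009, "20mm": 0.012, "30mm": 0.018, "40mm": 0.008}

    for thickness, d in displacements_m.items():
        e_m = reaction_force_n * d                        # EM = F * d
        e_pct = (e_m - e_optimal_j) / e_optimal_j * 100   # percent difference from the 20 mm value
        print(f"{thickness}: EM = {e_m:.2f} J, E% = {e_pct:.2f}%")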


Figure 7. Displacement results of 20mm thickness polyurethane material

Therefore, the thickness of the material has its limits. For this patient, a midsole thickness of 30mm would have been beneficial. It would have prevented, or at least decreased, the injuries incurred to the calcaneus; for example, the added thickness could have kept the fracture from extending into the joint (extra-articular).

Figure 8. Displacement results of 30mm thickness polyurethane material

Figure 9. Displacement results of 40mm thickness polyurethane material

5. Conclusions

It was the objective of this study to determine the optimal thickness of midsole for this patient to prevent or limit the damage to the heel incurred from a fall. The optimal range was determined to be between 20mm and 30mm thick; therefore, a patient of his size would benefit from these results. By adding the necessary amount of material, the long-term effects related to calcaneal fractures could be prevented. Future studies would include a large study group to compile a chart that would cover patients of different sizes. This chart would be used to custom-make shoes for patients working in risk environments, such as construction workers, carpenters, and painters. Other additional studies would include the effects of different materials or different geometric arrangements. Future studies might also involve the use of different materials with different thicknesses for landing tests on pressure plates to obtain realistic data.

References

[1] Bridgman, S. A., Intervention for treating calcaneal fractures, The Cochrane Collaboration, 2007.

[2] www.emedicine.com.

[3] Benirschke, S. K., and P. A. Kramer, "Fractures of Os Calcis", Clinical Orthopaedics, 1999.

[4] Kolodziej, P., and J. A. Nunley, "Outcome of subtalar joint arthrodesis after calcaneal fractures", J. South Orthop. Association, 10(3), pp. 129-139, 2001.

[5] Banks, A., M. Downey, D. Martin, and S. Miller, McGlamery's Comprehensive Textbook of Foot and Ankle Surgery, Philadelphia, PA, 2001.

[6] www.sneakerhead.com.

[7] Nigg, B. M., and W. Herzog, Biomechanics of the Musculo-skeletal System, Second Edition, West Sussex, England, 1999.

[8] http://en.wikipedia.org/wiki/Image:Stress_v_strain_A36_2.png.

[9] Ratner, B. D., A. S. Hoffman, F. J. Schoen, and J. E. Lemons, Biomaterial Science: An Introduction to Materials in Medicine, Elsevier Inc., 2004.


Developing a Database that Executes Comprehensive, Comparative Analyses


Among Undergraduate Industrial Engineering Programs Nationwide
Federica Robinson, Mario Marin, Serge Sala-Diakanda,
Jose Sepulveda, Luis Rabelo, Kent Williams
University of Central Florida

Abstract

Institutions are continuously altering their curriculum to adapt to changes in academia and industry requirements. Therefore, to improve the undergraduate Industrial Engineering (IE) program at the University of Central Florida, comparative analyses among IE programs at other U.S. institutions were necessary. To simplify the process and increase the usability of its information, a database was created incorporating each school's undergraduate IE program capacity, its curriculum, and other significant historical and statistical facts.

To facilitate the capture of this information, several research efforts were initiated, beginning with a determination of the principal schools to be analyzed and the development of course categories with which to record each curriculum. Complementary charts and tables were then created to organize the wealth of data found. These findings were entered into a relational database, resulting in the ability to generate queries and reports based on user-defined criteria.

This paper describes how this initiative was approached and thoroughly explains the database's design and structure, the means of extending its contents, and the general analysis tasks available. With more than 40 schools entered, the database is in no way complete, partly due to the extensive number of schools that exist and the frequency of their curriculum changes. It is projected that, as a result of this paper and other initiatives, the database will become available via the World Wide Web, enabling schools to continuously input and update their information and conduct analyses concentrated on their purposes.

1. Introduction

Industry and academia are constantly changing the requirements of prospective Industrial Engineers (IEs) to adapt to changes and developments in society. Therefore, institutions must incorporate these changes into their curriculum to ensure students' preparation and suitability for the 'IE' title. In an attempt to ease the task of performing comparative analyses among different institutions, a database incorporating curriculum requirements and other identifying characteristics was created. This database can be used to weigh the importance of specific courses or course categories at a particular university or group of universities, while also denoting whether a course covers an emerging topic (a previous study identified the 18 most desirable IE topics in which new graduates should be knowledgeable; these topics have been termed 'emerging topics').

2. Approach

To enable comprehensive analyses among undergraduate IE programs, it was essential to determine the schools to analyze and the extent of the data sought. Based on the 2006 U.S. News & World Report on America's Best Colleges, it was determined that the scope would include not only public institutions but private schools as well. The schools would be further classified as those offering doctoral programs and those that do not.

Aside from gathering general descriptors about each school, i.e., department size and historical facts, spreadsheets were created to help organize the remaining information deemed necessary to accomplish this initiative. First, twelve categories were designed to thoroughly evaluate each program while maintaining emphasis on IE-specific courses. Eight of the categories are dedicated solely to industrial engineering to allow for a thorough analysis of programs in terms of the type of IE courses being taught and the emphasis each program may be placing on particular IE fields (see Table 1). Among those, the senior design course present in most undergraduate IE curriculums was considered independently because it usually requires students to utilize an array of IE skills to address real-world problems.

Table 1. Dedicated IE Categories

  Human Factors Engineering and Ergonomics
  Manufacturing (Systems) Engineering
  Operations Research & Systems Engineering
  Production & Management Systems Engineering
  Quality Management
  Engineering Management
  Simulation & Modeling
  Senior Design

Electives were then divided into three categories to separate IE electives from Engineering & Science electives and other non-technical electives. The remaining category, General Education, was designed to account for the mandatory, yet non-IE, courses in the curriculum.

3. Data Gathering

With great consideration given to the wealth of information available, three spreadsheets were created to filter and organize this data. The IE curriculum table (Table 2), the general education table (Table 3), and a course topic worksheet were used in the evaluation of each curriculum. Each team member used best judgment and consensus to determine the category in which courses were to be placed, but after attempting to categorize several schools, exceptions and conflicts arose requiring additional consideration: 1) schools operating on a track basis, which require students to select a specialization track early in the program, required the creation of a separate curriculum for each track; 2) institutions operating on a quarterly system were denoted because these programs tend to require a higher number of credit hours, which could potentially skew the results of some analyses; and 3) whether a school is public or private was distinguished to account for possible differences due to accreditation requirements and to political and departmental organizational structure.

4. Database Design

Following the structured compilation of the necessary information, a relational database was created incorporating all findings. Using Microsoft Access 2003, a common field (schoolID) was implemented as the link among all tables, queries, forms, and reports. This field is automatically generated for each school added to the main informational table. Other relationships are depicted in Figure 1 below.

Figure 1. Database Relationships
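To make the schoolID linkage concrete, the sketch below shows a minimal relational layout of the same idea using Python's standard sqlite3 module. It is only an illustration: the actual database was built in Microsoft Access 2003, and all table and column names other than schoolID are assumptions made for the example.

    import sqlite3

    # Minimal sketch of the schoolID-linked layout described above (illustrative only;
    # the authors' database was built in Microsoft Access 2003, and all names other
    # than schoolID are assumed).
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.executescript("""
    CREATE TABLE schools (
        schoolID    INTEGER PRIMARY KEY AUTOINCREMENT,  -- generated per school added
        name        TEXT,
        is_private  INTEGER,                            -- 0 = public, 1 = private
        total_hours INTEGER
    );
    CREATE TABLE courses (
        courseID     INTEGER PRIMARY KEY AUTOINCREMENT,
        schoolID     INTEGER REFERENCES schools(schoolID),
        category     TEXT,                              -- e.g. 'Simulation & Modeling'
        title        TEXT,
        credit_hours INTEGER
    );
    """)
    cur.execute("INSERT INTO schools (name, is_private, total_hours) VALUES (?, ?, ?)",
                ("Example University", 0, 128))
    cur.execute("INSERT INTO courses (schoolID, category, title, credit_hours) VALUES (?, ?, ?, ?)",
                (1, "Simulation & Modeling", "Discrete Event Simulation", 3))

    # Credit hours per dedicated IE category for each school, joined through schoolID.
    cur.execute("""SELECT s.name, c.category, SUM(c.credit_hours)
                   FROM schools s JOIN courses c ON c.schoolID = s.schoolID
                   GROUP BY s.name, c.category""")
    print(cur.fetchall())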


5. Database Employment

Upon entering the database, the main interface appears, displaying four buttons attributed to editing and four buttons for conducting analyses and searches (Figure 2). Editing tasks allow for the addition and/or modification of general school information, courses and topics, and course categories as needed. Meanwhile, Analysis Tasks allow for the evaluation of courses by category, specific course details, or topic. This section provides informational reports on schools, topics, and categories based on a combination of criteria defined by the user. The results are generated based on queries from combinations of reports and forms, and allow for a clear and simple representation of the information sought.

5.1 Editing the Database

Figure 2. Main Database Interface

As information changes, the database provides several means to reflect these changes. This can be accomplished by using the editing task buttons (IE Programs, Course Categories, Courses, and Course Topics) or by adding or deleting information directly in the basic tables.

Within the IE Programs tab, any general information about a university (i.e., the number of credit hours in the categories represented, department size, etc.) is available for editing. If the addition of a university is desired, or multiple curriculums need to be added to a school, this may be accomplished as well. Links to the tables used in the data-gathering process are embedded within this interface, as are working links to each IE department's website. These features serve a number of purposes, including the ease of verifying existing information.

Course Categories allows the viewing of the existing course categories and the addition of new categories, while the Courses interface requires the selection of both a university and a catalog year to yield a list of courses and topics that can be edited as desired.

Course Topics allows for manual scrolling or the use of the alphabetical search tabs to view existing topics. Additional topics can be added, and whether they are emerging topics can be selected.

5.2 Conducting Analysis

To conduct comparative analyses or simple searches, several queries have been embedded in the database within the Analysis Tasks options: Quick Search, Course Category Analysis, Course Analysis, and Topic Analysis. Each of these buttons generates results when complemented with a user-specified criterion.

• Quick Search allows for the viewing of general information relative to a specific institution. It produces reports revealing the university's name, its program name(s), web address, and geographical location. It also displays the school's ranking(s), department size, an overview of credit requirements, and other general details.

• Course Category Analysis permits the input of partial course titles in order to return a list of courses taught at each university related to that title. There are many factors available to limit the output, and those are user-defined.

• Course Analysis allows for analysis based on a minimal number of letters of the course name, to account for variation among course titles. The type of university can be selected, and other constraints can be set (i.e., whether the user desires institutions that teach, or do not teach, a particular course). A sketch of such a query appears after this list.

• Topic Analysis allows for the comparison of topics, whether emerging or not, based on the input of any letters describing the desired IE concept.
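As referenced above, a minimal sketch of how a Course Analysis-style search could be expressed is shown below, continuing the illustrative SQLite layout from Section 4. The actual system uses Access queries, forms, and reports; the function and parameter names here are assumptions for the example.

    # Illustrative Course Analysis-style search (not the actual Access query):
    # match on a partial course name, with an optional public/private constraint.
    def course_analysis(cur, partial_title, private_only=None):
        sql = ("SELECT s.name, c.title "
               "FROM courses c JOIN schools s ON s.schoolID = c.schoolID "
               "WHERE c.title LIKE ?")
        params = [f"%{partial_title}%"]
        if private_only is not None:            # user-defined constraint on school type
            sql += " AND s.is_private = ?"
            params.append(1 if private_only else 0)
        cur.execute(sql, params)
        return cur.fetchall()

    # e.g. course_analysis(cur, "simulat") matches courses such as
    # 'Discrete Event Simulation', regardless of the full course title.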

6. Summary
This paper clearly demonstrates how this
initiative was approached, research methods
executed, the creation of the category-based
database and means by which the database can
be updated or edited. After the anticipated
release of the database via the World Wide Web,
institutions will be able to continuously update
the database’s content to ensure the accuracy
and availability of their school’s data and be
permitted to freely use its contents to perform a
wide range of comparative analysis suitable for a
number of purposes.

7. Acknowledgements

This study is a part of the NSF funded


research project entitled “ReEngineering the
Undergraduate Industrial Engineering Program.”

[1] America's Best Colleges 2006, US News and World Report, August 2006 Edition.

[2] Ana Ferreras, Lesia Crumpton-Young, Luis Rabelo, Kent Williams, and Sandra Furterer, "Developing a Curriculum that Teaches Engineering Leadership & Management Principles to High Performing Students," ASEE/IEEE Frontiers in Education Conference, October 2006.

[3] Hamidreza Eskandari, Serge Sala-Diakanda, Sandra Furterer, Luis Rabelo, Lesia Crumpton-Young, and Kent Williams (2007), "Enhancing the undergraduate industrial engineering curriculum: Defining desired characteristics and emerging topics," Education + Training, Vol. 49, No. 1.

[4] Industrial Engineering Programs Accredited by the Accreditation Board for Engineering and Technology, Inc. Retrieved October 2006, from http://www.abet.org.

[5] Institute of Industrial Engineers, IERC Conference, "Reengineering the IE Curriculum, a Departmental Reform Strategy," May 2004.


Table 2. Industrial Engineering Curriculum Spreadsheet

Table 3. General Education Spreadsheet


Applying Six Sigma to Service Organizations: A Case for the Florida


Department of Children and Families
Alicia Combs, Mihaela Petrova-Bolli, Eric Tucker
University of Central Florida
er319078@pegasus.cc.ucf.edu

Dana Johnston
Department of Children and Families, State of Florida

Abstract apply the best practices of industry into the


This project used the Six Sigma methodology service sector that the Six Sigma approach was
to evaluate the current Tier 3 Review Process for applied to DCF Tier 3 Review Process.
the Department of Children and Families (DCF), While organizations adapt to improve
propose recommendations for improvements and themselves, the academic community is no
provide control mechanisms to be implemented different. The goal has always been to provide
over the next six months. The project was the best educational opportunities for students.
accomplished by a team that included University In the past, opportunities have come in three
of Central Florida (UCF) students, DCF forms. The first is to artificially create projects,
program managers, and three mentors well the second is to provide research or research
versed in the Six Sigma methodology. The with industry, and the third approach is to use
purpose of this article is to provide an overview students as consultants [3]. The latter has been
of the significant events in relationship to utilized by business schools supporting local
applying the Six Sigma process to this service businesses by developing plans for
organization. Specifically, this article will improvement. What is occurring in the Industrial
expound on the lessons learned, during the team Engineering and Management Systems (IEMS)
formation phase and each phase of the DMAIC Department at the University of Central Florida
(Define, Measure, Analyze, Improve, and is an adaptation of this last approach. Instead of
Control) process. working only as consultants, UCF students are
This article will be useful to organizations closely integrated in an organizations
seeking to implement the Six Sigma improvement effort. The close integration of
methodology as well as organizations that are UCF students and service entities in this project
seeking to collaborate with academic were formed in conjunction with the ASQ
institutions. Community Good Works Program [4]. The goal
of this program is to use quality practices to
1. Introduction improve the community. This paper will provide
the necessary background to understand the
In today’s world, private businesses are being DCF Tier 3 Review Process as well as the
asked to be more effective with fewer resources. applied Six Sigma methodology. It will also
The public sector is no different. It is with the expound on the key lessons learned in the areas
mindset to do more with less in a more effective of team formation and DMAIC process
manner that organizations have looked for ways implementation. The results of the project and a
to streamline their processes, improve brief status update will be provided to conclude
efficiencies, and improve quality. While the the discussion.
concepts of Lean and Six Sigma are not new to
manufacturing industries, they are perhaps a bit
more nascent to organizations in the service
industries [1][2]. It is with a keen emphasis to


2. Background and UCF IEMS students enrolled in a total


quality improvement course. The focus of the
In order to clearly understand the direction of effort was to utilize the Lean and Six Sigma
this study, it is necessary to understand the methodologies in order to propose multiple
background of two components. The first courses of action to improve the Tier 3 Review
component is Florida DCF and the second Process.
component is the quality methodology that is The Lean approach uses many different tools
applied. in order to eliminate waste from the
organization. The Six Sigma approach utilizes
2.1 Florida DCF the DMAIC process in a structured and
systematic manner in order to understand the
The purpose of the Florida DCF is to serve as process with the eventual goal of suggesting
an agency that promotes the betterment of the improvements and control measures [9].
community by protecting the rights of While many of the applications of Lean and
individuals and promoting the economic well Six Sigma have been industrial settings, it is
being for individuals and families [5]. Florida clear that this is not the only realm in which
DCF’s vision is to become the premier social these tools can be used [10]. Several research
protection and promotion agency in the country. endeavors have shown the potential of using
It is with this mentality that Florida DCF has quality approaches in nonindustrial settings. One
implemented a program called “Quality Quest” such example is the US Air Force’s Smart
[6]. This program has moved DCF forward in Operations 21 (AFSO 21) initiative [11]. This
the effort to improve quality of services across initiative was started in March 2006 in an effort
all of its programs. to improve the quality of Air Force processes.
One of these implementations is the According to all reports to date, the Air Force’s
Automated Community Connection to efforts have been commendable and are a great
Economic Self Sufficiency (ACCESS) system. illustration of the improvements that can be
The ACCESS system provides a streamlined and accomplished by organizations that focus of
technological advance interface to apply for utilizing Lean and Six Sigma methodologies to
public assistance and to make eligibility improve processes [12].
determinations [7]. In September 2007, the DCF
ACCESS system was awarded the Innovations 3. Process (Lessons Learned)
in American Government Award by the Ash
Institute for Democratic Governance and While the background information on Lean
Innovation at Harvard University's John F. and Six Sigma could fill volumes of journals and
Kennedy School of Government [8]. The receipt books, the focus of this paper is to explain the
of this award is a testament to the focus Florida critical lessons learned in the team formation
DCF has placed on improving its quality and process and the implementation of the DMAIC
efficiencies of service. methodology.
This mentality has become a pervasive part
of the DCF culture and this Lean Six Sigma 3.1. Team Formation
initiative is a natural extension of this
environment. Florida DCF has also partnered The building of the combined DCF and UCF
with the American Society for Quality (ASQ) on team was a remarkable endeavor. It was
several initiatives as part of the Community interesting because many individuals were asked
“Good Works” program. to subvert individual goals during the term of the
study in order to accomplish the study. While
2.2 Lean Six Sigma the setting aside of individual goals is not
unusual to team building, it is unique when a
In conjunction with the “Good Works” great portion of the team does not belong to the
program, a Lean Six Sigma effort was initiated main organization. This circumstance may have
by a team of DCF Circuit 18 program managers been problematic, but the position of the UCF


students was clearly addressed by the course instructor and supported by the DCF project champion.

The summary of team findings showed that the combined team was a strength; however, there were some noted areas for improvement. One strength of the combined approach was that the UCF students had access to DCF and Six Sigma Black Belt experts. Another strength was the diverse set of skills brought to the table by the UCF students.

While there were several strengths, as previously noted, there were some key areas for improvement. The three most critical areas were the following: 1) understanding of (and willingness to learn) the quality improvement methodology, 2) developing and executing a clear set of communication expectations and requirements, and 3) balancing the project with other work and class requirements. Most of these areas for improvement centered on the student experiences, but they were also felt to a lesser degree by the DCF team members.

One area of special note was the management of the team when expectations were not met. For a three-week period, many team members missed several deadlines for deliverables. As a result, the team leadership revisited the team charter and clarified the expectations and the documentation that would result if expectations were not met without proper communication. After this clarification meeting, the majority of the team performed very well, with some minor exceptions. Unfortunately, during the course of the study, a team member was reassigned to a different study for not meeting expectations. While this was an undesirable event, it was a tremendous lesson to the students that this was more than a purely academic effort. In essence, the students were hired as integral quality improvement experts with an expectation of performance.

3.2 DMAIC Process

While there were numerous lessons learned in the team-building aspect of the study, there were also some critical lessons learned from the application of the Six Sigma DMAIC process.

The key findings of the define phase took two forms. The first was driven by the perception that the project was already defined. This was true in a general context, but not based on the Six Sigma methodology. It was only through strong leadership that the team was able to come together and understand what was really needed in the define phase. The second finding was the level of detail that was needed to perform Six Sigma. The define phase foreshadowed the depth and breadth of the work that was to come.

The measure phase was of great interest to the team. It was the first opportunity that many of the UCF team members had to understand the DCF Tier 3 Review Process and to view and collect data. There were many insights gleaned from this phase, but three were critical. The first observation was that the team needed to become intimate with the process as soon as possible. In order to meet this task, a portion of the team interviewed a focus group of DCF employees and conducted a 16-question survey. The second insight was how critical it was to understand the data. In this study, the data supplied by DCF consisted of 37 categories, and the amount of data supplied was initially overwhelming. However, as the team better understood the process, its understanding of the collected data also increased. This insight will be further addressed in the analyze phase. The final insight from this phase was the calculation of the process sigma. The process sigma is normally measured in terms of defects per million opportunities (DPMO); however, the team learned early in the data collection process that there was a difference in the way quality was measured by the state and federal inspectors versus the local operating agency. The local agency had a subset of 13 critical categories, whereas the state only looked to see if the reports reviewed were correct or incorrect. The reduction in the fidelity of the measured data drove the team to evaluate the process sigma using attribute data. The use of attribute data is what necessitated a measurement based on defects per million (DPM) instead of DPMO. The use of attribute data also required the team to use several different procedures during the analyze phase, specifically in the creation of the process control diagrams.
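For illustration only, the attribute-data measurement described above amounts to a defects-per-million figure, which can then be converted to a sigma level using the conventional 1.5-sigma shift. The minimal Python sketch below uses placeholder review counts, not actual DCF Tier 3 data, and the function names are ours.

    # Minimal sketch of a defects-per-million (DPM) calculation for attribute data,
    # where each reviewed report is simply correct or incorrect. The counts are
    # placeholders, not actual DCF Tier 3 review data.
    from statistics import NormalDist

    def dpm(incorrect_reports, total_reports):
        return incorrect_reports / total_reports * 1_000_000

    def sigma_level(dpm_value, shift=1.5):
        """Convert long-term DPM to a short-term sigma level using the
        conventional 1.5-sigma shift."""
        return NormalDist().inv_cdf(1 - dpm_value / 1_000_000) + shift

    d = dpm(incorrect_reports=37, total_reports=400)   # placeholder attribute counts
    print(f"DPM = {d:,.0f}, process sigma = {sigma_level(d):.2f}")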


The analyze phase allowed the team to learn two main lessons. The first was in the area of Pareto analyses. After creating an initial set of Pareto diagrams, the team was unable to see a traditional 80-20 break point. Based on this observation, the team decided to prioritize the data categories into three categories. The three categories were weighted and the Pareto charts were replotted. The result was that actual breaks in the data could be determined. This enabled the team to evaluate the data in a systematic manner and provide useful information for determining improvements. The second lesson learned in the analyze phase was that team observations are critical to understanding a process before forming options to improve it. It was noted by several team members that they could better understand the nature of the process by seeing it in operation. While this finding is quite logical, it was not initially a priority for the team, but once accomplished it made the linkages between the data and potential solutions more salient.

The improve phase had one major finding: it showed the team that the previous phases of the DMAIC process generated information that was very useful for proposing improvement ideas. The key result of this work was a matrix of 16 improvement ideas (reduced from an initial list of 60) that were prioritized based on the potential benefits (high or low) and the resources needed to accomplish the improvement (high or low). From the list of improvements, there were three that could be immediately implemented into the Tier 3 Review Process; the remaining improvements needed additional approval and resources from DCF. In addition to the list of improvements, the team was able to combine the information gained throughout the process and develop a comprehensive set of performance measures that would be useful to the organization. The key lesson is that following the methodology will generate information that is useful for improvement.

After the improve phase, the combined team was concluded and the recommendations for improvement were delivered by the DCF team members to the leadership of DCF. The control measures of this process will depend greatly on what improvements are implemented. Some of the improvement measures will result in modifications to training programs, while others will increase the visibility of the performance measures to the organization.

4. Results

The results of this study should be looked at from several vantage points.

The first vantage point is from the perspective of the DCF team members. The team members were able to pair closely with the students from UCF. This arrangement may have initially appeared to be convoluted; however, it truly turned out to be an exceptional relationship. DCF was able to serve as experts on their processes, while harnessing the knowledge, energy, and enthusiasm of doctoral and master's level students supported closely by local-area quality experts and mentors.

The second vantage point is that of the UCF students and faculty. The UCF students were able to have a truly experiential learning experience while implementing the Lean and Six Sigma framework. The UCF team members learned that they could quickly assimilate the knowledge of an organization that was initially very different from any of their previous experiences. The UCF students also learned quickly that the process would not move forward unless they put in the work to move it forward and generate the required products. For their efforts, the UCF students were certified as Six Sigma Green Belts and Black Belts, depending on their experience level.

The third vantage point is that of the results produced by the DMAIC process. It can be clearly shown that a systematic process led to substantial improvement suggestions. The processes have been used successfully in the past, so the results of the suggestions, once implemented, should be no different.

As of the writing of this paper, DCF Circuit 18 has implemented four of the sixteen recommendations provided by the team. The results of the four suggestions are at various stages of execution.

The recommendations that have been implemented have increased the number of cases under review, added and improved monthly and quarterly training, and established written guidance in the form of a checklist for reviewers to follow. The short-term effects of these


improvement efforts have been to increase the 8. References


Tier 3 accuracy by 27 percent from Oct 07 –
Dec 07. The short-term increase is noteworthy [1] S. Furterer and A. K. Elshennawy,
and foreshadows future improvements. "Implementation of TQM and lean Six Sigma tools in
local government: a framework and a case study,"
5. Conclusions Total Quality Management & Business Excellence,
Routledge,12,2005. pp. 1179-1191.
The approach that was taken in this study
[2] C. Yang, "Establishment of a Quality-
was very effective. The opportunity for UCF
students to partner with an organization truly Management System for Service Industries," Total
interested in quality was exceptional. By Quality Management & Business Excellence,
partnering with UCF, DCF was able to create a Routledge,11,2006. pp. 1129-1154.
synergistic relationship that harnessed the power
and best practices of several organizations. For [3] P. L. Burr and G. T. Solomon, "The SBI Program:
the UCF students, the learning opportunity was Four Years Down the Academic Road," J. Small Bus.
outstanding and the lessons learned will not be Manage., Blackwell Publishing Limited,04,1977. pp.
forgotten. In the future, organizations should be 1-8.
encouraged to partner with universities for more
than research. In this instance, multifaceted
[4] American Society for Quality, Good Works
expertise was focused on helping the partner
Program Retrieved 03/05,2008, from
organization. This effort could result in a
solution that is unattainable with intra http://www.asq.org/about-asq/what-we-
organizational resources. From a quality do/goodworks.html
perspective, it is clear that the Lean Six Sigma
methodology works. At DCF, improvement [5] Florida Department of Children and Families,
ideas that were hidden became visible and DCF Mission, Vision and Values Retrieved
provided DCF with a great opportunity to thrive 03/05,2008, from
in the future. http://www.dcf.state.fl.us/transition/mission.shtml

6. Acknowledgements [6] Florida Department of Children and Families,


Quality Quest Retrieved 03/05,2008, from
The authors of this paper would like to thank
http://www.dcf.state.fl.us/transition/docs/accomplish
the following individuals who were part of the
_qualityquest.pdf
combined UCF and DCF teams and performed
admirably throughout the project: Jorge
Martinez, Jackie Robles, Christopher Lee, and [7] Florida Department of Children and Families,
Josue Utreras. Additional thanks go out to Dr. ACCESS Florida Retrieved 03/05,2008, from
Ahmad Elshennawy, Dr. Frank Voehl, and Ms. http://www.dcf.state.fl.us/transition/docs/accomplish
Erica Farmer. _access.pdf

7. Disclaimer [8] Florida Department of Children and Families,


ACCESS Florida Honored as Innovations in
This paper was written by several authors and American Government Award Winner Retrieved
reflects their personal views and is not 03/05,2008, from
necessarily endorsed by the United States http://www.dcf.state.fl.us/ashaward.shtml
Air Force.
[9] Li-Hsing Ho and Chen-Chia Chuang, "A Study of
Implementing Six-Sigma Quality Management
System in Government Agencies for Raising Service


Quality," Journal of American Academy of Business,


Cambridge, Journal of American Academy of
Business, Cambridge,09,2006. pp. 167-173.

[10] A. Parasuraman, V. A. Zeithaml and L. L. Berry,


"A Conceptual Model of Service Quality and Its
Implications for Future Research," J. Market., 1985.
pp. 41-50.

[11] Michael W. Wynne,


Letter to Airmen: Air Force Smart Operations 21
Retrieved 03/05,2008, from
https://acc.dau.mil/CommunityBrowser.aspx?id=140
491

[12] Joe B. Wiles,AFSO 21: Fairchild initiative


reduces wait for critical parts
Retrieved 03/05,2008, from
http://www.af.mil/news/story.asp?id=123039791


Resource Utilization: A Study on How to Implement a


Communication System

Kendra Lee, La Shaveria Keeton, Carsolina Walton, Tiki L. Suarez-Brown


Florida A&M University
tiki.suarez@famu.edu

Abstract communication within the school. The


following three key operations were created to
The advancement of technology has brought fully address the following needs:
about many changes, yet organizations continue
to seek increased solutions in an effort to close 1. Create an efficient set of procedures to
communication gaps. This research determined distribute SBI, FAMU and business news
various business needs for communication and information via WSBI Television -
between administration, faculty, students and SBI’s VICs (plasma screens) to all faculty,
staff found within the School of Business and staff, administration and students,
Industry (SBI) at Florida A&M University. 2. Implement a process to electronically
Furthermore, this research investigated efficient catalogue and archive previous specific
systems to employ underutilized resources for events such as TV tapings and Forum,
inclusiveness purposes. Possible 3. Create a process to contact and schedule SBI
recommendations are explored for future Alumni / SBIForce to participate by creating
implementation. electronic content for WSBI Television.

This project was viewed as a way to


1. Introduction
eliminate and/or prevent the gaps and
breakdowns presently within our internal
Complete utilization of technology to communication efforts and viewed as an
administer up-to-date announcements and opportunity to improve communication
pertinent information within SBI has been processes within SBI. The Dean of the School
problematic. Therefore developing processes to of Business and Industry, Dr. Lydia McKinley-
enhance communication vehicles between Floyd provided final approval and agreed upon a
students, faculty, staff and alumni within SBI project budget of $25,000. A project scope was
was identified as the most critical project to created based on this approval.
implement. Currently SBI uses communication
portals called Video Information Centers (VICs
also known as plasma screens). However, there 2. Overview of SDLC Process
is no formal process in place for students and
faculty to submit information to be displayed on 2.1. Systems Development Life Cycle (SDLC)
these screens around the school. Additional
communication procedures are outdated and Systems Development Life Cycle (SDLC) [1]
inefficient. There is a need for effectively store, is a traditional methodology that is used by
and distribute meaningful information to all Systems Analysts to develop an information
individuals in a timely manner. system through investigation, analysis, design,
implementation and maintenance. The process
The Systems Development Life Cycle of SDLC is composed of the following six
(SDLC) was the methodology chosen to phases: Project Identification and Selection,
effectively develop a system to increase Project Initiation and Planning, Analysis,


Design, Implementation, and Maintenance (see and new system desires were determined via
figure 1). traditional methods using interviews with clients
Dean McKinley-Floyd and Mr. Dupree, director
of SBI media and news, during the Analysis
phase. The design phase incorporated
information from the interviews and converted it
into logical and then physical systems
specifications.

Due to time constraints of the semester, the


implementation phase was continued by our
client Mr. Dupree, and his student volunteers.
The implementation phase included installing
approved applications into production
environments. As a result of the project the
Dean purchased a Desktop Computer, an
amplifier, and DVD’s. Currently Mr. Dupree
uses WSBI as an additional communication tool
and creates shows to be aired 3 times a week. An
electronic database was also created to catalog
past and current videos. The objective of
formulating a long term consistent
communication tool is still being formulated.

The final phase of SDLC is maintenance. The


maintenance phase involves making changes to
hardware, software, and documentation to
support its operational effectiveness. These
systematic repairs and improvements
are made at the behest of the user.

3. Communication Strategy

SBI communicates with a number of entities


to include the students, faculty, staff, the
University (collectively to include other schools,
Figure 1. SDLC Methodology Phases for majors, the President and his staff), and SBI
Project alumni known as the SBI Force. The current
problems stem from the lack of communication
The project identification and selection phase between SBI and its various entities as well as
explored the following project points: overall between SBI and SBI Force.
feasibility, time length for execution, class
willingness, availability of resources and major Currently SBI has installed 10 plasma screens
over minor needs of the School of Business and throughout the school to assist with
Industry. This second phase of SDLC developed disseminating information. However, additional
four major deliverables: the statement of project plasma screens are needed and updated video
scope, the baseline project plan, statement of system equipment is required to run more
work and work breakdown structure. System enhanced programs. At the moment, the
requirements such as current system procedures majority of the cost associated with the system


includes updating the equipment and processes „The DVD creation „Companies can
at the School of Business and Industry, which will require purchase receive a copy of the
includes purchasing equipment necessary for the and inventory of events in which they
improvement of information streamed Recordable DVDs. participated in DVD
throughout the School of Business and Industry. format, which are
The additional cost to hire personnel with more user friendly.
expertise to complete the project must also be „Setup and „The data in digital
considered. maintenance costs format can be
for the broadcasting broadcast to inform
Both tangible and intangible costs and benefits of the digitized others of the current
have been identified with this project. Tangible events. May need events within and
costs for the new system consist of conversion cooperation from the around the school of
equipment, a sound board, personal computers school of journalism. business.
for the VIC staff to use, an entirely new „In order to use the „The video and
background set for the TV taping room, and videos and photos to photos can be used
most importantly additional servers that will be market the program, as a marketing tool
solely for the installation of this new information marketing costs to recruit the best
system. Labor needs associated with the staffing would be created. and brightest
for the technical and production side will consist (Flyers, DVDs, students and
of approximately 20-25 people per Mr. Dupree’s magazines, and corporate
request. In the event that the staff selected has articles) professionals.
experience and understanding of both functions
then labor needs will be reduced to about 15-20
students (see Table 1). Intangible costs are a
factor as well. There is a high cost associated
with the improvement and actualization of this
system, not including the cost of information not
reaching students has desired. Another
unexpected cost includes equipment repair
and/or updates. One-time costs that are incurred
at the beginning of the project include the new
hardware and software utilized to house the
system and system training for the students (as
needed).

Table 1. Cost/Benefit Analysis

Costs Benefits
„Initial transfer from „Digitized
VHS to DVD is audio/video format
timely and the cost can be easily
of the hardware will cataloged and
need be incurred. recovered for future
reference.
„Additional people „Archived TV
will be required to tapings, Forums, and
maintain the system; Close-ups can be
therefore the cost of utilized as a training
resources will tool for students and
increase. professors alike.

Short-term goals were identified for all


objectives of the project.

First Objective
• Short-term goals for the first objective
include creating and streamlining an
efficient set of procedures to gather news
and information variety of means to include
SBI administration, staff, students and
faculty, the campus community, as well as
the local community. This information will
be displayed via WSBI, plasma screens, etc.
Individuals requesting information to be
displayed must complete an Information
Request Form from Mr. Dupree or receive
the form via email wsbi@gmail.com or
wbsi@famu.edu upon request. All fields on
the Information Request Form should be
completely filled out and sent/delivered to
Professor Dupree and his aides. Once the
information has been received and reviewed,
it must be organized and formatted such that
it can be stored and then sent out using
various communication methods.

• Final stage of this process will release


Figure 1. Level-0 DFD for current system
information via the various communication
To increase efficiency, initial hiring of mediums including WSBI Newscast, the
additional personnel is included in the start up plasma screens, etc. The newly installed
costs. Any monies used to acquire software, server will host the website and provide an
training, and further developments are all efficient and powerful base for the
associated project-related costs. Once the dissemination of information. The financial
system is up and running, operating costs will be
contribution from SBI will also allow
incurred in order to maintain the hardware,
software, and facilities. Professor Dupree to purchase the equipment
needed to convert the archived videos into a
Logical system specifications for the current and digital format.
new systems were created using data flow
diagrams to model function and purpose. These Second Objective
diagrams along with structured English (logic • Implement a process to electronically
modeling) were created in the Analysis phase of catalogue and archive previous specific
the project. Figure 1. displays a Level-0 Data events such as TV tapings and Forum,
Flow Diagram (DFD) of the current system.
• Create a database of Forums, TV Tapings,
3.1. Short-term Goals and other recorded events. The database of

all recorded events will be maintained and is an excellent mechanism to send and
placed on the server for users to access. The receive correspondence, received digital
database will include information such as clips from Alumni ultimately receiving
dates of the recorded events, the name of the information such as scheduling visits, etc. in
company/organization, the presenter, etc. a more efficient manner.

• Acquire immediate technology including all ƒ Blackboard – Utilize Blackboard,


of the items in the cost summary such as the http://famu.blackboard.com, to setup generic
Digital Photo Cameras, Mackie Soundboard, user names and passwords for all SBI Force
Digital Video Camera, Apple Computer to be able to login and view information as
(Laptop), VHS/DVD Converter, well as easily upload and send information
Distribution Amplifiers, DPS Velocity without additional costs. Blackboard is
Servers (2 additional), and Recordable ideal since the infrastructure is currently in
DVDs (50 pack). place and easily manipulated to be
incorporated in the project. Other added
• Convert VHS to a digital format therefore benefits of Blackboard include, mass email
all non-digital recorded events will be capabilities, interactive classrooms,
digitized and stored on the server. The communication boards, etc
VHS/DVD Converter, listed in the cost
summary, will assist in this process. 3.2. Long Term Recommendations

Third Objective The first objective’s long-term goals desire to


The following list displays various ways to completely automate retrieval and distribution of
contact SBI Alumni / SBI Force: news and pertinent SBI information.
Information Request Forms will now be
ƒ Phone calls – Phone calls will be made to accessible via the WSBI link located on both the
request demographic information from the SBI and the SBI Exchange websites. The
SBI Force. Although this method is more completed request form will be electronically
effective for communicating, extra man forwarded to Mr. Dupree and his staff;
power would be needed for this alternative eliminating the time needed to physically return
to work. the form to the WSBI office. Information will
be updated routinely via the WSBI website, the
ƒ Bulk Mail – A mass bulk mailing list will be newscasts, the plasma screens, etc., in addition
created to physically mail information to the to being stored electronically for efficient recall
SBI Force. The SBI Force will be able to of information.
submit digital video clips and films via
postal mail. This alternative will be costly The long term goals associated with objective
in terms of postage and time extensive to two will enable SBI to operate more efficiently
process bulk mailing. Although receiving by improving internal communications and
digital clips by mail eliminates preservation improving organization through the VICs.
efforts, conversion to the proper format to Organizing the videos on the server as well as
upload the clip to the server is time restricting access to the database will foster a
consuming. secure environment for users to retrieve the
information. This goal will be accomplished
ƒ Email – Establish a mass email list to through the creation of the WSBI website which
receive electronic content, disseminate will be (sourced) channeled through a link that
information, and provide updates about will be added to FAMU’s website to allow users
additions to the WSBI website will be the to access digitally recorded events to display
most effective and efficient means of SBI’s talents in its Forums, TV tapings, and
communicating with the SBI Force. Email other events.

recommendation, the short term goals include


The long-term goals associated with the third utilizing data from the current alumni website
objective include developing a mobile crew and and SBI exchange to obtain information from
establishing the SBI Exchange website. The and about Alumni; utilizing Blackboard, email,
mobile crew will interview the SBI Force in and calls to contact Alumni; and utilizing
action whether locally throughout the semester existing contact information to ultimately create
or at their respective hometowns during the one central database location. Moreover, the
holidays. The SBI Exchange website will long-term goals also include utilizing the server
manage information whether uploaded, so that all entities can obtain access to the
downloaded, or via an instant communication database for Alumni information.
portal which will bridge the gap between SBIans
and the SBI Force. Issues and challenges were identified
concerning the second and third objectives of the
4. Observations, Issues and Challenges short-term plan. The implementation process for
electronically cataloguing events would need to
Based on what was observed the project include guidelines that refer to the timeframe in
needs identified were based on what was which information would be maintained on the
perceived to be most beneficial to WSBI/FAMU’s website. The ordering or setup
administration, faculty, staff, students, and all would also need to be outlined to ensure
other stakeholders involved within SBI. First, information is stored in a way that is user
SBI must create an efficient set of procedures to friendly and organized such that users
distribute SBI, FAMU, and business news and understand the content available that can be
information via WSBI Television - SBI’s VICs resourced. Another issue or challenge of
(plasma screen) to all faculty, staff, concern dealt not only with contacting SBI
administration and students. The second Alumni and SBI Force members, but ensuring a
objective involved implementing a process to process is in place to obtain and maintain current
electronically catalogue and archive previous information of individuals who would like to be
specific events such as TV tapings and Forum. involved in assisting SBI in its communication
The final objective involved creating a process enhancing endeavors (i.e. contributions for
to contact and schedule SBI Alumni/SBI Force WSBI content).
to participate by encouraging these individuals
to mail in electronic content for WSBI This action research of SBI’s technological
television. Short term and long term goals were frontier provides insight about the relationship
created to execute the aforementioned between organizational growth and the resources
objectives. The presented short term plan created it must evaluate. This type of reflective process
an electronic information form to allow introduced long and short term goals that
administration, faculty, staff, and students to produced a preliminary blueprint that can be
submit data they desired to be announced; and applied to SBI and similar institutions. The
recommended reinstating a vital Professional implementation of SDLC provided a framework
Development (PD) company. The presented that addressed the need for information sharing
long term goals include allowing electronic through the use of the SBI VIC. The research
submission of data via the SBI website. To provided illustrated the completion of the first
address the second recommendation, the short four phases of SDLC which allowed the existing
term goals included creating a database of system to be revamped and streamlined for
forums, TV tapings, etc. and digitizing old VHS effective restructuring purposes. These findings
tapes – which initially housed this information – provide additional support about methodologies
to DVDs. Additional long term goals include and techniques that can be applied to
completing an organized process on a server to organizations, similar to SBI, that have
allow restricted and ultimately protected access resources that are not being maximized
to the database. To address the third efficiently.

Future work on communication gaps and


5. Conclusions and Future Work efficient resource utilization can be used to
identify colleges and universities with similar
The major objectives of the project included budgets that have superior/high impact
creating an information management system that technological systems that SBI can model.
archives past, current, and future communication Extended studies may also be used to research
mediums and fully utilizes new technology. A small firms and businesses operated by
management information system was created to entrepreneurs and the ways in which they use
address the three objectives by incorporating their systems to effectively disperse information
links leading to additional networking tools such to a limited amount of employees or a large
as the SBI Force, WSBI, and the alumni website number of clients.
and reinstating the student run company WSBI
Inc. to better organize, maintain, and References
disseminate information to SBI members and the
SBI Force. [1] J. A. Hoffer, J. F. George, and J. S. Valacich,
Modern Systems Analysis & Design 3rd Edition,
Prentice Hall 2002.


The Revolution of Six-Sigma: An Analysis of its Theory and Application


Dominique Drake and J. S. Sutterfield
Florida A&M University

ABSTRACT
Six-sigma is a discipline that has revolutionized many corporations. It has literally transformed them from a state of loss to one of profitability. It can be used to improve any process, and is particularly useful for improving any production system, whether one used for tangible products or services. Although the Japanese are ordinarily credited with having originated and advanced the quality movement, the roots of the quality movement, and consequently the roots of the six-sigma discipline, actually originated in the United States. If the history of six-sigma is not well understood, neither is the rather subtle theory behind it. In this paper we develop the historical roots of the quality revolution, show how it developed into six-sigma, develop the theory behind six-sigma, and analyze the uses of some six-sigma tools used in an effective, coherent six-sigma program.

INTRODUCTION
Data is good but good data is better. However, in order to get this good data it is important to have measures in place, such as quality control systems, to ensure information accuracy. Quality has evolved over the past two centuries since it first became an important business measure of comparison. The ways in which it has been used to define and assess quality have also evolved as new business practices and degrees of acceptance are enforced. One of the first definitions of quality was conformance to valid customer requirements, or the Goalpost View (Gitlow & Levine, 2005). As long as the company's process met the customer's requirements, then both the customer and the company were satisfied. Frederick Taylor in his theories of scientific management subscribed to this viewpoint. He believed segmenting jobs into specific work tasks was the best way to control outputs because then efficiency would increase (Basu, 2001). Employees were told what to do and how to do it with little or no input. They also rarely worked in teams or cross-functional efforts because each person had his or her own tasks to complete. Moreover, post-production inspection was the primary means of quality control, which showed the lack of concern for waste and error at intermediary steps in the process (Evans and Lindsay, 2005). Under this traditional approach, companies did little to understand customer requirements, neither external nor internal, and rarely focused efforts on finding a way to improve quality if it did not come from a technological breakthrough.

As companies began to realize this approach did not take into consideration the cost of waste and variation, many began looking for new ways to measure and define quality. This led to a more modern definition of quality being accepted as the predictable degree of uniformity and dependability (Gitlow & Levine). This Continuous Improvement View supported the notion that every process contains an element that can be improved. Inherently, continuous quality improvement focused on fine-tuning parts of the whole through incremental changes. When a problem was discovered, it was addressed until the next problem was discovered, and so on. Industrial scientists such as William Edwards Deming, Joseph Juran, Walter Shewhart, Harold Dodge and others were at the forefront of this era. They began by applying statistical methods of control to quality. These methods showed the shift from inspection of the final product and putting a band-aid on the problem to actually implementing quality control into the manufacturing process. This also illustrated the increasing importance of management decision-making in the quality control efforts instead of simply finding a quick fix to the problem so it fell within the specification limits of product or service satisfaction and acceptance.

Deming and Juran believed in a new concept called Total Quality Management. Research by


Evans and Lindsay (2005) indicates this holistic viewpoint was based on three fundamental principles:

1. Focus on customers and stakeholders
2. Participation and teamwork by everyone in the organization
3. A process focus supported by continuous improvement and learning

Through Total Quality Management (TQM), workers were empowered to provide input throughout the entire process to ensure and instill confidence that products met customer specifications. In meeting the customer requirements, the term quality assurance became widely used as any planned and systematic activity directed toward providing consumers with products of appropriate quality along with the confidence that products meet consumers' requirements (Evans and Lindsay, 2004). This extension of the Continuous Improvement viewpoint coined the terms Big Q and Little Q, which referred to the quality of management and the management of quality, respectively (Evans and Lindsay, 2004). Driven by upper management, TQM focused not just on identifying and eliminating problems that caused defects but on educating the workforce to improve overall quality. Upper management realized it needed to focus more on identifying and eliminating problems that were causing the defects throughout the process rather than just waiting until the end for post-production inspections. In order to do this, continuous education programs were implemented at all levels. As time passed, these total quality management efforts soon began to fall short of the expectations of businesses in the United States. Companies vowed they could promise a certain level of quality by getting it right the first time, but in the end the processes were still not meeting all needs and requirements of the consumer. TQM was joined by many other acronyms like JIT, MRP and TPM that all guaranteed an unreachable quality standard (Basu, 2001). Because these methods focused so greatly on the parts of the whole rather than the whole, they fell short in implementing rapid changes. This led to the introduction of certain quality control programs, one of the most recent being Six-sigma.

Six-sigma is a discipline based on the Greek letter sigma, which is the conventional symbol for standard deviation. Six-sigma is a very structured process that focuses on quantitative data to drive decisions and reduce defects (Benedetto, 2003). Many companies have tried to implement this program without truly understanding the theory and concept behind it. Because of such misguided efforts, Six-sigma has received a somewhat jaded reputation, but it is not the discipline that is flawed but rather the application of it. As Evans and Lindsay note, a cookbook approach in which management reads the latest self-help book advocated by business consultants and blindly follows the author's recommendations will destine Six-sigma or any other management practice to failure, because experience only describes what happened and is no help to the emulating management team (Evans and Lindsay, 2004). Understanding the subtle theory, on the other hand, will enable one to better understand the cause and effect relationships that are applied in Six-sigma and other management practices. In this paper, we will further develop the historical roots of the quality revolution already illustrated, show how the quality revolution developed into Six-sigma, delve further into the underlying theory of Six-sigma and then analyze the uses of some Six-sigma tools used in an effective, coherent Six-sigma program.

LITERATURE REVIEW
Multiple scientists have contributed to the evolution of quality management. The main three who added to the development of six-sigma and will be discussed here are William Edwards Deming, Joseph Juran and Philip Crosby. Deming had a Ph.D. in physics and was trained as a statistician at Yale University. He and Juran both worked for Western Electric during the 1920s and 1930s (Evans and Lindsay, 2004). From there he started working with the U.S. Census Bureau, where he perfected his skills in statistical quality control methods. He opened his own practice in 1941 and thus began teaching SQC methods to engineers and inspectors in multiple industries. Deming, better


known as the Father of Quality, was widely ignored in the United States for his works. As a result, he went to Japan right after World War II to begin teaching them SQC. His work there is what truly bolstered the "Made in Japan" label to its well-respected level of quality today (Dr. W. Edwards Deming, 2007). Dr. Deming's most famous work is the 14 points in his System of Profound Knowledge. There are four interrelated parts in the program: Appreciation for a System, Understanding of Variation, Theory of Knowledge and Psychology (Evans and Lindsay, 2004). In the first point, Deming believed it was poor management to purchase materials at the lowest price or minimize the cost of manufacturing if it were at the expense of the product. The second and third points emphasized the importance of management understanding the project first and foremost before taking any steps to reduce variation. The methods to reduce variation include technology, process design and training. Under the fourth point, Deming stressed understanding the dynamics of interpersonal relationships, because everyone learns in different ways and at different speeds, and thus the system should be managed accordingly. Other points in the Deming school of thought were on-the-job training, creating trust, building quality into a product or service from the beginning, and inclusion of everyone in the company to accomplish project improvement. The common methodology used by Deming for improving processes is PDCA, which stands for Plan – Do – Check – Act.

Joseph Juran is well known for his book, the Quality Control Handbook, published in 1951. He too went to Japan in the 1950s, after working at Western Electric, to teach the principles of quality management as they worked to improve their economy. His school of thought was different from Deming's in that he believed that, to improve quality, companies should use systems with which managers are already familiar. When consulting with firms, he would design programs that fit the company's existing strategic efforts to ensure minimal rejection by staff. The main points he taught were known as the Quality Trilogy: Quality Planning, Quality Control and Quality Improvement. Quality planning was the process of preparing companies to meet quality goals by identifying the customers and their needs. Quality control was the process of meeting quality goals during operations with minimal inspection. Quality improvement was the process of breaking through to unprecedented levels of performance to produce the product (Evans and Lindsay, 2004). His philosophy required a commitment from top management to implement the quality control techniques and emphasized training initiatives for all.

The third philosopher who contributed greatly to six-sigma was Philip Crosby. Crosby worked as the Corporate VP for Quality at International Telephone and Telegraph for 14 years. His most well-known work is the book Quality is Free, in which he emphasized that management should "do it right the first time" and popularized the idea of the "cost of poor quality." Crosby emphasized that management must first set requirements, and those are the standard on which the degree of quality will be judged. He also believed doing things right the first time to prevent defects would always be cheaper than fixing problems, which developed into the cost of poor quality idea. Similar to six-sigma, Crosby focused on zero defects and created four Absolutes of Quality Management to ensure that goal was accomplished (Skymark Corporation, 2007):

1. Quality is defined as conformance to requirements, not as "goodness" or "elegance"
2. The system for causing quality is prevention, not appraisal
3. The measurement standard must be zero defects, not "that's close enough"
4. The measurement of quality is the Price of Nonconformance, not indices.

These philosophies have been categorized as more evolutionary because the changes resulting from their implementation were more gradual. As time continued, managers became increasingly disappointed with TQM and searched for philosophies that would create more drastic changes in the existing processes and systems. These revolutionary processes recognized that processes are key to quality, that most processes are poorly designed and


implemented, that overall success is sensitive to individual sub-process success rates, and that rapid, dramatic change requires looking at the entire process. One of the first of these revolutionary philosophies was Reengineering. Reengineering is defined as the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical, contemporary measures of performance (Benedetto, 2003). It places heavy reliance on statistics and quantitative analysis.

Six-sigma emerged during this revolutionary time period, after reengineering. The two approaches are very similar but also have many differences. Whereas reengineering focuses on the statistical aspect, Six-sigma places more emphasis on data to drive decisions. When comparing six-sigma to Total Quality Management, it is very apparent that great changes have come about in the evolution of quality. Six-sigma has a more structured and rigorous training development program for those professionals using it. Six-sigma is a program owned by the business leaders, also known as champions, while TQM programs are based on worker empowerment. Six-sigma is cross-functional and requires a verifiable return on investment, in contrast to TQM's function- or process-based methodology that has little financial accountability (Evans and Lindsay, 2004).

Six-sigma started as a problem solving approach to reduce variation in a product and manufacturing environment. It has since grown to be used in a multitude of other industries and business areas such as service, healthcare and research and development. It represents a structured thought process that begins with first thoroughly understanding the requirements before proceeding or taking any action. Those requirements define the deliverables to be produced and the tasks to produce those deliverables, which in turn illustrate the tools to be used to complete the tasks and produce the deliverables (Hambleton, 2007). Six-sigma was initiated in the 1980s by Motorola. The company was looking to focus its quality efforts on reducing manufacturing defects by tenfold within a five-year period. This goal was then revised to a tenfold improvement every two years, 100-fold every four years and 3.4 defects per million in five years (Gitlow & Levine, 2005). A defect in six-sigma terms is any factor that interferes with profitability, cash flow or meeting the customers' needs and expectations. This led to the next important concept of six-sigma: critical to quality. Critical-to-Quality (CTQ) items are points of importance to the customer, be they internal or external (Adomitis and Samuels, 2003). Six-sigma projects aim to improve CTQs because research has proven that a high number of CTQs correlates with lost customers and reduced profitability. Ultimately, the company will have reduced the number of defects and focused on the CTQs such that it will cost more to correct the defects any further than to prevent their occurrence in the first place.

This thought was advanced in the 1980s by Dr. Genichi Taguchi with his Quality Loss function. Prior to that time, it was thought that the customer would accept anything within the tolerance range and that product costs do not depend on the actual amount of variation as long as it was still within the tolerance range (Evans and Lindsay, 2004). Taguchi introduced a model that discredited that school of thought by showing the approximate financial loss for any deviation from a specified target value – an idea that coincides with six-sigma's inclusion of financial data in the assessment of a process or function. His work contradicted the original concept and showed that variation is actually a measure of poor quality, such that the smaller the variation is about the nominal level, the better the quality is. As the graph below depicts, this loss function also applied an economic value to variation and defects from the standard. As the process or project deviates from its target value, costs become increasingly higher and the customer becomes increasingly dissatisfied. Quality loss can be minimized by reducing the number of deviations in performance or by increasing the product's reliability. In six-sigma, the goal is to minimize the number of defects and keep the process as close to the nominal value as possible to avoid high degrees of variability. The concepts of Taguchi are part of


Six-sigma's key philosophy of reducing defects and variation from the norm.

As previously mentioned, six-sigma is a product of the statistical methods era of controlling quality. It stressed the common measure of quality, the defect. In measuring output quality, the key metrics used are defects per million opportunities (DPMO), Cost of Poor Quality (COPQ) and sigma level (Druley and Rudisill, 2004). Defects per million opportunities takes into consideration the number of defects produced out of the entire batch and the number of opportunities for error. It is important to assess quality performance using defects per million opportunities because two different processes may have significantly different numbers of opportunities for error, thus making comparison unbalanced. Furthermore, large samples provide for a higher probability of detection of changes in process characteristics than a smaller sample. When Motorola created this ultimate goal of six-sigma, it was set to be equivalent to a defect level of 3.4 defects per million opportunities. According to Evans and Lindsay, this figure was chosen because field failure data suggested that Motorola's processes drifted by this amount on average (Evans and Lindsay, 2004). The cost of poor quality metric was introduced by Crosby. It includes the costs of lost opportunities, lost resources and all the rework, labor and materials expended up to the point of rejection. This metric is used to justify the beginning of a six-sigma project. When a company is choosing which project to undertake, the project with the highest COPQ will most likely be selected. Lastly, the sigma level indicates the degree of quality for the project. Sigma is a statistical term that measures how much a process varies from the nominal or perfect target value, based on the number of DPMO. As mentioned, Motorola's goal was to have 3.4 defects per million opportunities, which would happen at the six-sigma level. The DPMOs for various sigma levels are compared in Table 1 below (iSixSigma, 2002).

Sigma Level    Defects per Million Opportunities
One Sigma      690,000
Two Sigma      308,000
Three Sigma    66,800
Four Sigma     6,210
Five Sigma     230
Six-sigma      3.4

Table 1: Defects per Million Opportunities at Various Sigma Levels

One phenomenon that has been widely discussed in relation to six-sigma is the 1.5 sigma shift. Because six-sigma focuses on variation, a chart showing deviation from the mean should be used to further understand this concept. A Z-table shows the standard deviation from the mean. A process' normal variation, defined as process width, is +/- 3-sigma about the mean. In looking at Table 2, it is clear that a quality level of 3-sigma actually corresponds with 2,700 defects per million opportunities. This number may sound small, but it is similar to 22,000 checks deducted from the wrong bank account each hour or 500 incorrect surgical operations each week. This level of operating efficiency was deemed unacceptable by Motorola, hence the search for a design that would ensure greater quality levels. A six-sigma design would have no more than 3.4 DPMO even if shifted +/-1.5 sigma from the mean. In looking once again at Table 2, it is evident that a quality level of six-sigma actually corresponds with two defects per billion opportunities, while the more commonly known value of 3.4 defects per million opportunities is found at the six-sigma level with a +/-1.5-sigma shift off the center value, or +/-4.5-sigma. This shift is a result of what is called the Long Term Dynamic Mean Variation (Swinney, 2007). In simpler terms, no process is maintained perfectly at center at all times, thus allowing drifting over time. The allowance of a shift in the distribution is important because it takes into consideration both short and long-term factors that could affect a project, such as unexpected errors or movement. Also worth noting is that the goal of 3.4 DPMO can be attained at 5-sigma with a +/-0.5-sigma shift or at 5.5-sigma with a +/-1-sigma shift. However, as Taguchi pointed out, it is important to minimize the noise factors that could shift the process dramatically from the


nominal value. Table 2 below shows the DPMO quality levels achieved for various combinations of off-centering and multiples of sigma (Evans and Lindsay, 2004).

METHODOLOGY
Within six-sigma there are different project-based methods that can be followed. Among them are Lean Sigma, Design for Six-sigma, Six-sigma for Marketing and, the most common, DMAIC – Define, Measure, Analyze, Improve and Control (Hambleton, 2007). Lean Sigma is a more refined version of the standard Six-sigma program that streamlines processes to their essential value-adding activities. In doing this, the company aims to do things right the first time with minimum to no waste. Wait time, transportation, excess inventory and overproduction are among the wasteful activities that can be eliminated. One test used to identify the value-add activities that will reduce waste is the 3C Litmus Test: Change, Customer, Correct (Hambleton, 2007). If the activity changes the product or service, if the customer cares about the outcome of the activity and/or if the activity is executed and produced correctly the first time, then it is a value-add activity and should be kept. Lean Sigma was popularized in Japan after World War II by Toyota. The Toyota Production System challenged Ford's standard of a large-lot production system by systematically and continuously reducing waste through small-lot production only. Design for Six-sigma (DFSS) expands Six-sigma by taking a preventative approach, designing quality into the product or process. DFSS focuses on growth through product development, obtaining new customers and expanding sales into the current customer base (Hambleton, 2007). It focuses on not just customer satisfaction but customer success, because from the inception of the product or service the company will focus on optimizing CTQs. In order to do this, companies use multivariable optimization models, design of experiments, probabilistic simulation techniques, failure mode and effects analysis and other statistical analysis techniques. This method is usually begun only after the Six-sigma program has been implemented effectively. The Six-sigma for Marketing methodology simply means applying six-sigma ideas and principles to improving other functions of the business such as marketing, finance and advertising. The most common methodology is DMAIC. One goal of using the DMAIC methodology is to identify the root cause of the problem and select the optimal level of the CTQs to best drive the desired output. Another goal is to improve PFQT: Productivity, Financial, Quality and Time spent (Hambleton, 2007). It is used as an iterative method to combat variation. With the built-in ability to counter variation, DMAIC intrinsically allows for flexibility, because as knowledge is gained through implementation, assumptions about the root cause may be disproved, requiring the team to modify or revisit alternative possibilities. Kaoru Ishikawa promoted seven basic tools that can be used in assessing quality control. The list has since been expanded upon to include many other tools, but the original seven were Cause-and-Effect diagrams, Check sheets, Control charts, Histograms, Pareto charts, Scatter diagrams and Stratification (Hambleton, 2007). Some of these tools will be further explained below.

In the first step, Define, the company clearly defines the problem. It establishes the trust and dedication of all stakeholders and all persons included in the project team. The project team consists of the Champion, Master Black Belt, Black Belt, Green Belt and Team Members. Key activities occurring in this phase include selecting the team members and their roles, developing the problem statement, goals and benefits, and developing the milestones and high-level process map (iSixSigma, 2007). One tool used in this step is the process flowchart. Four main types of flow charts are considered: top-down, detailed, work flow diagram and deployment (Kelly and Sutterfield, 2006). This tool shows how various steps in a process work together to achieve the ultimate goal. Because it is a pictorial view, a flow chart can be applied to fit practically any need. The process map allows the user to gain an understanding of the process and where potential waste or bottlenecks could occur. It also could be used to design the future or desired process. An example process map for a call answering operation is shown below in Figure 2.
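As a small illustration of the 3C Litmus Test described in the Methodology discussion above, the following Python sketch classifies an activity by the three questions – does it Change the product or service, does the Customer care about its outcome, and is it executed Correctly the first time. The paper's "and/or" wording leaves the exact combination rule open; this sketch takes the stricter reading that an activity must pass all three checks, and the function name and sample activities are hypothetical rather than taken from Hambleton (2007).

# Minimal sketch of the 3C Litmus Test (Change, Customer, Correct).
# Assumption: an activity must pass all three checks to be value-add.
def is_value_add(changes_product: bool, customer_cares: bool, correct_first_time: bool) -> bool:
    return all([changes_product, customer_cares, correct_first_time])

# Hypothetical activities, answered (Change, Customer, Correct):
activities = {
    "machine part to final dimension": (True, True, True),
    "move pallets between warehouses": (False, False, True),
    "rework a defective unit": (True, True, False),
}
for name, answers in activities.items():
    verdict = "value-add" if is_value_add(*answers) else "waste candidate"
    print(f"{name}: {verdict}")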


In the second step, Measure, the company quantifies the problem by understanding the current performance and collecting the data necessary to improve all CTQs. Key activities occurring in this phase include defining the defect, opportunity, unit and cost metrics, collecting the data, and determining the process capability. One tool used in this step is the SIPOC Diagram. SIPOC stands for Suppliers, Inputs, Processes, Outputs and Customers. This tool is applied to identify all related aspects of the project – who are the true customers, what are their requirements, who supplies inputs to the process – at a high level before the project even begins (iSixSigma, 2007). These diagrams are very similar to process maps, which are also a tool applied at this phase of the DMAIC cycle. A SIPOC diagram is shown below in Figure 3 (Simon, 2007).

In the third step, Analyze, the root cause of the project's problem is investigated to find out why the defects and variation are occurring. Through this detailed research, the project team can begin to find the areas for improvement. Key activities in this phase are identifying value and non-value added activities and determining the vital few, or critical-to-quality, elements. The care applied to this phase of the Six-sigma project is very important to the project's success, because a lack of understanding and thorough analysis is what causes most defects and variation. Evans and Lindsay (2005) described that it could also be caused by:

- Lack of control over the materials and equipment used
- Lack of training
- Poor instrument calibration
- Inadequate environmental characteristics
- Hasty design of parts and assemblies.

Multiple tools exist to guide the efforts of this process. They include, but are not limited to, scatter diagrams, Pareto charts, histograms, regression analyses and fishbone diagrams. A scatter diagram shows the relationship between two variables. The correlation is indicated by the slope of the diagram, which could be linear or non-linear. If linear, the correlation could be direct or indirect and positive, negative or null. If non-linear, a regression analysis can be used afterwards to measure the strength of the relationship between the variables. The various types of scatter plots are shown in Figure 4.

The Pareto chart is used to separate the "vital few" from the "trivial many" when analyzing a project. It communicates the 80/20 rule, which states that 80 percent of an activity comes from 20 percent of the causes, and which thus provides rationale for focusing on those vital few activities (Hambleton, 2007). The problem frequencies are plotted in order from greatest to least and show the problems having the most cumulative effect on the system (Kelly and Sutterfield, 2006). As mentioned earlier, six-sigma focuses on eliminating as many noise factors as possible, and this Pareto analysis helps to do just that. So, the company does not expend resources on wasteful activities that do not add value, but instead cost, to the end project. Figure 5 shows a typical cumulative Pareto chart.

In the fourth step, Improve, the research of the problem's root cause is actually put into work by eliminating all the defects and reducing the degree of variation. Key activities in this phase are performing design of experiments, developing potential solutions, assessing failure modes of potential solutions, validating hypotheses, and correcting/re-evaluating potential solutions. Failure Mode and Effects Analysis is a tool used in this step of the DMAIC process to identify a failure, its mode and effect through analysis. The analysis prioritizes the failures based on severity, occurrence and detection. Through this analysis a company can create an action plan if a failure occurs. Results of an FMEA include a list of effects, causes, potential failure modes, potential critical characteristics, documentation of current controls, requirements for new controls and documentation of the history of improvements. Items listed low on the priority list do not necessitate an action plan to correct them unless they have a high probability of occurrence or are special cause circumstances (Hambleton, 2007).

In the fifth and final step, Control, the project improvements are monitored to ensure sustainability. Key activities include developing


standards and procedures, implementing statistical process control, determining process capability, verifying benefits, costs, and profit growth, and taking corrective action when necessary to bring the project back to its nominal value (iSixSigma, 2007). The most important tool used in this phase is the Control chart. It is a statistical method based on continuous monitoring of process variation. The chart is drawn using upper and lower control limits along with a center or average value line. As long as the points plot within the control limits, the process is assumed to be in control. The upper and lower control limits are usually placed three sigma away from the center line because, for a normal distribution, data points fall within 3-sigma limits 99.7 percent of the time. These control limits are different than the specification limits. Control limits help identify special cause variation and confirm stability, while specification limits describe conformance to customer expectations (Skymark Corporation, 2007). However, if there are many outliers, points gravitating toward one of the control limits, or there seems to be a trend in the points, the process may need to be adjusted to reduce the variation and/or restore the process to its center. The control chart can help to assess quantitative gains made through the improvements because each point will be compared to the target value. An example of a control chart for a process in control is shown in Figure 6 below.

APPLICATION OF METHODOLOGY

Multiple success stories have come from implementing the aforementioned tools in a six-sigma project. The most well-known success story is that of General Electric under the direction of Jack Welch. Welch began a six-sigma project to work on the rail car repairs and aircraft engine imports for Canadian customers. Through his guided efforts, the company reduced repair time, redesigned its leasing process, reduced border delays and defects, and improved overall customer satisfaction. Quantitatively, General Electric saved over $1 billion in its first year of six-sigma application and then $2 billion in the second year. The company's operating margin rose to 16.7 percent, while revenues and earnings increased by 11 and 13 percent respectively (Black, Hug and Revere, 2004).

Six-sigma has grown in application from just the manufacturing environment to banking, healthcare and automotive as well. In 2001 Bank of America's CEO Ken Lewis began focusing on increasing the customer base while improving company efficiency. The company handled nearly 200 customer interactions per second and thus set the ultimate goal of its six-sigma efforts to be customer delight. Bank of America established a customer satisfaction goal and created a measurement process to evaluate current performance in order to work toward improving the state. In the first year, the bank's defects across electronic channels fell 88 percent, errors in all customer delivery channels and segments dropped 24 percent, problems taking more than one day to resolve went down 56 percent, and new checking accounts had a 174 percent year-over-year net gain. Within four years of beginning the project, the bank's customer delight rose 25 percent. Through the application of six-sigma, Bank of America was able to focus on the voice of the customer in determining what was most important to them to make their experience a pleasant one (Bossert and Cox, 2004).

The Scottsdale Healthcare facility in Arizona began a six-sigma project to work on its overcrowded emergency department because it took 38 percent of the patient's total time within the department to find a bed and transfer the patient out of the waiting room. Before implementing quality efforts, multiple intermediary steps existed in the process, which inevitably slowed down the time from start to finish and reduced the potential yield. As a result of the DMAIC and Lean Sigma efforts, the facility identified that the root cause of the problem was not that of finding a bed, as originally thought, but rather the number of steps involved in the transfer process. The resulting solution produced incremental profits of $600,000 and reduced the cycle time for bed control by 10 percent. Moreover, the patient throughput in the emergency room increased by 0.1 patients/hour (Lazarus and Stamps, 2002). This project proved


one of six-sigma's key arguments: that inspection is unproductive, and that quality control should instead be implemented from the beginning of a product or service to reduce non-value-add activities.

It is important to note that not all applications of six-sigma have led to success. A common explanation for the failures is that companies and managers read the latest self-help book and blindly followed the author's recommendations. In doing this, they failed to fully grasp the theory behind the approach. Experience only describes programs like six-sigma, but understanding the theory will help to understand the cause-and-effect relationships, which can then be used for rational prediction and management decisions. Also, it is important to note that many of the failures are not due to a flaw in six-sigma's conceptual basis, but rather to failures in the mechanics of team operation. According to Evans and Lindsay, 60 percent of six-sigma failures are due to the following factors:

- Lack of application of meeting skills
- Improper use of agendas
- Failure to determine meeting roles and responsibilities
- Lack of setting and keeping ground rules
- Lack of appropriate facilitative behaviors
- Ambivalence of senior management
- Lack of broad participation
- Dependence on consensus
- Too broad/narrow initiatives.

To ensure the success of a six-sigma implementation, it is vitally important that the project champion and other top management officials are very intimately involved in the process. Also, implementing constant training for all employees on related topics will minimize the misunderstanding and maximize resources, both financial and human.

CONCLUSION

The objective of this paper was not to examine the tools of Six-sigma in great detail, but rather to explain their history and evolution, because it is more important to understand the origin and development of a discipline, and the reason(s) that it is better than its predecessors. As explained, Six-sigma is used to resolve problems resulting in deviation between what should be happening and what is actually happening. The methodology and tools of Six-sigma can handle any problem type, from conformance to efficiency to product/service design. However, in order for the program to effectively mitigate all project risks and be implemented successfully, there must be commitment from the top down. Through this paper the history of quality has been presented. As its definition and application have evolved from Frederick Taylor's scientific management theories to the present program of six-sigma, and even looking forward to more refined programs such as Fit Sigma, quality continues to be a concept of vital importance for all businesses and industries. In this paper, the underlying theory and concepts contributed by each quality philosopher and each revolutionary model were explained to show how they are similar but, more importantly, to show how they are different. For each company and each industry the same quality control methodology may not be appropriate, because each operates under different circumstances. The benefits that accrue from applying the different techniques of six-sigma to various production situations are invaluable to a company truly intent upon quality improvement, longevity in its market, and global competitiveness.

REFERENCES

1. Basu, R. (2001). Six-sigma to Fit Sigma. IIE Solutions, 33(7), 28-34.

2. Benedetto, A. R. (2003). Adapting manufacturing-based Six-sigma methodology to the service environment of a radiology film library. Journal of Healthcare Management, 48(4), 263.

3. Black, K., Revere, L., & Hug, A. (2004). Integrating six-sigma and CQI


for improving patient care. The TQM Magazine, 16(2), 105.

4. Bossert, J., & Cox, D. (2005). Driving organic growth at Bank of America. Quality Progress, 38(2), 23-28.

5. Druley, S. & Rudisill, F. (2004). Which six-sigma metric should I use? Quality Progress, 37(3), 104.

6. Evans, J. R., & Lindsay, W. M. (2004). An introduction to six-sigma. United States: Thompson Southwestern.

7. --- (2005). The management and control of quality. United States: Thompson Southwestern.

8. Gitlow, H. S. & Levine, D. M. (2005). Six-sigma for green belts and champions. United States: Pearson Education.

9. Hambleton, L. (2007). Treasure chest of six-sigma growth methods, tools and best practices. United States: Pearson Education.

10. iSixSigma (2007). Six sigma DMAIC roadmap. Retrieved January 15, 2008 from http://www.isixsigma.com/library/content/c020617a.asp.

11. iSixSigma (2002). Sigma level. Retrieved January 15, 2008 from http://www.isixsigma.com/dictionary/Sigma_Level-84.htm.

12. Lazarus, I. R. & Stamps, B. (2002). The promise of six-sigma. Managed Healthcare Executive, 12(1), 27-30.

13. Samuels, D. I. and Adomitis, F. L. (2003). Six-sigma can meet your revenue-cycle needs. Healthcare Financial Management, 57(11), 70.

14. Simon, K. (2007). SIPOC diagram. Retrieved January 15, 2008 from http://www.isixsigma.com/library/content/c010429a.asp.

15. Skymark Corporation (2007). Control charts. Retrieved January 12, 2008 from http://www.skymark.com/resources/tools/control_charts.htm.

16. Skymark Corporation (2007). Dr. W. Edwards Deming. Retrieved January 12, 2008 from http://www.skymark.com/resources/leaders/deming.asp.

17. Skymark Corporation (2007). Philip Crosby: The Fun Uncle of the Quality Revolution. Retrieved January 12, 2008 from http://www.skymark.com/resources/leaders/crosby.asp.

18. Sutterfield, J. S., and Kelly, C. S. J. (2006). The six-sigma quality evolution and the tools of six-sigma. 2006 IEMS Proceedings, Cocoa Beach, FL, pp. 370-378.

19. Swinney, Jack (2007). 1.5 sigma process shift explanation. Retrieved January 15, 2008 from http://www.isixsigma.com/library/content/c010701a.asp.


Figure 1: Nominal is Best Quality Loss Function
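The nominal-is-best loss sketched in Figure 1 is conventionally written as L(y) = k(y - T)^2, where T is the target value and k is a cost coefficient; this quadratic form and the coefficient used below are standard textbook choices rather than figures taken from this paper. A minimal Python sketch of the idea:

def quality_loss(y, target, k):
    # Nominal-is-best quadratic loss: cost grows with the squared deviation from target.
    return k * (y - target) ** 2

# Hypothetical process: target dimension 10.0 mm, with k chosen so a part at the
# specification limit (10.5 mm) carries a $4 loss, i.e. k = 4 / 0.5**2 = 16 $/mm^2.
target, k = 10.0, 16.0
for y in [10.0, 10.1, 10.3, 10.5]:
    print(f"y = {y:>4} mm  ->  loss = ${quality_loss(y, target, k):.2f}")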

Figure 2: Process Flow Chart for the call answering operation (system receives call; operator answers, puts caller on hold and transfers; secretary answers, or a message is taken and delivered; the call is or is not returned)

Table 2: Defectives per Million Opportunities for Various Process Off-Centering and Quality Levels
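The arithmetic behind Table 1 and Table 2 – defects per million opportunities for a given sigma level and amount of off-centering – can be reproduced from the standard normal distribution. The Python sketch below is an illustration of that calculation (not code from any of the cited sources); with a 1.5-sigma shift it returns roughly 66,800 DPMO at the 3-sigma level and 3.4 DPMO at the six-sigma level, and with no shift it returns about 2,700 DPMO at 3-sigma and about two defects per billion at six-sigma, matching the figures quoted in the text.

from math import erfc, sqrt

def tail_prob(z):
    # P(Z > z) for a standard normal variable.
    return 0.5 * erfc(z / sqrt(2.0))

def dpmo(sigma_level, shift=1.5):
    # DPMO for symmetric specification limits placed sigma_level standard deviations
    # from the target, with the process mean drifted off center by `shift` sigma.
    p_defect = tail_prob(sigma_level - shift) + tail_prob(sigma_level + shift)
    return p_defect * 1_000_000

for level in (1, 2, 3, 4, 5, 6):
    print(f"{level}-sigma with 1.5 shift: {dpmo(level):>12,.1f} DPMO   "
          f"centered: {dpmo(level, shift=0.0):>14,.4f} DPMO")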


Figure 3: SIPOC Diagram
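A SIPOC diagram such as the one referenced in Figure 3 is essentially a five-part listing of Suppliers, Inputs, Process steps, Outputs and Customers. The sketch below shows one way to hold such a listing as a plain data structure, using the call-answering operation of Figure 2 as a hypothetical example; the specific entries are illustrative and are not taken from Simon (2007).

# Hypothetical SIPOC entries for the call-answering process; structure only.
sipoc = {
    "Suppliers": ["telephone system vendor", "callers"],
    "Inputs":    ["incoming call", "caller request"],
    "Process":   ["receive call", "answer call", "hold/transfer", "deliver message"],
    "Outputs":   ["answered call", "message delivered", "returned call"],
    "Customers": ["caller", "secretary", "requested party"],
}
for element, items in sipoc.items():
    print(f"{element:<10}: {', '.join(items)}")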

Figure 4: Scatter plot diagrams (positive correlation, negative correlation, and no correlation)
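The correlation patterns illustrated in Figure 4 can be quantified with the Pearson correlation coefficient, and a least-squares line can be fitted to measure the strength and direction of a linear relationship. The Python sketch below (standard library, Python 3.10+) uses small made-up samples purely to illustrate the calculation; it is not drawn from the paper's data.

from statistics import correlation, linear_regression  # Python 3.10+

# Made-up paired observations illustrating positive and negative relationships.
x          = [1, 2, 3, 4, 5, 6, 7, 8]
y_positive = [2.1, 2.9, 4.2, 4.8, 6.1, 7.0, 7.9, 9.2]
y_negative = [9.0, 8.2, 7.1, 5.9, 5.2, 3.8, 3.1, 2.0]

for label, y in (("positive", y_positive), ("negative", y_negative)):
    r = correlation(x, y)
    slope, intercept = linear_regression(x, y)
    print(f"{label}: r = {r:+.3f}, fitted line y = {slope:.2f}x + {intercept:.2f}")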


Figure 5: Pareto chart (number of defects and cumulative percentage by defect category: incomplete, surface scars, cracks, misshapen, others)
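The cumulative Pareto analysis behind Figure 5 – sorting defect categories from most to least frequent and accumulating their share until the "vital few" that account for roughly 80 percent of the problems are identified – can be sketched in a few lines of Python. The defect counts below are invented for illustration and are not the data behind Figure 5.

# Hypothetical defect counts by category.
defects = {"incomplete": 48, "surface scars": 22, "cracks": 15, "misshapen": 9, "others": 6}

total = sum(defects.values())
cumulative = 0.0
print(f"{'category':<15}{'count':>7}{'cum %':>9}")
for category, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += 100.0 * count / total
    marker = "  <- vital few" if cumulative <= 80.0 else ""
    print(f"{category:<15}{count:>7}{cumulative:>8.1f}%{marker}")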

Figure 6: Control chart (measurement versus sample number, with upper control limit (UCL), process mean, and lower control limit (LCL))
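The control limits in Figure 6 follow the three-sigma rule described in the Control step above: a center line at the process mean with upper and lower control limits three standard deviations away. The Python sketch below computes those limits for a set of sample measurements and flags points outside them; the measurement values are invented, and the sample standard deviation is used as a simple stand-in for the process sigma.

from statistics import mean, stdev

# Hypothetical individual measurements from a monitored process.
measurements = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.4, 9.7, 10.0, 10.1, 9.9, 10.2]

center = mean(measurements)
sigma = stdev(measurements)     # sample standard deviation
ucl = center + 3 * sigma        # upper control limit
lcl = center - 3 * sigma        # lower control limit

print(f"center line = {center:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
out_of_control = [(i + 1, x) for i, x in enumerate(measurements) if not lcl <= x <= ucl]
print("out-of-control points:", out_of_control or "none")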


The Use of Taguchi Methods to Optimize Shape Factors for Maximum Water Jet Stability
J. S. Sutterfield, Sade L. Chaney and Tiffani R. Davis
Florida A&M University

ABSTRACT
It has been discovered that proper design of a nozzle can promote water jet stability for longer distances, and that a quantity called a shape factor can be used to describe the capacity of a nozzle to maintain water stream stability over longer distances. This is particularly important in the field of fire fighting because improved water jet stability increases the capacity of fire fighters to bring a fire under control. In this paper we demonstrate the versatility of Taguchi experimental methods by using them to analyze shape factors for several nozzle types to select that one nozzle design from several others which can best be used to produce nozzles with the greatest water stream stability for fire fighting.

INTRODUCTION
In bringing fires under control and eventually completely extinguishing them, it is important for fire fighters to have water jets that maintain their stability for the longest possible distance from the fire. By stability is meant the property of the water jet, emitted from the fire hose, to remain coherent for the greatest possible distance before it begins to disperse. There are numerous reasons for this, but among them are safety to fire fighters, the need to thoroughly soak the combustibles to starve them of oxygen, and to disperse the by-products of combustion that tend to facilitate continued combustion. Thus, it is important to employ a design for fire hose nozzles that facilitates water stream stability over the greatest possible distances. The problem of water jet stability, although not one of the more usual production problems to be studied, has nonetheless been examined fairly extensively by a number of investigators. One such study was done by Theobald (1980), in which he devised a property identified as a shape factor to describe the stability of turbulent flow from water jets. Theobald found a direct correlation between nozzle shape factor and the distance from a nozzle to which a stream of water remained coherent after leaving the nozzle. This coherence is what is referred to as the stability of the water stream, viz., the capacity of the water stream to remain collected or integrated over some distance. Theobald found that the larger the shape factor, the greater the distance over which a water stream would remain coherent or stable, and vice versa. This phenomenon was investigated for three basic types of nozzle designs: Straight-sided, a total of six designs; Convex, sides gradually curved inward toward the flow stream, two designs; and Concave, sides gradually curving outward away from the flow stream, two designs. Diagrams of the test nozzles are shown below in Figure 1 on the following page. These three basic nozzle design types were examined over a fairly wide range of nozzle exit velocities as to the nozzle capacity to control water stream coherence.

Although it is not the purpose of this paper to review in detail the experimental work of Theobald, it might be to the benefit of the reader, in order to better understand the experiment, to provide an overview of the way in which the experiment was conducted. Toward this end, a diagram of Theobald's experimental apparatus is shown in Figure 2. In using this test apparatus, the jet of water was directed toward the screen. The screen support was designed so that the distance from the nozzle to the screen could be varied. A 300V power supply was connected with the positive side attached to the grid, and the negative side attached to the nozzle. Thus, as the water jet impinged upon the screen, an electrical circuit was completed through the water jet. As the distance between the nozzle and the screen was increased, the electrical resistance of the water jet increased as the diameter of the jet increased. An initial electrical current reading was taken


with the nozzle and screen close together, and this point was defined as the jet "fully continuous." At some point in each experiment, as the screen-to-nozzle distance was increased, the water jet became too diffuse to support electrical current. This point was defined as "fully discontinuous." The point at which the stream was 50 percent continuous was then taken as midway between these two readings. While some of the assumptions in this experiment may be open to question, suffice it to say that the basic experimental approach is sound and conforms with best experimental practices. The approach attempts to perform all experimental runs under the same conditions so as to eliminate, or at least minimize, any random effects and interactions from creeping into any single run (Taguchi, 1988).

In this paper, we shall analyze the data from Theobald's original experiment, using Taguchi methods, to determine whether or not we can conclude that one type of shape is better than another in promoting stream stability. This would be important to do if we were designing nozzles for commercial use, because it would be essential to produce and sell the type of nozzle with the greatest water jet stability or coherence. It would also be important for the contour of such a nozzle to be determined at the least possible expense. Determining the best type of nozzle of the three would allow more experimentation with that best type to determine the optimum internal surface shape for maximizing water jet stability at least cost.

LITERATURE REVIEW
Since the subject of fire hose nozzle design is not a popular area for research, the literature in this area is necessarily sparse. Nonetheless, there is some small amount of pioneering literature in this area. A very important area of investigation in nozzle design would be that of the influence of Reynold's number on water jet stability. For the reader who may not be acquainted with Fluid Mechanics, Reynold's number is a measure of the turbulence of flowing fluids. In general, the lower the Reynold's number the smoother the flow, and the greater the Reynold's number the more chaotic the flow. When fluid flow is smooth, it is said to be laminar; and when it is chaotic, it is said to be turbulent. Since turbulent flow is antagonistic to a stable water jet, it is obviously important to understand the relationship between Reynold's number and water jet stability. McCarthy and Molloy (1974) investigated this relationship for flow beginning in the laminar region (low Reynold's numbers), through the transition region, to fully turbulent flow (high Reynold's numbers). In this work, however, the same basic type of nozzle was used for each experiment, and only the diameter of the discharge opening was varied. This work resulted in plots of jet break-up length versus jet efflux velocity, or discharge velocity. Rouse, et al. (1952), on the other hand, experimented extensively with concave nozzles of the types 9 and 10 above in Figure 1, and actually proposed a design for concave nozzles. Rouse's design proposal was the basis of the concave nozzles tested by Theobald. The work of Rouse was extended by Whithead, et al. (1951), to high speed fluid flow in wind tunnel applications. This work resulted in a prototype model, the internal contour of which was modified for high speed wind tunnel applications.

An important piece of work in this area of investigation was that of Vereshchagin, et al. (1958), which examined the disintegration of water jets as a function of jet length for various nozzle diameters. In this experimental work it was found that the distance from the nozzle at which a jet of water becomes discontinuous, viz., becomes incoherent, is relatively invariant out to some distance from the nozzle, but after which the discontinuity distance has a Gaussian distribution. Theobald's work re-examined this phenomenon using regression analysis and concluded that the jet discharge maintained approximately 100% coherence out to a distance of approximately 1.75 meters. After this point, jet discontinuity became random out to a distance of approximately 3.5 meters, at which point the jet became discontinuous. It was further found that the random distance between 1.75 and 3.5 meters could then be described with a Gaussian or Normal distribution.
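The break-up behaviour summarized above – essentially full coherence out to roughly 1.75 m, with the point of discontinuity then scattering out to about 3.5 m according to a Gaussian distribution – can be illustrated with a small Monte Carlo sketch. The mean and standard deviation used below are assumptions made only for illustration (the break-up distance is centred midway between 1.75 m and 3.5 m, with the upper bound treated as roughly two standard deviations out); they are not fitted to Theobald's data.

import random

random.seed(1)

COHERENT_TO = 1.75   # m, distance of ~100% coherence reported by Theobald
UPPER_BOUND = 3.50   # m, distance by which the jet is fully discontinuous
MU = (COHERENT_TO + UPPER_BOUND) / 2   # assumed mean break-up distance
SIGMA = (UPPER_BOUND - MU) / 2         # assumed spread (upper bound ~ 2 sigma out)

def breakup_distance():
    # One simulated break-up distance; never earlier than the coherent region.
    return max(COHERENT_TO, random.gauss(MU, SIGMA))

samples = [breakup_distance() for _ in range(10_000)]
print(f"mean break-up distance ~ {sum(samples) / len(samples):.2f} m")
print(f"fraction broken up by 3.5 m ~ {sum(d <= UPPER_BOUND for d in samples) / len(samples):.1%}")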


In summary, it may be said that the work of Theobald is thorough, ingenious, and carefully performed. For our purposes, the only criticism that may be lodged against it is that it only tested two specimens each of the convex nozzle shape, and two each of the concave nozzle shape. In contrast, six types of straight-sided nozzles were tested. The small number of convex and concave nozzles tested is probably due to the additional expense in manufacturing them. At any rate, since our paper is only aimed at indicating how experimental method may be used to guide research projects to conclusions at minimum cost, we shall take the shapes tested as being sufficient for our purposes. Most importantly, it is to be pointed out that in none of the above references has it been attempted to analyze the various nozzle shapes as a class to determine which might yield the greatest water jet stability, and thus be desirable to analyze as a class to find that internal shape which maximizes water jet stability.

METHODOLOGY
The general methodology to be employed in this paper is that of using the original data of Theobald and analyzing it using different methods from his, and for purposes different from his. We shall, therefore, analyze the data using Taguchi methods to determine whether any of the three classes of internal nozzle shapes, straight, concave or convex, is to be preferred in maintaining the continuity of the water jet. Thus, having identified that class of nozzles best maintaining jet stream stability, it would then be possible to plan and perform further research to arrive at the optimal shape at the least possible cost.

The data taken from this experiment by Theobald are shown in Table 1.

Table 1 – Shape factors for straight, convex and concave nozzles

                    Nozzle Exit Velocity
Nozzle   11.73  14.37  16.59  20.32  23.46  28.74  33.18  38.91
2         1.14   0.97   0.98   0.88   0.86   0.83   0.82   0.82
3         0.93   0.92   0.95   0.89   0.89   0.83   0.86   0.87
4         0.85   0.85   0.92   0.86   0.81   0.83   0.88   0.93
5         0.78   0.80   0.81   0.75   0.77   0.78   0.80   0.80
6         0.66   0.64   0.60   0.53   0.55   0.54   0.58   0.59
7         1.06   0.84   0.81   0.78   0.82   0.82   0.82   0.91
8         1.02   0.97   1.01   0.96   0.95   0.94   0.94   0.94
9         0.97   0.86   0.78   0.76   0.76   0.75   0.74   0.74
10        1.00   1.00   1.00   1.00   1.00   1.00   1.00   1.00

To reiterate, this data was taken for six straight-sided nozzles, two convex nozzles and two concave nozzles. Table 1 above, however, reflects only the data for nozzles 2 thru 10. This is because nozzle 1, a straight nozzle in which the sides of the nozzle were parallel to the flow stream, exhibited instability the instant the water exited. For this reason this nozzle was omitted from the analysis, resulting in the nine sets of data shown. Thus, data for nozzles 2 thru 6 represent straight-sided configurations, data for nozzles 7 and 8 the convex configurations, and data for nozzles 9 and 10 the concave configurations. Table 2 below shows this data stratified for the three nozzle types with a working mean of 0.8 deducted from each value.

                         Nozzle Exit Velocity
Type      Nozzle  11.73  14.37  16.59  20.32  23.46  28.74  33.18  38.91
Straight      2    0.34   0.17   0.18   0.08   0.06   0.03   0.02   0.02
Sided         3    0.13   0.12   0.15   0.09   0.09   0.03   0.06   0.07
              4    0.05   0.05   0.12   0.06   0.01   0.03   0.08   0.13
              5   -0.02   0.00   0.01  -0.05  -0.03  -0.02   0.00   0.00
              6   -0.14  -0.16  -0.20  -0.27  -0.25  -0.26  -0.22  -0.21
Convex        7    0.26   0.04   0.01  -0.02   0.02   0.02   0.02   0.11
              8    0.22   0.17   0.21   0.16   0.15   0.14   0.14   0.14
Concave       9    0.17   0.06  -0.02  -0.04  -0.04  -0.05  -0.06  -0.06
             10    0.20   0.20   0.20   0.20   0.20   0.20   0.20   0.20

Table 2 – Shape factor working data for the three nozzle types

As previously noted, each of the data points in the above table is the average of ten test runs, so that the results in the table represent a total of 720 test runs. The above data is shown in Table 3 with the initial calculations performed in preface to the Taguchi analysis.

We wish to calculate the total variation for the shape factors, but because the number of straight-sided test nozzles tested, five, is different from that


for the number of convex and concave test nozzles, two, it is necessary to obtain this calculation indirectly. In order to do this, we must first calculate the variation among shape factors for the various nozzle-velocity combinations, ST1. To proceed, we must average each column for straight-sided nozzles, each for convex nozzles, and each for concave nozzles as shown in Table 3. This results in the data shown in Table 4.

Table 3 – Shape factor data initial calculations

                         Nozzle Exit Velocity
Type      Nozzle  11.73  14.37  16.59  20.32  23.46  28.74  33.18  38.91   Total
Straight      2    0.34   0.17   0.18   0.08   0.06   0.03   0.02   0.02    0.90
Sided         3    0.13   0.12   0.15   0.09   0.09   0.03   0.06   0.07    0.74
              4    0.05   0.05   0.12   0.06   0.01   0.03   0.08   0.13    0.53
              5   -0.02   0.00   0.01  -0.05  -0.03  -0.02   0.00   0.00   -0.11
              6   -0.14  -0.16  -0.20  -0.27  -0.25  -0.26  -0.22  -0.21   -1.71
          Total    0.36   0.18   0.26  -0.09  -0.12  -0.19  -0.06   0.01    0.35
Convex        7    0.26   0.04   0.01  -0.02   0.02   0.02   0.02   0.11    0.46
              8    0.22   0.17   0.21   0.16   0.15   0.14   0.14   0.14    1.33
          Total    0.48   0.21   0.22   0.14   0.17   0.16   0.16   0.25    1.79
Concave       9    0.17   0.06  -0.02  -0.04  -0.04  -0.05  -0.06  -0.06   -0.04
             10    0.20   0.20   0.20   0.20   0.20   0.20   0.20   0.20    1.60
          Total    0.37   0.26   0.18   0.16   0.16   0.15   0.14   0.14    1.56

                          Nozzle Exit Velocity
Nozzle Type     11.73  14.37  16.59  20.32  23.46  28.74  33.18  38.91   Total
Straight Sided   0.07   0.04   0.05  -0.02  -0.02  -0.04  -0.01   0.00    0.07
Convex           0.24   0.11   0.11   0.07   0.08   0.08   0.08   0.13    0.90
Concave          0.19   0.13   0.09   0.08   0.08   0.08   0.07   0.07    0.78
Total            0.50   0.27   0.25   0.13   0.14   0.12   0.14   0.20    1.75

Table 4 – Averaged shape factor data for the three nozzle designs

For this data we obtain a correction factor of

CF = [Σ(xij)Str + Σ(xij)Cnvx + Σ(xij)Cncv]^2 / 24
CF = (0.07 + 0.90 + 0.78)^2 / 24
CF = 0.1269

Then the variation among shape factors for the various nozzle-velocity combinations, ST1, is

ST1 = CSS1^2 + CSS2^2 + … + CCnvx1^2 + CCnvx2^2 + … + CCncv1^2 + CCncv2^2 + … + CCncv8^2 – CF
ST1 = 0.07^2 + 0.04^2 + … + 0.24^2 + 0.11^2 + … + 0.19^2 + 0.13^2 + … + 0.07^2 – 0.1269
ST1 = 0.0935

Next, we calculate the variation among nozzles, SN. SN is

SN = (CSST^2 + CCnvxT^2 + CCncvT^2) / 8 – CF
SN = (0.07^2 + 0.90^2 + 0.78^2) / 8 – 0.1269
SN = 0.0499

The variation among velocities, SV, is calculated as

SV = (CV1T^2 + CV2T^2 + … + CV8T^2) / 3 – CF
SV = [(0.07 + 0.24 + 0.19)^2 + … + (0.00 + 0.13 + 0.07)^2] / 3 – 0.1269
SV = 0.0374

Now, our purpose is to determine whether any of the three shapes is superior to the others in promoting jet stability. In order to do this it is necessary to construct three pairs of linear, orthogonal contrasts. Each pair of these contrasts will compare one of the general shapes with the other two to determine whether it is statistically superior to them. In the following contrasts, N1, N2 and N3 indicate the straight-sided, convex and concave nozzles, respectively. The first pair of contrasts compares the straight-


L1(N) = N1/4 − (N2 + N3)/8
L2(N) = (N2 − N3)/4

For the details of calculating the variations of these contrasts, the reader may wish to consult Taguchi (1988). The variations for this pair of contrasts evaluate to …

SL1(N) = 0.0491 and SL2(N) = 0.0008

The second pair of contrasts compares the convex nozzles with the straight-sided and concave nozzles and is …

L3(N) = N2/4 − (N1 + N3)/8
L4(N) = (N1 − N3)/4

The variations for this pair of contrasts evaluate to …

SL3(N) = 0.0184 and SL4(N) = 0.0315

The third pair of contrasts compares the concave nozzles with the straight-sided and convex nozzles and is …

L5(N) = N3/4 − (N1 + N2)/8
L6(N) = (N1 − N2)/4

The variations for this pair of contrasts evaluate to …

SL5(N) = 0.0074 and SL6(N) = 0.0425
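
Because the two contrasts in each pair are mutually orthogonal, their variations should together reproduce the nozzle variation SN = 0.0499 obtained earlier, a point the Results section below returns to. The following check is ours, added purely for illustration:

# Each orthogonal pair of contrasts partitions the nozzle variation SN,
# so the two variations within a pair should add up to SN = 0.0499.
SN = 0.0499
pairs = {
    "L1/L2 (straight-sided vs. the rest)": (0.0491, 0.0008),
    "L3/L4 (convex vs. the rest)":         (0.0184, 0.0315),
    "L5/L6 (concave vs. the rest)":        (0.0074, 0.0425),
}
for label, (s_a, s_b) in pairs.items():
    print(f"{label}: {s_a + s_b:.4f}  (SN = {SN})")
# every pair sums to 0.0499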
As a matter of good practice, the interactions between (among) control variables should always be examined. Thus, in our case, the possibility of interactions between a given nozzle design and the various components of jet exit velocity should be examined. This has in fact been done in our analysis for all three sets of linear, orthogonal contrasts, and the results are shown in Table 5 below. Again, for the sake of brevity, the actual calculations are omitted and the reader is referred to Taguchi (1988, vol. 1) for the details.

               Components of Jet Exit Velocity
Contrasts    Linear    Quadratic    Cubic    Quartic
SL1(N)       0.0000      0.0000     0.0000    0.0000
SL2(N)       0.0001      0.0001     0.0000    0.0000
SL3(N)       0.0000      0.0000     0.0000    0.0000
SL4(N)       0.0001      0.0001     0.0000    0.0000
SL5(N)       0.0001      0.0001     0.0000    0.0000
SL6(N)       0.0000      0.0000     0.0000    0.0000

Table 5 – Interactions of contrasts with jet velocity components

We are nearly prepared to calculate the total variation, ST, but must first calculate the first- and second-order error variations, Se1 and Se2, respectively. For Se1 …

Se1 = ST1 − SN − SV − SL5(N)xVl − SL6(N)xVl
Se1 = 0.0935 − 0.0499 − 0.0374 − 0.0001 − 0.0000
Se1 = 0.0061

The details of the calculation for Se2 are considerably more complicated. Again the reader is referred to volume 1 of Taguchi for the details. Because the number of repetitions for the straight-sided nozzles, five (5), is different from that for the convex and concave nozzles, two (2), it is necessary to calculate an effective number of repetitions in order to calculate Se2. In general, Se2 is equal to the variation among repetitions, ST1, divided by the effective number of repetitions, r-bar. The values of ST1 and r-bar were found to be …

ST1 = 0.8918 and r-bar = 2.50

Se2 = ST1 / r-bar
Se2 = 0.8918 / 2.5
Se2 = 0.3567
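
The two error variations can be verified with elementary arithmetic. The sketch below is ours; note that the text uses the symbol ST1 both for the first-order total appearing in the Se1 equation (0.0935) and for the variation among repetitions (0.8918), so the two quantities are given separate names here.

# Arithmetic check of the first- and second-order error variations.
ST1_first_order = 0.0935       # total used in the Se1 equation above
SN, SV = 0.0499, 0.0374
S_L5xVl, S_L6xVl = 0.0001, 0.0000

Se1 = ST1_first_order - SN - SV - S_L5xVl - S_L6xVl
print(round(Se1, 4))           # 0.0061

ST1_repetitions = 0.8918       # variation among repetitions
r_bar = 2.50                   # effective number of repetitions
Se2 = ST1_repetitions / r_bar
print(round(Se2, 4))           # 0.3567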


RESULTS

It will be observed that the variations of all three pairs of linear contrasts sum to 0.0499. This is because in each pair the two contrasts are mutually orthogonal, which means that their variations can be added like the squares of the sides of a right triangle. Since in each case the sum of the variations of the members is 0.0499, the variation of the nozzles, SN, is entirely accounted for. In the first pair of contrasts, that comparing the straight-sided design with the convex and concave pair, the straight-sided design has a variation of 0.0491 and the convex and concave pair a variation of 0.0008. Since the second variation is quite small, the first accounts for almost all of the total nozzle variation of 0.0499. This is because the jet-stream stabilities of the convex and concave nozzles are fairly similar. In the second pair of contrasts, that for the convex design as opposed to the straight-sided and concave designs, the convex design shows a variation of 0.0184 and the straight-sided and concave comparison a variation of 0.0315. This indicates a fairly significant variation for the convex design and a significant variation between the concave and straight-sided designs. In the third pair of contrasts, that for the concave design as contrasted with the convex and straight-sided designs, the respective variations are 0.0074 and 0.0425. These indicate that the concave design shows less variation than the convex design did in contrast L3 above, and an even more significant variation between the convex and straight-sided designs than that between the concave and straight-sided designs in contrast L4. From all of this, the following conclusions may be drawn:

1) The jet stability afforded by the straight-sided design is insignificant compared with that of either the convex or the concave designs.

2) The jet stability afforded by the convex design is approximately 26% greater than that for the concave design.

3) There are no significant interactions between any of the contrasts and the components of jet velocity.

Conclusion “2” is particularly significant because it differs from the conclusion drawn by Theobald, viz., that the concave design was superior to the other two. It was for this reason that Theobald used the results from the concave design to normalize all of the results for the other designs; thus, all of the values for the concave design in the last row of Table 1 were set to 1. While it is true that the second of the two concave designs afforded the greatest jet stability, the first one tested demonstrated a significantly lower stability than several of the others. Consequently, the convex design demonstrates the greatest stability of the three types, and for this reason the convex design will be under consideration for the remainder of the analysis. Conclusion “3” permits any slight variations for interactions between contrasts and jet velocity components to be combined with the error variation.

ANALYSIS OF TEST RESULTS

With a value for Se2 it is now possible to calculate ST. The relevant values heretofore calculated in this paper are used in the analysis of variance shown below in Table 6. Since the above results indicate that the convex nozzle design is superior to the concave nozzle design, the analysis of variance results are shown in this table for only the convex design. The general equation for ST is …

ST = SL5(N) + SL6(N) + SL5(N)xVl + SL6(N)xVl + Se1 + Se2


Table 6 – ANOVA for shape factor analysis

Source         Component     f       S        V        F0
Nozzle         SL5(N)        1     0.0074   0.0074   1.2196
               SL6(N)        1     0.0425   0.0425   7.0343
Jet Velocity   Linear        1     0.0198   0.0198   3.2783
               Quadratic     1     0.0155   0.0155   2.5631
               Residual      5     0.0021   0.0004   0.0683
N x V          SL5(N)xVl     1     0.0001   0.0001   0.0137
               SL6(N)xVl     1     0.0000   0.0000   0.0000
e1                          12     0.0061   0.0005
e2                          48     0.3567   0.0074
Total                       71     0.4502

From the above table, it will be observed that SL5(N) does not present a significant source of variation. Further, none of the components of velocity variation proved to be significant. With none of the variations in the velocity components being any more significant than any of the others, it would be expected that there would not be any significant interactions between the convex nozzle shape and any velocity component. It will be seen that this is indeed the case, for neither SL5(N)xVl nor SL6(N)xVl affords a significant source of variation. Although interactions are shown in the table for only the linear component of velocity, they were checked for the higher-order velocity components and found to be even less significant than those for the linear component. It must be said that this development proved to be contrary to the authors’ intuition. At the beginning of the analysis it was believed that, because of jet velocity profile considerations, there would probably be an interaction between the nozzle shape and the quadratic component of velocity. However, this did not prove to be the case. As is customary in such analyses of variance, when a source of variation proves to be insignificant, it is combined with the error variation. When this is done in the above analysis of variance, the result is the modified analysis of variance shown in Table 7 below.

Source         Component     f       S        V        F0       S'      ρ (%)
Nozzle         SL6(N)        1     0.0425   0.0425   7.0343   0.0365     8.11
Jet Velocity   SV            7     0.0374   0.0053   0.8833   0.0374     8.31
e'                          63     0.3702   0.0060            0.3763    83.58
Total                       71     0.4502                     0.4502   100.00

Table 7 – Modified analysis of variance for water jet stability

It will be seen from the above table that SL6(N) is significant, and consulting a table of F statistics discloses that the significance is at approximately the 99% level. Jet velocity, on the other hand, affords no significant source of variation. In all fairness it must be said that a similar analysis indicates a variation for the concave nozzle that is significant at approximately the 96.5% level. The reason it is believed that the convex family of nozzles would be preferable for further investigation is that there is a significant difference between the two shapes of the concave nozzle. The two shapes of the convex nozzle, however, were much nearer to each other and not far below the better of the two shapes for the concave nozzle. Lastly, it will also be seen that there is no significant variation in the jet velocity.
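
The F0, S' and ρ columns of Tables 6 and 7 follow from the tabulated sums of squares under the usual Taguchi conventions. The sketch below is our own reconstruction of that bookkeeping rather than the authors' calculation; the pooling choices (F ratios taken against the pooled first- and second-order error, SV carried into Table 7 unadjusted) are inferred from the tabulated numbers, and the small discrepancies are rounding.

# Reproduce the F0, S' and rho (%) columns of Tables 6 and 7 from the sums of
# squares, assuming the usual Taguchi conventions: F0 = V_source / V_error,
# pure variation S' = S - f * Ve, and rho = S' / ST.  (Our sketch, with the
# pooling choices inferred from the tabulated values.)
ST = 0.4502

# Table 6: F ratios against the pooled first- and second-order error.
Ve6  = (0.0061 + 0.3567) / (12 + 48)   # about 0.0060
F_L6 = 0.0425 / Ve6                    # about 7.03  (table: 7.0343)
F_L5 = 0.0074 / Ve6                    # about 1.22  (table: 1.2196)

# Table 7: the insignificant sources are pooled into e'.
S_e = 0.0074 + 0.0001 + 0.0000 + 0.0061 + 0.3567   # about 0.3703 (table: 0.3702)
f_e = 1 + 1 + 1 + 12 + 48                           # 63 degrees of freedom
Ve7 = S_e / f_e

S_nozzle_pure = 0.0425 - 1 * Ve7       # about 0.0366 (table: 0.0365)
S_error_pure  = S_e + 1 * Ve7          # about 0.3762 (table: 0.3763)

rho_nozzle = 100 * S_nozzle_pure / ST  # about 8.1 %
rho_vel    = 100 * 0.0374 / ST         # about 8.3 %  (SV carried over unchanged)
rho_error  = 100 * S_error_pure / ST   # about 83.6 %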
CONCLUSIONS

The objectives of the foregoing analysis were twofold. First, we wished to analyze Theobald's original data for three water jet nozzle designs to determine whether our analysis would lead us to the same conclusions reached by Theobald in his original analysis; second, we wished to demonstrate a method for using Taguchi methods to choose one design from among others for further investigation.

As to the first objective, that of reaching the same or differing conclusions, we have come to somewhat different conclusions from those of Theobald. At the very least it may be said that there is no statistical basis for selecting the concave design over the convex design.


The foregoing analysis shows that while both designs are statistically significant above the 95% level, they do not differ significantly from each other. The convex design has a significance about 2.5% greater, being at approximately 99%. Thus, we respectfully differ with Theobald's conclusion as to the superiority of the concave design over the convex. The only conclusion that may legitimately be drawn from a purely statistical point of view is that designs with curved sides are superior to those with straight sides.

For the second objective, the method of linear, orthogonal contrasts was used to investigate whether one of the designs, straight-sided, convex or concave, might be superior to the others. It was found, of course, that the convex design was significantly superior to the straight-sided design, but only slightly so to the concave design; although slightly superior to the concave design, the convex design was not statistically superior to it. Both the concave and convex designs were found to demonstrate water jet stability above the 95% confidence level. Even so, it is believed that the convex design would be the better candidate for further investigation, because it shows considerably less variability between runs. The larger variability for the concave design suggests that some experimental error may have crept into the test process for that design. Also, the first of the two concave designs tested was significantly below the second one, and considerably below both convex designs. It is also important to point out that in examining the superiority of one nozzle design over the others, we are really examining the tendency of one type of shape to provide greater jet stability than the others. For this reason, it would have been preferable to have tested more than just two shapes of each of the convex and concave designs. If this were done, more confidence could be placed upon the conclusions drawn from such experimental analysis.

REFERENCES

1. Theobald, C. (1981). “The Effect of Nozzle Design on the Stability and Performance of Turbulent Water Jets,” Fire Safety Journal, 4 (1), pp. 1-13.

2. Taguchi, G. (1988). System of Experimental Design, Vol. 1, UNIPUB/Kraus International Publications and American Supplier Institute, White Plains, NY, and Dearborn, MI.

3. McCarthy, M. A., and Molloy, N. A. (1974). “Review of the stability of liquid jets and the influence of nozzle design,” Chemical Engineering Journal, 7, pp. 1-20.

4. Rouse, H., et al. (1952). “Experimental investigation of fire monitors and nozzles,” Transactions of the American Society of Civil Engineers, 117, p. 1147.

5. Whitehead, L. G., Wu, L. Y., and Waters, M. H. L. (1951). “Contracting ducts of finite length,” The Aeronautics Quarterly, 2, pp. 254-271.

6. Vereshchagin, L. F., Semerchan, A. A., and Sekoyan, S. S. (1958). “On the problem of the break-up of high speed jets of water,” Soviet Physics – Technical Physics, 4, pp. 38-42.


Figure 1: Nozzle shapes examined by Theobald

Figure 2: Theobald’s test apparatus for investigating water jet continuity


In's and Out's of Product Placement


Kaylene C. Williams, Al Petrosky, and Edward H. Hernandez
California State University, Stanislaus
KWilliams@csustan.edu, APetrosky@csustan.edu, and eh@hrmgt.com

Robert A. Page
Southern Connecticut State University
pager1@southernct.edu

INTRODUCTION and products during the natural process of the


Product or brand placement continues to be movie, television program, or content vehicle.
an important practice within advertising and (Panda, 2004; Cebrzynski, 2006) That is,
integrated marketing communications in which product placement in popular mass media
advertisers push their way into content far more provides exposure to potential target consumers
aggressively than ever before (The Economist, and shows brands being used or consumed in
2005). While product placement is riskier than their natural settings (Stephen and Coote, 2005).
conventional advertising, it is becoming a Ultimately, the product or brand is seen as a
common practice to place products and brands quality of the association with characters using
into mainstream media including films, and approving of the product placement, e.g.,
broadcast and cable television programs, Harold and Kumar on a road trip to find a White
computer and video games, blogs, music Castle, Austin Powers blasting into space in a
videos/DVDs, magazines, books, comics, Big Boy statue rocket, Will Ferrell promoting
Broadway musicals and plays, radio, Internet, Checkers and Rally's Hamburgers in the
and mobile phones (Stephen and Coote, 2005). NASCAR comedy Talladega Nights, MSN
In recent years, product placement frequently appearing in Bridget Jones' Diary, BMW and its
has been used as the basis of multi-million dollar online short films, Amazon.com's Amazon
marketing and promotional campaigns with Theatre showcasing stars and featured products,
more than 1000 firms that specialize in product Ford and Extreme Makeover, Tom Hanks and
placement (Balasubramanian, Karrh, and FedEx and Wilson, Oprah giving away Buicks,
Patwardhan, 2006; Argan, Velioglu, and Argan, Curious George and Dole, Herbie and VW,
2007). The purpose of this paper is to examine Simpsons' and the Quik-E-Mart, Forrest Gump
product placement in terms of definition, use, and the Bubba Gum Shrimp Co. restaurants,
purposes of product placement, specific media Jack Daniels and Mad Men, LG phones in The
vehicles, variables that impact the effectiveness Office, and Living the Hills Life, just to name a
of product placements, the downside of using few. In addition, Weaver (2007) gives
product placements, and the ethics of product numerous examples of product placements
placements. related to tourism.
Even though product placement was named
PRODUCT PLACEMENT DEFINED and identified formally only a decade ago,
Product placement--also known as branded product placement is not new (Balasubramanian,
entertainment or product integration--is a 1994). Beginning in the 1930s, Proctor &
marketing practice in advertising and promotion Gamble broadcasted on the radio its "soap
wherein a brand name, product, package, operas" featuring its soap powders. Also,
signage, or other trademark merchandise is television and film were used by the tobacco
inserted into and used contextually in a motion companies to lend glamour and the "right
picture, television, or other media vehicle for attitude" to smoking (The Economist, 2005).
commercial purposes. In product placement, the However, due to poorly organized efforts and
involved audience gets exposed to the brands negative publicity about the surrender of media

182
Proceedings of the 2008 IEMS Conference

content to commercialization, product Television product placements are the dominant


placements were relatively dormant after the choice of brand marketers, accounting for 71.4%
Depression. Product placements were of global spending in 2006 (Schiller, 2007).
recatalyzed in the 1960-70s with a growth spurt The U.S. is the largest and fastest growing
during the 1980s and 1990s. (Balasubramanian, paid product placement market, $1.5 Billion in
Karrh, and Patwardhan, 2006) But, movies and 2005 and $2.9 Billion expected for 2007
programs are watched many times, and product (Brandweek, 2007). Some 75% of U.S. prime-
placements also are not limited in time to the time network shows use product placements.
original filmed item. In addition, today's This number is expected to increase due to the
technology can insert product placements in fact that 41% of U.S. homes are expected to
places they were not before. This digital product have and use digital video recorders that can
integration is a new frontier for paid product skip through commercials. Because of this,
placement. As a result, consumers will see more product placements are likely to eclipse
and more product placements that are traditional advertising messages. (Russell and
strategically placed in the media. Most product Stern, 2006)
placements are for consumer products, yet In terms of specific numbers, U.S. product
service placements appear more prominently. placement occurrences for Q1 2007 prime-time
Service placements tend to be woven into the network programming for the Top 10 programs
script and are probably more effective than featured 8,893 product placements. American
product placements that are used simply as Idol was the leader in terms of the number of
background props. (La Ferle and Edwards, product placements (3,113 occurrences in Q1
2006) 2007), followed by Amazing Race All Stars,
Product placements may be initiated by a Beauty and the Geek, The Apprentice, The
company that suggests its products to a studio or Pussycat Dolls Present, Extreme Makeover
TV show, or it might work the other way Home Edition, America's Next Top Model,
around. Intermediaries and brokers also match NCIS, The Office, and Til Death. The Top 10
up companies with product placement brands that featured product placements for Q1
opportunities. (Stringer, 2006) Costs for 2007 included Coca-Cola, Pussycat Dolls
product placements can range from less than Lounge Nightclubs, the Boston Red-Sox
$10,000 to several hundred thousand dollars. Baseball Team, Nike Apparel, Dell, Chicago
However, television and movie producers Bears Football Team, Cingular, Wireless Text
routinely place products in their entertainment Messaging, Hewlett-Packard, Nike Sport
vehicles for free or in exchange for promotional Footwear, and Under Armour apparel. (PR
tie-ins. (Cebrzynski, 2006) Newswire, 2007)
Generally, U.S. product placement markets
USE OF PRODUCT PLACEMENT are much more advanced than other countries
According to the research company PQ such that other countries often aspire to the U.S.
Media, global paid product placements were model. The next largest global markets are
valued at $3.07 Billion in 2006 (projected at Brazil, Australia, France, and Japan. China is
$4.38 Billion in 2007) with global unpaid forecast to be the fastest growing market for
product placements valued at about $6 Billion in product placements this year, up 34.5%
2005 and $7.45 Billion in 2006. Global paid (Thomasch, 2007). Product placement methods
product placement spending is expected to grow differ widely by country given varying cultures
at a compounded annual rate of 27.9% over and regulations. Most product placements are in
2005-2010 to $7.55 Billion. Consequently, five product areas: transportation and parts,
product placement growth is expected to apparel and accessories, food and beverage,
significantly outpace that of traditional travel and leisure, and media and entertainment.
advertising and marketing. The overall value of (Business Wire, 2006)
paid and unpaid product placement is expected
to increase 18.4% compounded annually to
$13.96 Billion in 2010. (Business Wire, 2006)

183
Proceedings of the 2008 IEMS Conference

PURPOSES OF PRODUCT PLACEMENT (3) To increase consumer memory and recall


Product placement can be very useful. Its of the brand or product
purposes include achieving prominent audience Product placements can have a significant
exposure, visibility, and attention; increasing effect on recall (Panda, 2004). For example,
brand awareness; increasing consumer memory memory improves when visual/auditory
and recall; creating instant recognition in the modality and plot connection are congruent
media vehicle and at the point of purchase; (Russell, 2002). Pokrywczynski (2005) has
changing consumers' attitudes or overall found that viewers can correctly recognize and
evaluations of the brand; changing the recall placed brands in movies, using aided
audiences' purchase behaviors and intent; recall measures and free recall measures. Also,
creating favorable practitioners' views on brand brands placed prominently in a movie scene
placement; and promoting consumers' attitudes enjoy higher brand recall than those that are not.
towards the practice of brand placement and the Verbal and visual brand placements are better
various product placement vehicles. (Panda, recalled than placements having one or the
2004) other. Also, sitcoms rather than reality shows
tend to spark better recall for product placements
(1) To achieve prominent audience exposure, (McClellan, 2003). Brand recall is typically no
visibility, and attention higher than 30% (Pokrywczynski, 2005).
Product placements can have a significant
effect on message receptivity (Panda, 2004). (4) To create recognition of the
The sponsor of product placements is likely to product/brand in the media vehicle and at the
gain goodwill by associating itself with a point of purchase
popular program targeted to a specific audience. Product placement can have a significant
The more successful the program, the longer effect on recognition (Panda, 2004). Familiar
shelf life of the product placement. (Daugherty brands achieve higher levels of recognition than
and Gangadharbatla, 2005; d'Astous and Seguin, unfamiliar brands (Brennan and Babin, 2004).
1999) More frequent viewers and viewers who In addition, product or brand placement
enjoy the program pay more attention to product recognition levels received from audio-visual
placements. Brands need to be visible just long prominent placements exceed the recognition
enough to attract attention, but not too long to rates achieved by visual-only prominent
annoy the audience. (Argan, Velioglu, and placements (Brennan and Babin, 2004). Some
Argan, 2007) 57.5% of viewers recognized a brand in a
placement when the brand also was advertised
(2) To increase brand awareness during the show. That number is higher than the
Nielsen Media Research has shown that 46.6% of viewers who recognized the brand
product placement in television shows can raise from watching only a television spot for the
brand awareness by 20% (Cebrzynski, 2006). brand. (Cebrzynski, 2006)
Tsai, Liang, and Liu (2007) found that higher
brand awareness results in a greater recall rate, (5) To bring desired change in consumers'
more positive attitudes, and a stronger intention attitudes or overall evaluations of the brand
of buying. When brand awareness is high, a The influence of product placement on
positive attitude toward the script leads to a attitudes toward a product or brand has not been
higher recall rate. Also, when a brand gains a researched very much. With this in mind,
certain level of awareness, the more positive the however, no differences have been found in
attitude toward product placement, the stronger viewers' attitudes toward a product or brand. On
its effect on recall rate, attitude, and intention of the other hand, evidence suggests that
buying. However, when product/brand consumers align their attitudes toward products
awareness is not high enough, consumers with the characters' attitudes to the products. In
typically fail even to remember the names of the addition, this process is driven by the consumers'
advertised products. attachment to the characters. (Pokrywczynski,
2005; Russell and Stern, 2006) Argan, Velioglu,

184
Proceedings of the 2008 IEMS Conference

and Argan (2007) suggest that the audience pays market due to technology such as TiVo and
attention to and accepts brand placement in DVD recorders.
movies and takes celebrities as references when
shopping. However, the movie should not be (8) To promote consumers' attitudes towards
over commercialized. At the same time, the practice of brand placement and the
attitudes toward product placements do not various product placement vehicles
differ based on gender, age, income, or Viewers tend to like product placements as
education. Authors van Reijmersdal, Neijens, long as they add realism to the scene. Snoody
and Smit (2007) have found that as consumers (2006) has found that viewer enjoyment of
watch more episodes, the brand image becomes product placements actually increased for media
more in agreement with the program image. vehicle versions of product placements where
This confirms that learning and human products were an integral part of the script. He
association memory are important to brand conjectures that peoples' lives are so saturated
placement. with brands that the inclusion of identifiable
products adds to the sense of reality, that is,
(6) To bring a change in the audiences' validates the individual's reality. About half of
purchase behaviors and intent respondents said that they would be more likely
Product placements are associated with to buy featured products. People with more
increased purchase intent and sales, particularly fashionable and extroverted lifestyles typically
when products appear in sitcoms, e.g., Ally have more positive attitudes toward product
McBeal in Nick and Nora pajamas, Frasier and placement (Tsai, Liang, and Liu, 2007). Also,
Friends in Starbucks and New World Coffee, product placements are preferred to fictitious
and Cosmopolitan martinis in Sex and the City brands and are understood to be necessary for
(Russell and Stern, 2006; Panda, 2004). In one cost containment in the making of programs and
example, Dairy Queen was featured on The movies (Pokrywczynski, 2005). Also, if brand
Apprentice. The contestants needed to create a image is positive, then consumers' brand
promotional campaign for the Blizzard. During evaluation towards the product placement seems
the week of the broadcast, Blizzard sales were to be more positive (Panda, 2004). Older
up more than 30%. Website hits also were up consumers are more likely to dislike product
significantly on the corporate and Blizzard Fan placements and more likely to consider the
Club sites as well as the Blizzard promotional practice as manipulation (Nelson and McLeod,
site. While DQ had six minutes of screen time, 2005). (Daugherty and Gangadharbatla, 2005)
the overall tone was a little harsh with two
contestants arguing. So, it was not the most USE OF PRODUCT PLACEMENT IN
positive environment for good, old-fashioned SPECIFIC MEDIA
DQ. However, it cost DQ in the "low seven Most product placement is done through
figures" to appear on the show and run its television, film, and video games. However,
supporting promotion. Not bad for a 30% regardless of the media used, the brand's image
increase in sales. (Cebrzynski, 2006) and the content vehicle need to fit in such a way
that the product/brand image will not be harmed
(7) To create favorable practitioners' views and that attention will be brought to the product
on brand placement or brand (Cebrzynski, 2006). Also, because
Practitioners' views on product placement advertisers continue to look for ways to stay in
generally are favorable or else the product touch with consumers, they easily could follow
placement market would not continue to their audiences into less-regulated media such as
increase. Practitioners remain positive about the Internet and 3-G mobile phones. As web-
product placements as long as no harm is done, connected television becomes a practical reality,
sales and brand image go up, and consumers are a user-driven environment and peer-to-peer file
positive about the product and brand. Also, swapping is being reinforced. While new
product placements help the practitioner make platforms such as 3G and MPEG-4 create
up for an increasingly fragmented broadcast greater opportunities for interactivity, successful


product placement still must be relevant to its positive attitude toward this form of marketing
host content. (New Media Age, August 11, communication, feeling it is preferable to
2005) commercials shown on the screen before the
movie. (d'Astous and Sequin, 1999) More
Television frequent viewers and viewers who enjoy the
Television viewing is complicated with the movie more, pay attention to product placements
use of zipping, zapping, TiVo, and DVRs. That in the movie (Argan, Veliogly, and Argan,
is, the audience can shift the channel, change the 2007).
program, and slow down or fast-forward the Shapiro (1993) has classified four types of
program to avoid advertising. In addition, media product placements in movies: (1) provides
clutter, similarity of programming across only clear visibility of the product or brand
channels, and channel switching behavior all being shown without verbal reference, e.g., a
compound the advertising effectiveness of bottle of Coca-Cola sitting on the counter; (2)
television. (Panda, 2004) As a result, top-rated used in a scene without verbal reference, e.g.,
television shows are not necessarily the best actor drinks a Coca-Cola but does not mention
places for product placements. Product anything about it; (3) has a spoken reference,
placements depend on a number of factors, e.g., "Boy, I'm thirsty for a Coke"; and (4)
including length of the time on air, when and provides brand in use and is mentioned by a
how products are woven into the story line, and main star, e.g., actor says "This Coke tastes so
targeted audience. (Friedman, 2003) refreshing" while drinking the Coke. The star
Plot connection (Russell, 1998, 2002; using and speaking about the brand in the film is
Holbrook and Grayson, 1996) is the degree to assumed to have higher impact than the mere
which the brand is woven into the plot of the visual display of the brand. That is, meaningful
story. Lower plot placements do not contribute stimuli become more integrated into a person's
much to the story. Higher plot placements cognitive structure and are processed deeply and
comprise a major thematic element. Essentially, generate greater recall. (Panda, 2004)
verbally mentioned brand names that contribute
to the narrative structure of the plot need to be Video Games
highly connected to the plot. Lower plot visual Products and brands are expanding into video
placements need to serve an accessory role to games and even creating their own games.
the story that is lower in plot connection. Visual Essentially, there are three general approaches to
placements need to be lower in plot connections, game advertising: (1) traditional product
and audio and visual placements need to be even placements, signs, and billboard ads that are just
higher in plot connection. (Panda, 2004) Also, in the games from the beginning and cannot be
prominent brand placements in television have a changed later, (2) dynamic advertising wherein
more significant advantage than subtle brand new ads can be inserted at anytime via the
placements (Lord and Gupta, 1998). Internet, and (3) advergames or rebranded
versions of current games that blatantly promote
Film a single product throughout. Until now,
What is the effect of Tom Cruise chewing advergames and product placements have been
Hollywood chewing gum or Agent 007 using a the leading forms of in-game advertising. For
BMW? These are typical examples of product example, Burger King created three games
placement in movies. Higher involvement is suitable for the whole family: racing, action,
required to view a movie than for viewing and adventure, featuring The King, Subservient
television. Television viewers can multi-task in Chicken, and Brooke Burke. Their target market
the home setting thereby reducing their attention of young males meshes well with the Xbox
span and brand retention. Moviegoers actively audience, given that 18-34 year old men have
choose the experience, movie, time, and cost. been a hard group for marketers to get their
As such, they are much more receptive to the product or name in front of. (Cebrzynski, 2006;
brand communication during the movie. (Panda, New Media Age, October 27, 2005)
2004) A majority of movie watchers have a


However, dynamic advertising is taking off and individual factors influence processing depth,
is probably the wave of the future. In dynamic which in turn impacts message outcomes. While
advertising, a marketer can specify where ads this model still is relatively untested, it does
are put, can set times when ads will run, can make sense and provides an integrative
choose which audience type your ad goes to, and framework. In the interim, until it is more
can get all the tracking available for Internet ads. universally tested and accepted, the variables
Approximately 62% of gamers are playing that are discussed consistently in the literature
online at least some of the time. Researchers are expanded below:
can track how long each ad is on the screen, how
much of the screen the ad occupies and from Visual/Audio/Combined Audio-Visual
what angle the gamer is viewing the ad. Product placements can be visual only, audio
Companies pay for ad impressions – one ad only, or combined audio-visual. Most product
impression constitutes 10 seconds of screen placements are visual which involves
time. About half of video games are suitable for demonstration of a product, brand, or visual
ads and could be contextually relevant to the brand identifier with no sound. In the audio
game. More than 50 games already are product placement, there is audio placement but
receiving dynamic ad content, with another 70 the brand is not shown. However, visual only or
set to go by year-end. (Hartley, 2007) audio only may not get noticed. About one in 10
Another wave of the future is 3D ads that are brands on television occurred as plugs verbally
twice as powerful as billboards. Also, the latest delivered by an on-camera personality. Visual
version of digital video standard MPEG-4 offers product placements last an average of 6.2
the possibility to personalize storylines or even seconds with the majority (68%) lasting for 5
hyperlink from tagged content, so viewers can seconds or less. The verbal mentions lasted 5.5
click on objects they want to buy. Whatever the seconds on average. For consumers to form
future brings, advertising and product association with the product placement, the
placements need to fit with the game. That is, brands need to be on screen longer or with more
video game developers need to incorporate impact. The combined hybrid strategy of audio-
advertising in an ambient way that will not visual shows the product or brand and also
distract players. In addition, the center of the mentions it in the medium. Only about 3.1% of
screen gets the most attention based on eye- product placements in sitcoms/dramas were both
tracking research. (New Media Age, October 27, visual and auditory. Recall is enhanced from
2005) dual-modality processing. (La Ferle and
Edwards, 2006) This dual mode requires
VARIABLES THAT IMPACT THE creativity and greater cost so that it does not
EFFECTIVENESS OF PRODUCT interfere with the natural flow of the program.
PLACEMENT (Argan, Velioglu, and Argan, 2007)
There is no universal agreement about what
makes product placements effective. However, Primary Product Placement Strategies
Balasubramanian, Karrh, and Patwardhan (2006) There are three primary product placement
have developed a useful model comprised of strategies (d'Astous and Sequin, 1999; Panda,
four components: (1) execution/stimulus 2004):
factors such as program type, execution 1. Implicit product placement strategy: The
flexibility, opportunity to process, placement brand, logo, the firm, or the product is
modality, placement priming; (2) individual- presented passively with only clear visibility
specific factors such as brand familiarity, within the program without being expressed
judgment of placement fit, attitudes toward formally. This product placement is more
placements, involvement/connectedness with contextual or part of the background with no
program; (3) processing depth or degree of clear demonstration of product benefits, e.g.,
conscious processing; and (4) message wearing clothes with the sponsor's name or a
outcomes that reflect placement effectiveness. scene in front of the Gap. A second type of
According to the authors, execution and implicit product placement strategy is when


the product is used in a scene but no spoken uses may alienate current segments. (d'Astous
attention is given to the product. and Sequin, 1999) In general, however, greater
2. Integrated explicit product placement interplay between the characters and the brand
strategy: In this strategy, the brand, logo, makes the most successful product placement
the firm, or the product plays an active role (La Ferle and Edwards, 2006).
in the scene and is expressed formally
within the program or plot, e.g., Domino's Type of Television or Media Program
pizza delivered in a scene where everybody Consumers' reactions toward a product
is starving and actually eats it. That is, the placement may be impacted by the type of
attribute and benefits are demonstrated television program. There are four general types
clearly and mentioned by the main star. In of television programming: (1) quiz/variety
general, explicit product placements are shows wherein the need for entertainment is
more effective than implicit placements maximized, (2) mini-series/dramas which
(Panda, 2004). satisfy the need to identify oneself with
3. Non-integrated explicit product placement characters, (3) information/services
strategy: In this strategy, the brand, logo, programming which capitalizes on the need for
the firm, or the product is formally information, and (4) sports and cultural events
expressed but not integrated into the wherein the sponsoring is more linked to the
contents of the program, e.g., the program event than it is to the program. Consumer
was sponsored by Toyota or the Hallmark evaluations are most negative when a product
movie by Hallmark. This type of reference placement occurs in mini-series/dramas.
normally is included in the sponsorship deal. (d'Astous and Sequin, 1999) In general, success
of the product placement is dependent on the
Brand/Sponsor Image success of the programming content (Panda,
An expected result of television sponsorship 2004).
is the transfer of the program's image to the On the other hand, 18-34 year-old men spend
sponsor. As such, a product placement needs to little or no time watching TV, but they do like
be associated with programs and stars that are gaming. Unpaid product placements in games
congruent with the sponsoring identity. have been around for more than 10 years. In the
However, a more positive image of the sponsor past, the limitation has been the consoles, but
does not lead to significantly better consumer new consoles are launching that have Internet
reactions to product placements. (d'Astous and connectivity built in. So, product placements in
Sequin, 1999) But, higher levels of involvement video games are expected to increase and
with the movie or program have a greater impact improve. Predictions indicate that dynamic in-
potential. Also, on-set brand placement can game product placements are taking off and will
experience success regardless of the viewer continue to do so. (New Media Age, October 27,
involvement with the segment. (Pokrywczynski, 2005)
2005)
Degree of Viewers' Parasocial Relationship or
Sponsor-Program Congruity Attachment with the Characters
A strong sponsor-program congruity suggests Character attachment is a function of three
that the sponsor's products and activities are variables or stages: (1) inside-program
clearly related to the contents of the program, character-product relation, (2) outside-program
i.e., the product placement is likely to be natural consumer-character relation, and (3) interaction
and consistent with the program. However, between inside and outside influences in the
when sponsor-program congruity is weak, the consumer-product attitude. As a general
product placement may be seen as inconsistent statement, consumers align their attitudes toward
and not credible. For example, if a sponsor has a product with the inside-program characters'
a product that is placed in a reality show attitudes toward the product. This alignment
competition, the product may or may not be used process is driven by the consumers' extra-
as the sponsor intends it to be used, and the new program attachment to the characters. Or, the


consumer's attitude toward the product is a The first downside to using product
function of the valence and strength of the placements is that marketers may have a lack of
character-product association in the program and control over how products are portrayed or
the valence and strength of the consumer- incorporated into a scene or storyline
character attachment. As such, if a consumer (Daugherty and Gangadharbatla, 2005).
experiences a strong parasocial relationship or Products may end up being misused, ignored,
alignment with the television character, then this criticized, associated with questionable values,
will contribute to the formation and change of or used unethically. This can be especially true
consumers' attitudes toward consumption and in reality shows. So, advertisers must exert
attitude alignment with the product. (Russell greater control over product or brand
and Stern, 2006; Russell, Norman, and Heckler, appearances to ensure their prominence. Most
2004) appearances are visual or verbal but rarely both.
In addition, most brands are portrayed neutrally
Degree of Clutter around the Product and for less than five seconds. (La Ferle and
Placement Edwards, 2006)
Product placement is impacted by how much The second downside to using product
other product placement there is in the show, placements is that marketers have little if any
and product placements are prevalent with one influence over how successful media
brand appearing in every three minutes of programming will be. That is, it is difficult to
programming. Because there is so much clutter, predict where to place brands for maximum
the value of any one brand may be limited. positive exposure. Also, if one places too many
Hence, this cluttered promotional environment product placements, then consumers may feel
and product placements must be managed that they have had enough and the saturation
actively. Game shows offer the greatest may have a negative effect. Product placement
exposure and storied programming offers the cannot be so overwhelming that it detracts from
fewest and least prominent product or brand the television show, movie, or content vehicle.
appearances. (La Ferle and Edwards, 2006) (Cebrynski, 2006)
Too much product placement can annoy viewers A third downside to using product
and be viewed as unacceptable commercialism. placements is the possibility of negative
However, communication interference from character association. If the product is
competitors can be limited. This occurs because associated with a particular character, then when
the sponsor often buys a good portion of the that character falls from grace or does something
commercial time within a program. When the inappropriate, the product or brand may be
product placement is integrated into the tarnished as well. That is, the target audience
program, the realism is enhanced and the may shift its attitude about a character in such a
likelihood of viewer zapping due to clutter is way that products associated with the character
reduced. (Daugherty and Gangadharbatla, 2005; also will be diminished.
d'Astous and Seguin, 1999) In general, other The fourth downside to using product
competing products in the product category placements is the difficulty in pricing product
should not be incorporated into the placement (The Economist, 2005) Typical
programming (Panda, 2004). placement fees are based on a fairly standard
scale of expected audience size for the media
DOWNSIDE OF USING PRODUCT vehicle. This pricing method assumes that
PLACEMENT product placement exposure is equal across
There are several downsides to using product events and scenes and equal to the exposure
placements: (1) lack of control, (2) media value marketers get from a 30-second
programming may not be successful, (3) commercial. However, evidence exists
possibility of negative character association, (4) supporting the premise that how and when the
difficulty in pricing product placements, and (5) products appear in the media vehicle may be
product placement ethics. more important in determining cost and value.
(Pokrywczynski, 2005)


With product placements continuing to grow placements (e.g., brand research, purchase
in the TiVo era, businesses are having increasing intent, and word-of-mouth) and can multiply the
pressure to show a return on investment for that quantity of product placements by 4-6 times
activity. As a result, marketers are becoming without compromising the production. For
more sophisticated about measuring product example, they found that film and television
placements. Several firms offer tools that rate product placements both generate an average
product placements in terms of viewer impact response and action rate of 25-30%; hence,
and dollars and cents. For example, Nielsen is converting a product placement into consumer
creating a tool that would produce product action is remarkably high and measurable.
placement ratings. Even though Nielsen has (Song, 2007)
tracked 100,000 product placements since the One last downside of product placement is
fall of 2003, it is not easy to get product that of product placement ethics. In particular,
placement data to assess the effectiveness and some individuals question the ethics of product
price of product placements. However, some placement. Product placement ethics are
means are available. For example, weekly discussed below.
product placement ratings can be issued to
subscribers. Or, a weekly summary of top- ETHICS OF PRODUCT PLACEMENT
ranked product placements is available in the Many consumers and researchers consider
Wall Street Journal or other publications that product placements as excessive
assess some sweeps-related data. (McClellan, commercialization of the media and an intrusion
2003) And, at least a dozen other firms have into the life of the viewer. The viewer does not
established unique methods of measuring go to the movie or television to see the product
product placements. Some of the variables that placement. Rather, viewers often attend movies
are tracked in various measurement systems and television to escape the realities of life. As
include: placement foreground or background, such, marketers must make the product
oral or visual, main character or sidekick, how placement look obvious at the point of
much air time compared to competition, viewer emergence. Sometimes, the product or brand
response, awareness scale, product placement to placement is weaved into the plot, like ET's
commercial cost ratio, degree of integration, Reese's Pieces with a 65% increase in sales due
whether or not it does harm, "fit" scores, object to this placement (Balasubramanian, Karrh, and
or purpose of placement, reach, credible use of Patwardhan, 2006). If the plot connection and
product placement, and meeting of long-term the reflection of the user or character in the film
strategies and objectives. (Wasserman, 2005) are missing, the product placement often is
Regardless of which measurement system and futile. Hence, relevance of the product to the
variables are used, there will continue to be situation needs to be created by possibly
differing opinions over the value of product incorporating the placement planning at the
placements and how to measure that value. In script level. (Panda, 2004)
addition, the concern is that once marketers Product placement caters to a captive
quantify product placement value, then product audience even though the brands are shown in
placement will become a standardized their natural environment. But, viewers
commodity. generally tend to like product placements unless
One example of a company releasing ground there are too many. Product placements
breaking research for product placement typically are seen as an acceptable practice,
measurement is Visure Corp., a privately-held frank, amusing, pleasant, and dynamic. In
company based in Richmond, VA. Visure Corp. general, viewers consider product placements to
offers innovative, web-based cataloging and enhance realism, aid in character development,
measurement services that help marketers justify create historical subtext, and provide a sense of
and maximize product placement effectiveness. familiarity. However, if the audience realizes
The company's integrated product placement that the product placement was placed there, it
solution has the ability to measure what may affect their judgment and they may counter
consumers are doing in response to product argue the placed messages. (Panda, 2004)


Still, some individuals feel that product mix branded mentions into their stories
placements are sinister and should be banned or (Mandese, 2006).
at least clearly disclosed in the credits at the end An additional area of ethical controversy is
of the program (The Economist, 2005). This is pharmaceutical product placements. The FDA
particularly true for implicit product placements requires that pharmaceutical companies give
that should be avoided since they are perceived both the benefits and the side effects of their
as less ethical than the other types of product drugs. This clearly will not work for product
placements, particularly if they appear in an placements. However, the FDA does allow
information/series television program. Also, reminder ads that only mention the brand with
consumers' ethical opinions about product no other information or claims. The FDA
placements differ significantly across product currently is reviewing all promotional activity
categories, with greater concern for ethically following calls by medical groups to ban direct-
controversial products such as alcohol, to-consumer drug ads. It is not clear how
cigarettes, and guns. (d'Astous and Sequin, product placements will fit into the review.
1999) (Edwards, 2005) The drug industry association
Toys are likely products for product PhRMA has published guiding principles for
placement. But, the use of toy placements is advertising prescription medicines: the industry
very low, little better than chance. However, should adapt its television ads to what is
children's programming is highly regulated by appropriate for the audience, health and disease
the FCC and much product placement in awareness should be part of the promotion,
children's shows is essentially banned. That is, a products should be discussed with doctors
2004 APA finding states that kids under 8 have before drug campaigns are launched, and low-
difficulty distinguishing ad content from income groups should be provided with
program content. However, the Children's information about health assistance programs.
Advertising Review Unit makes voluntary rules (Business Monitor International, 2007)
regarding advertising to children, and companies
typically honor these rules. While some SUMMARY
companies have product placement strategies or Product placement has become an
policies, television generally offers few venues increasingly popular way of reaching potential
for kids and toy product placements. As an customers who are able to zap past commercials.
interesting research note, 3-5 year olds were To reach these retreating audiences, advertisers
asked to sample identical fast food. One bag use product placements in more clever, effective
had a McDonald's golden arches logo while the ways that do not cost too much. The result is
other bag was unmarked. With exactly the same that the average consumer is exposed to 3,000
food and only a different bag, children preferred brands a day including even billboards, T-shirts,
the branded version. Even young kids are tattoos, schools, doctor's office, ski hills, and
swayed by advertising and brand placements. sandy beaches (Nelson and McLeod, 2005). A
(Business Wire, 2007; Auty and Lewis, 2004; wise caveat to consider for product and brand
Edwards, 2006) placement is "Our philosophy is if the brand
Newspapers also are under ethical scrutiny. doesn't make the show better, the brand doesn't
An important ethical issue is whether or not make the show. People must not notice the
newspapers should be allowed to use product integration, but they must remember it. That's
placements. While they are under pressure to do the test." (Stringer, 2006, p. 1) The ideal
so, they have held out so far to not blur the line product placement situation is win-win-win-win:
between content and ads. (The Economist, customer gets to know about new and
2005) The fear is that if product placements established products and their benefits, client
seep into newspapers and news magazines, then gets relatively inexpensive branding of their
editorial content may be jeopardized and product, media vehicle gets a brand for free or
credibility violated (Callahan, 2004). Still, rigid can reduce its production budget, and the
editorial content policies appear to be easing up product placement agency gets paid for bringing
as marketers and agencies pressure publishers to the parties together.


REFERENCES Edwards, J. (2005), "Drug Product Placements


Flying Under FDA's Radar," Brandweek,
Argan, M., Velioglu, M.N., and Argan, M.T. New York, 46 (41), 14.
(2007), "Audience Attitudes Towards Edwards, J. (2006), "In Product Placement
Product Placement in Movies: A Case from Game, Toys Largely on the Sidelines,"
Turkey," Journal of American Academy of Brandweek, 47 (14), 18.
Business, Cambridge, 11 (1), March, 161- "Exclusive PQ Media Research," Business Wire,
168. New York, August 16, 2006, 1.
Auty, S., and Lewis, C. (2004), "Exploring Friedman, W. (2003), "Intermedia Measures
Children's Choice: The Reminder Effect of Product Placements," TelevisionWeek,
Product Placement," Psychology & December 15, 22 (50), 4.
Marketing, 21 (9), 697. Hartley, M. (2007), "Product Placement at a
Balasubramanian, S.K. (1994), "Beyond Whole New Level," The Globe and Mail,
Advertising and Publicity: Hybrid August 3.
Messages and Public Policy Issues," Journal Holbrook, M.B. and Grayson, M.W. (1996),
of Advertising, 23 (4), 29-46. "The Semiology of Cinematic Consumption:
Balasubramanian, S.K., Karrh, J.A., and Symbolic Consumer Behavior in Out of
Patwardhan, H. (2006), "Audience Response Africa," Journal of Consumer Research,
to Product Placements: An Integrative December, 374-381.
Framework and Future Research Agenda," "Industry News – Product Placement on the
Journal of Advertising, Provo, 35 (3), 115- Increase," Business Monitor International,
142. July 31, 2007.
"Branded Content Seeks a Second Act," "In-Game Advertising: Target Acquired," New
Brandweek, New York, May 28, 2007, 48 Media Age, October 27, 2005, 20.
(22), 4-5. La Ferle, C. and Edwards, S.M. (2006), "Product
Brennan, I., and Babin, L.A. (2004), "Brand Placement," Journal of Advertising, 35 (4),
Placement Recognition: The Influence of 65-87.
Presentation Mode and Brand Familiarity," Lord, K.R., and Gupta, P.B. (1998), "Product
Journal of Promotion Management, Placements in Movies: The Effect of
Binghamton, 10 (1,2), 185. Prominence and Mode on Audience Recall,"
"Business: Lights, Camera, Brands; Product Journal of Current Issues and Research in
Placement," The Economist, London, 377 Advertising, 20, 47-59.
(8450), October 29, 2005, 81. Mandese, J. (2006), "When Product Placement
Callahan, S. (2004), "Marketers Explore Product Goes Too Far," Broadcasting & Cable, New
Placement," B to B, Chicago, 89 (13), 1-3. York, 136 (1), 12.
Cebrzynski, G. (2006), "Lights! Camera! McClellan, S. (2003), "Grading Product
Product Placement!" Nation's Restaurant Placement," Broadcasting & Cable,
News, New York, December 4, 40 (49), 1-5. December 15, 133 (50), 1.
d'Astous, A. and Seguin, N. (1999), "Consumer Nelson, M.R., and McLeod, L.E. (2005),
Reactions to Product Placement Strategies in "Adolescent Brand Consciousness and
Television Sponsorship," European Journal Product Placements: Awareness, Liking and
of Marketing, Bradford, 33 (9/10), 896. Perceived Effects on Self and Others,"
Daugherty, T. and Gangadharbatla, H. (2005), International Journal of Consumer Studies,
"A Comparison of Consumers' Responses to 29 (6), 515-528.
Traditional Advertising and Product "Old McDonald's Has a Hold on Kids' Taste
Placement Strategies: Implications for Buds," Business Wire, New York, August 6,
Advertisers," American Marketing 2007.
Association Conference Proceedings, Panda, T.K. (2004), "Consumer Response to
Chicago, 16, 24. Brand Placements in Films Role of Brand
Congruity and Modality of Presentation in
Bringing Attitudinal Change Among


Consumers with Special Reference to Brand Stringer, K. (2006), "Advertising: Pop-in


Placements in Hindi Films," South Asian Products: Images Are Inserted into Popular
Journal of Management, New Delhi, 11 (4), Television Shows," Knight Ridder Tribune
October-December, 7-26. Business News, Washington, February 16, 1.
Pokrywczynski, J. (2005), "Product Placement Thomasch, P. (2007), "Product Placement
in Movies: A Preliminary Test of an Spending Seen Up 30% in '07," Reuters
Argument for Involvement," American News, 14, March 14, 37.
Academy of Advertising Conference Tsai, M., Liang, W., and Liu, M. (2007), "The
Proceedings, Lubbock, 40-48. Effects of Subliminal Advertising on
"Product Placement: A Place for Everything," consumer Attitudes and Buying Intentions,"
New Media Age, London, August 11, 2005, International Journal of Management, 24
18. (1), 3-15.
Russell, C.A. (1998), "Towards a Framework of "U.S. Advertising Spending Declined 0.6% in
Product Placement: Theoretical First Quarter 2007, Nielsen Monitor-Plus
Propositions," Advances in Consumer Reports," PR Newswire, New York, June 12,
Research, 25, 357-362. 2007.
Russell, C.A. (2002), "Investigating the van Reijmersdal, E.A., Neijens, P.C., and Smit,
Effectiveness of Product Placements in E.G. (2007), "Effects of Television Brand
Television shows: The Role of Modality Placement on Brand Image," Psychology
and Plot Connection Congruence on Brand and Marketing, 24 (5), 403-420.
Memory and Attitude," Journal of Wasserman, T. (2005), "How Much Is this Shot
Consumer Research, 29, 306-318. Worth?" Brandweek, New York, 46 (3), 21-
Russell, C.A., Norman, A.T., and Heckler, S.E. 25.
(2004), "The Consumption of Television Weaver, A. (2007), "Product Placement and
Programming: Development and Validation Tourism-Oriented Environments: An
of the Connectedness Scale," Journal of Exploratory Introduction," International
Consumer Research, 31 (2), 150-161. Journal of Tourism Research, 9, 275-284.
Russell, C.A., and Stern, B.B. (2006),
"Consumers, Characters, and Products: A
Balance Model of Sitcom Product
Placement Effects," Journal of Advertising,
Provo, 35 (1), Spring, 7-22.
Schiller, G. (2007), "Digest: Product Placement
Gains," Film Marketing – News, March 16.
Shapiro, M. (1993), "Product Placements in
Motion Pictures," Working Paper, North
Western University, NY.
Snoddy, R. (2006), "Time to Formalise Product
Placement," Marketing, London, October 4,
18.
Song, M.K. (2007), "Visure Corp. Releases
Groundbreaking Research for Product
Placement from Its iVisure(TM)
Technology," PR Newswire, New York,
June 13.
Stephen, A.T. and Coote, L.V. (2005), "Brands
in Action: The Role of Brand Placements in
Building Consumer-Brand Identification,"
American Marketing Association
Conference Proceedings, Chicago, 16, 28.

193
Proceedings of the 2008 IEMS Conference

New Marketing Public Relations Trend: Customer as a Business Partner in Corporate Innovation Strategies

Banu Karsak and Elgiz Yılmaz
Galatasaray University
bkarsak@gsu.edu.tr, elyilmaz@gsu.edu.tr

Abstract

The role of customers' advice as innovation consultants offers valuable experience to executives, providing data and strategies that enhance a corporation's creativity. This study provides a quantitative and qualitative survey of how consumers evaluate this new customer-based marketing public relations trend, using Turkish innovation cases selected according to "The Most Successful Turkish Brands in 2006".

1. Introduction

Marketing focuses on consumers, as in consumer relations, which is part of public relations. Traditional consumer relations tactics, such as product-oriented tactics, can work hand in hand with marketing tactics such as innovation strategies.
New product ideas can come from many sources: customers, scientists, employees, competitors, channel members and top management.
Today the new business environment makes customer-based concepts possible in place of classical marketing concepts. Corporations that achieve high customer retention and high customer profitability aim for the right product (or service), to the right customer, at the right place, at the right time, through the right channel, to satisfy the customer's need or desire [1]. Current economic trends and the impact of technology on business life have put an end to the product-based management strategy. Companies grow and prosper when they efficiently meet the needs of customers. As the world moves into the new millennium, four particular changes are reshaping the environment of business: the globalization of markets, changing industrial structures, the information revolution and rising consumer expectations.

2. The theoretical framework

2.1. The theoretical framework of marketing public relations

Integrated Marketing Communications (IMC), according to the American Marketing Association, is "a planning process designed to assure that all brand contacts received by a customer or prospect for a product, service or organization are relevant to that person and consistent over time". The definitions of the IMC concept are many and varied. The IMC concept first emerged in the 1980s and developed relatively slowly into the middle of the next decade. Smith et al. (1997) argued for three different definitions of IMC, each with common elements. These grouped definitions are as follows: [2]
1- those that refer to all marketing communications
2- a strategic management process based on economy, efficiency and effectiveness
3- the ability to be applied within any type of organization.
The marketing communications industry often uses terms such as advertising, public relations, sales promotion, direct mail or sponsorship to describe what it does. The relationship between the public relations and marketing functions has always been a somewhat ambiguous and controversial one. The three main pillars of IMC are public relations, advertising and marketing. Although these three disciplines have much in common, much also differentiates them. Definitions for each discipline are as follows: [3]
Advertising is the use of controlled media (media in which one pays for the privilege of dictating message content, placement, and frequency) in an attempt to influence the actions of target publics.
Marketing is the process of researching, creating, refining and promoting a product or
service to target consumers. Marketing promotion disciplines include sales promotions (such as coupons), personal selling, direct marketing and, often, aspects of advertising and public relations.
Public relations is the values-driven management of relationships between an organization and the publics that can affect its success.
Advertising, marketing and public relations follow the same process: research, planning, communication and evaluation. But unlike marketing, public relations is focused on many publics, not just on consumers. And unlike advertising, public relations does not control its messages by purchasing specific placements for them. [3]
Both marketing professionals and academics have tended to treat public relations as simply a subset of the marketing mix, a view which may ascribe to public relations a role largely concerned with generating publicity in support of marketing goals. Harris reckons that the 1980s saw Marketing Public Relations emerge as a distinctive new promotional discipline comprising specialized applications of public relations techniques that support marketing activities. Harris states that this specialized form of public relations should be differentiated from general public relations and from corporate public relations. Harris proposes that MPR is likely to become a separate marketing management function distinct from the broader area of corporate public relations. But he also states that these two functions will need to maintain a close working relationship, because they both involve similar skills and experience and because of the perceived need to integrate marketing and corporate objectives. From this viewpoint, Marketing Public Relations would be treated as part of the marketing communications mix [4].
Grunig and Grunig [5] concluded that marketing communication theories differ significantly from public relations theories when applied in an integrated communication department.

2.2. Customer based marketing public relations

By the year 2000, the information revolution was beginning to offer a better alternative: mass customization. The customer is the focal point of marketing, sales, contacts, products, services, time, and profitability. When we define "customers" we are speaking of multiple types of customer groups: [1]
Consumer - the retail customer who buys the final product or service (an individual or a family).
Business to business - the customer who purchases your product and adds it to its own product for sale to another customer, or a business using your product within its own organization to increase profitability or services.
Channel, distributor, franchisee - a person or organization that does not work directly for you and is (usually) not on your payroll, which buys your product to sell or utilizes it as your representative/location in that area.
Internal customer - a person or business unit within your enterprise that requires your product or service to succeed in its business goals.
Choosing the right customers is important because some customers do not offer the potential to create value, either because the costs of serving them exceed the benefits they generate, or because the company does not have the appropriate bundle of skills to serve them effectively [6]. The company wants long-term relationships with its chosen customers because loyal customers make faster and more profitable growth possible.
Companies grow and prosper when they efficiently meet the needs of customers. As the world moves into the new millennium, four particular changes are reshaping the environment of business: the globalization of markets, changing industrial structures, the information revolution and rising consumer expectations. That is why the 4 Ps of marketing have been replaced by 4 Cs [7]. This process starts with understanding the market for the customers. For this, managers need a detailed understanding of the 4 Cs, which define the valued customers and how to create a competitive advantage.
According to Kotler, customer satisfaction with a product or service may depend on a range of additional or augmented product benefits that the customer may derive over and above those that comprise the core product or service [3].
Table 1: 4 C in Marketing [7]

  4P           4C
  Product      Customer needs & wants
  Price        Cost (to the customer)
  Place        Convenience
  Promotion    Communication

2.3. Corporate innovation strategy and corporate citizenship

A significant boost to corporate citizenship initiatives was given in 1996 when President Clinton called to Washington a group of leading business people to discuss the notion of corporate citizenship and social responsibility. The President encouraged the business leaders "to do well by their employees as they make money for their shareholders" [8]. According to Carroll, corporate citizenship has four faces: an economic face, a legal face, an ethical face and a philanthropic face [9]. Good corporate citizens are expected to:
- Be profitable (carry their own weight or fulfill their economic responsibility)
- Obey the law (fulfill their legal responsibilities)
- Engage in ethical behavior (be responsive to their ethical responsibilities)
- Give back through philanthropy (engage in corporate contributions).
Corporate citizenship addresses the relationship between companies and all their important stakeholders, not just employees.
Once a company has carefully segmented the market, chosen its target customer groups, identified their needs, and determined its desired market positioning, it is ready to develop and launch appropriate new products. Marketing management plays a key role in the new product development process.
Every company must carry on new product development. Replacement products must be created to maintain or build sales. Furthermore, customers want new products, and competitors will do their best to supply them.
New product development can take two forms. The company can develop new products in its own laboratories, or it can contract with independent researchers or new product development firms to develop specific products for the company [7].
Nowadays, we can talk about another new marketing management trend in which the customer is considered a business partner in corporate innovation strategies.
The consulting firm Booz, Allen & Hamilton has identified six categories of new products in terms of their newness to the company and to the marketplace: [10]
- New-to-the-world products: new products that create an entirely new market.
- New product lines: new products that allow a company to enter an established market for the first time.
- Additions to existing product lines: new products that supplement a company's established product lines, such as new package sizes, flavours, etc.
- Improvements and revisions of existing products: new products that provide improved performance or greater perceived value and replace existing products.
- Repositioning: existing products that are targeted to new markets or market segments.
- Cost reductions: new products that provide similar performance at lower cost.
Companies' existing products are vulnerable to changing consumer needs and tastes, new technologies, shortened product life cycles, and increased domestic and foreign competition.
New products may sometimes fail. Several factors may be responsible [11]:
1- A high-level executive may push a favourite idea through in spite of negative market research findings.
2- The idea may be good, but the market size is overestimated.
3- The actual product is not well designed.
4- The new product is incorrectly positioned in the market, not advertised effectively, or overpriced.
5- Development costs are higher than expected.
6- Competitors fight back harder than expected.
That is why the role of customers' advice as innovation consultants offers valuable
experience to executives, providing data and strategies that enhance a corporation's creativity. This new trend should be one of the key success factors, because a well-defined concept prior to development, in which the company carefully defines and assesses its target market and product requirements, assures many benefits. Kotler affirms: "The teamwork aspect is particularly important. New product development is most effective when there is teamwork among research & development, engineering, manufacturing, purchasing, marketing, finance and customer relationship from the beginning of the innovation process. The product idea must be researched from a marketing point of view and a specific cross-functional team must guide the project throughout its development." [7] p. 309
Successful new product development requires the company to establish an effective organization for managing this innovative process. Top management is highly accountable for the success of new products. It should define the business domains, product categories and customer feedback that the company wants to emphasize.

3. New product development decision and development process

New product ideas can come from many sources: customers, scientists, employees, competitors, channel members and top management.
The marketing concept holds that customers' needs and wants are the logical place to start the search for innovation ideas. Hippel has shown that the highest percentage of ideas for new industrial products originates with customers [12].
Companies can identify customers' needs and wants through surveys, projective tests, focused group discussions, and suggestion and complaint letters from customers. Many of the best ideas come from asking customers to describe their problems with current products.
Arzum, a Turkish producer of small electric white goods, for example, asks buyers at large points of purchase what they like and dislike about its products and what improvements could be made. It then evaluates these demands to develop new products or to modify existing ones. Arzum General Manager Murat Kolbaşı said that their market share increased from 0% to 20% with their product "Arzum Ehlikeyif Kettle". He noted that this product was a new product idea that came from customers who often drink tea and who would also like to make Turkish coffee with the same apparatus.
Gençturkcell, a brand of one of the biggest Turkish GSM operators, conducts its business projects with young people through the "gnçtrkcll club" in "The Voice of Youth" project. The 13 million members of this club help to manage organizational innovation strategies. Turkcell CRM-Based Marketing Department Chief Burak Sevilengül said that, in order to better understand their target audience, they collaborated with 150 young people (the average age is between 16 and 25). He added that the "lovebird" campaign was designed with the club members' comments and achieved great success.
Companies also rely on their scientists, engineers, designers and other employees for new product ideas. Successful companies have established a company culture that encourages every employee to seek new ways of improving the company's production, products and services.
Akbank, a Turkish bank, for example, evaluated its customers' feedback about their credit demands. Customers wanted to obtain bank credit without going to the bank's branches, without waiting a long time in the branch and without completing long forms. Thus, the marketing and R&D managers worked together to create a fast and easy way to apply for credit. They introduced "mobile credit", which allows customers to make their credit applications using their mobile phone. Akbank Individual Banking Operations Marketing Department Chief Cem Muratoğlu said that CRM-based strategies play an important role in competing banks' successes, and that the "mobile credit" application, designed from customers' feedback, was a great success: 700,000 people applied for "Akbank mobile credit" in 2006 alone.
Companies can also find good ideas by examining their competitors' products and services. They can learn from distributors, suppliers and sales representatives what competitors are doing. These parties are often the first to learn about competitive developments.
They can find out what customers like and dislike in their competitors' new products.
If the product concept passes the business test, after the evaluation of the different sources of new product ideas, it moves to R&D and/or engineering to be developed into a physical product. These departments will develop one or more physical versions of the product concept. The goal is to find a prototype that consumers see as embodying the essential elements described in the product-concept statement and that performs safely under normal use and conditions.
Producers and R&D people must not only design the product's required functional characteristics but also know how to communicate its psychological aspects through physical cues. How will consumers react to different colours, sizes, weights and other physical characteristics? Marketers need to supply producers with information on what attributes consumers seek and how consumers judge whether these attributes are present.
When the prototypes are ready, they must be put through functional tests and tests with consumers, who are now business partners in corporate innovation strategies. Consumer testing can take a variety of forms, from bringing consumers into a laboratory to giving them samples to use in their homes. [7] p. 327

3.1. The consumer adoption process

In this study, we asked: "How do final and potential customers learn about new products, try them, and adopt or reject them?" Management must understand this consumer adoption process to build an effective strategy for market share. Kotler affirms that "adoption is an individual's decision to become a regular user of a product". [7] p. 335
The consumer adoption process is later followed by the consumer loyalty process, which is the concern of the established producer.
An innovation refers to any good, service or idea that is perceived by someone as new. [13] The idea may have a long history, but it is an innovation to the person who sees it as new. Rogers defines the innovation diffusion process as "the spread of a new idea from its source of invention or creation to its ultimate users or adopters". [14]
Adopters of new products have been observed to move through the following five stages:
- Awareness: the consumer becomes aware of the innovation but lacks information about it.
- Interest: the consumer is stimulated to seek information about the innovation.
- Evaluation: the consumer considers whether to try the innovation.
- Trial: the consumer tries the innovation to improve his or her estimate of its value.
- Adoption: the consumer decides to make full and regular use of the innovation.
Companies seek to estimate variables such as trial, first repeat, adoption, and purchase frequency of consumer products. Therefore they conduct consumer goods market testing. Many of the best ideas come from asking customers, as users, to describe their problems with current products. In this context, Rogers sees the five adopter groups as differing in their value orientations: [8] innovators are willing to try new ideas at some risk. Early adopters are guided by respect; they are opinion leaders in their community and adopt new ideas early but carefully. The early majority are deliberate; they adopt new ideas before the average person, although they rarely are leaders. The late majority are sceptical; they adopt an innovation only after a majority of people have tried it. Finally, laggards are tradition-bound; they are suspicious of change, mix with other traditional people, and adopt the innovation only when it has taken on a measure of tradition itself.

4. Methodology

The purpose of this study is to examine the following questions:
What are the main stages in customer-oriented new product development, and how can they be managed better?
What factors affect the rate of consumer adoption of newly launched products?
This paper examines the marketing public relations literature and identifies links between the concept of consumers as business partners and corporate innovation strategies. An empirical study was designed to evaluate the essences of experience relating to consumer-based
innovation management and the perception of this new trend among final consumers. A representative sample of 100 randomly selected people was to be interviewed with a questionnaire. The form consisted of 3 questions on demographic parameters and 12 questions on respondents' perceptions of corporate innovation experiences. However, only 73 of the sample responded; therefore 73 interview transcripts were analyzed and coded in SPSS, and the essences of experience were collectively synthesized in the description. We used descriptive statistical techniques such as the chi-square test, correlation and frequencies to evaluate our findings. We measured the demographic variables gender, age and job. Our sample consisted of 38 women (52%) and 35 men (48%).

5. Findings

Table 1. Gender

            Frequency   Percent   Valid Percent
  Women     38          52.1      52.1
  Men       35          47.9      47.9
  Total     73          100.0     100.0

Table 2. Age

            Frequency   Percent   Valid Percent
  18-25     16          21.9      21.9
  26-35     32          43.8      43.8
  36-45     15          20.5      20.5
  46-55     1           1.4       1.4
  56-65     5           6.8       6.8
  Over 65   4           5.5       5.5
  Total     73          100.0     100.0

Our data suggest that 32 (44%) of the sample were between 26 and 35 years old, 16 (22%) were between 18 and 25, and 15 (21%) were between 36 and 45.

Table 3. Transmitting feed-back

            Frequency   Percent   Valid Percent
  Yes       52          71.2      71.2
  No        21          28.8      28.8
  Total     73          100.0     100.0

We determined that 52 (71%) of our respondents transmit their feedback about products to producers, while 21 (29%) prefer not to transmit feedback. These data suggest that the majority of consumers are conscious and feel responsible for the products and services they use.

Table 4. Preference of channel

                                  Frequency   Percent   Valid Percent
  Customer representative (1)     6           8.2       8.2
  Call centre (2)                 12          16.4      16.4
  E-mail (3)                      10          13.7      13.7
  Fax (4)                         1           1.4       1.4
  1-2                             4           5.5       5.5
  1-3                             6           8.2       8.2
  2-3                             1           1.4       1.4
  2-4                             1           1.4       1.4
  3-4                             1           1.4       1.4
  1-2-3                           8           11.0      11.0
  1-2-3-4                         2           2.7       2.7
  N/A                             21          28.8      28.8
  Total                           73          100.0     100.0

Our findings suggest that the largest groups of respondents prefer call centres (12, 16%) and e-mail (10, 14%) to transmit their feedback about products and/or services. We also measured that 8 (11%) of the respondents prefer to use customer representatives, call centres and e-mail together. We may say that customers look for rapid ways to communicate with producers, and e-mail is considered one of the most preferred channels.
Table 5. Age & transmitting feed-back (crosstabulation)

  Age       Yes            No             Total
  18-25     10 (62.5%)     6 (37.5%)      16 (21.9%)
  26-35     24 (75.0%)     8 (25.0%)      32 (43.8%)
  36-45     12 (80.0%)     3 (20.0%)      15 (20.5%)
  46-55     1 (100.0%)     0 (0.0%)       1 (1.4%)
  56-65     4 (80.0%)      1 (20.0%)      5 (6.8%)
  Over 65   1 (25.0%)      3 (75.0%)      4 (5.5%)
  Total     52 (71.2%)     21 (28.8%)     73 (100.0%)

  (Cell percentages are within each age group; the Total column shows each group's share of the sample.)

Our data suggest that younger people transmit their feedback more than the others: 33% (24) of the sample, aged 26-35, and 16% (12), aged 36-45, are the groups most interested in communicating with producers. We may comment that the young population is more active and busier in everyday life and more interested in new products, services or trends that make life easier.
% Frequ Valid
14,3
within 23,1% 20,5% ency Percent Percent
%
TRANS Valid yes 51 69,9 69,9
% of 4,1
16,4% 20,5% no 22 30,1 30,1
Total %
46- Count Tot 73 100,0 100,0
1 0 1
55
% We also measured that the majority of our
100,0 100,0
within ,0% samples 51 (70%) know the meaning of
% %
AGE “innovation” concept. Our findings are similar
%
with the results of “preference of channel”;
within 1,9% ,0% 1,4%
TRANS
because 28 of our samp+le prefer to
% of communicate with producers via e-mail, which
1,4% ,0% 1,4% is a innovative product. In addition to this,
Total
56- Count these are consistent with our results about age-
4 1 5
65 transmitting feed-back cross ratio: Our data
%
20,0 100,0
suggest that young people (24 - 33%) transmit
within 80,0% their feed-back more than the others. Thus,
% %
AGE young population know new innovative trends
% and they use them currently.
4,8
within 7,7% 6,8%
%
TRANS
% of 1,4
Table 8. Knowledge Arzum kettle product
5,5% 6,8% & innovation background
Total %
65- Count Frequ Valid
üzer 1 3 4 ency Percent Percent
i Valid yes 65 89,0 89,0
% no 8 11,0 11,0
75,0 100,0
within 25,0%
% % Tot 73 100,0 100,0
AGE

200
Proceedings of the 2008 IEMS Conference

Arzum * Arzum innovation crosstabulation

  Knows the product    Knows the idea came from customers
                       Yes            No             Total
  Yes                  14 (21.5%)     51 (78.5%)     65 (89.0%)
  No                   0 (0.0%)       8 (100.0%)     8 (11.0%)
  Total                14 (19.2%)     59 (80.8%)     73 (100.0%)
  (Row percentages in parentheses.)

We determined that 89% (65) of our sample know the new product "Arzum Kettle", but do not know (81%, 59) that it was a new product idea that came from Arzum's final customers.

Table 9. Knowledge of the Gençturkcell product & its innovation background

            Frequency   Percent   Valid Percent
  Yes       67          91.8      91.8
  No        6           8.2       8.2
  Total     73          100.0     100.0

Gnctrkcll * Gnctrkcll innovation crosstabulation

  Knows the product    Knows the idea came from customers
                       Yes            No             Total
  Yes                  15 (22.4%)     52 (77.6%)     67 (91.8%)
  No                   1 (16.7%)      5 (83.3%)      6 (8.2%)
  Total                16 (21.9%)     57 (78.1%)     73 (100.0%)
  (Row percentages in parentheses.)

We determined that 92% (67) of our sample know the new product "Gençturkcell", but do not know (78%, 57) that it was a new product idea that came from the final customers of the Turkcell GSM operator. These findings are similar to the "knowledge of the Arzum kettle" results: 89% of the respondents know the product, but 81% of them do not know that the product idea came from customers.

Table 10. Knowledge of the Akbank mobile credit product & its innovation background

            Frequency   Percent   Valid Percent
  Yes       47          64.4      64.4
  No        26          35.6      35.6
  Total     73          100.0     100.0
Akbank * Akbank innovation crosstabulation

  Knows the product    Knows the idea came from customers
                       Yes            No              Total
  Yes                  11 (23.4%)     36 (76.6%)      47 (64.4%)
  No                   0 (0.0%)       26 (100.0%)     26 (35.6%)
  Total                11 (15.1%)     62 (84.9%)      73 (100.0%)
  (Row percentages in parentheses.)

We determined that 64% (47) of our sample know the new product "Akbank Mobile Credit", but do not know (85%, 62) that it was a new product idea that came from Akbank's final customers. These findings are similar to the "knowledge of the Arzum kettle" and "knowledge of Gençturkcell" results.
Regarding these last three tables, we should note that this adopter classification suggests that an innovating firm should research the demographic, psychographic and media characteristics of innovators and early adopters and direct communications specifically to them.

6. Conclusion

Successful new product development requires the company to establish an effective organization for managing the development process. Companies can choose to use product managers, new product managers, or customers as business partners in corporate innovation strategies. This theory is coherent with our findings, because our findings suggest that the selected companies cooperate with their customers to manage their innovation strategies. The majority of consumers are conscious and aware of their responsibility as consumers and of the services they use. One of the results of this study is that companies do not promote the "consumer based strategy" that would help to protect and enhance their image.
Eight stages are involved in the new product development process: idea generation, idea screening, concept development and testing, marketing strategy development, business analysis, product development, market testing, and commercialization. The purpose of each stage is to determine whether the innovation idea should be dropped or moved to the next stage. The company wants to minimize the chances that good ideas will be dropped or that bad ones will be developed. That is why the new trend among marketers is to profit from their consumers as consultants for corporate innovation strategies.

Acknowledgments

This research has been financially supported by the Galatasaray University Research Fund.

7. References

[1] Ronald S. Swift, Accelerating Customer Relationships: Using CRM and Relationship Technologies, Prentice Hall, USA, 2001, p. XVII.
[2] Tench, R. and L. Yeomans, Exploring Public Relations, Prentice Hall, England, 2006.
[3] Guth, D.W. and C. Marsh, Public Relations: A Values-Driven Approach, Pearson Education, USA, 2006.
[4] P.J. Kitchen and D. Moss, "Marketing and Public Relations", Journal of Marketing Communications, 1995, pp. 105-119.
[5] J. Grunig and L. Grunig, "The relationship between public relations and marketing in excellent organizations: evidence from the IABC", Journal of Marketing Communications, 4:3, 1998, pp. 141-162.
[6] Peter Doyle, Value-Based Marketing: Marketing Strategies for Corporate Growth and Shareholder Value, John Wiley & Sons, Ltd, Oxford, 2000, p. 7.
[7] Kotler, P. and G. Armstrong, Principles of Marketing, 9th ed., Prentice Hall, Upper Saddle River, New Jersey, 2001.
[8] Archie B. Carroll, "The Four Faces of Corporate Citizenship", Business and Society Review, 100/101, 1998, pp. 1-7.
[9] Archie B. Carroll, "The Pyramid of Corporate Social Responsibility: Toward the Moral Management of Organizational Stakeholders", Business Horizons, July-August, 1991, pp. 39-48.
[10] New Products Management for the 1980s, Booz, Allen & Hamilton, New York, 1982.
[11] Kotler, P., Marketing Management: Analysis, Planning, Implementation and Control, International Edition, 9th ed., Prentice Hall, Upper Saddle River, New Jersey, 1997.
[12] Eric von Hippel, "Lead Users: A Source of Novel Product Concepts", Management Science, July 1986, pp. 791-805. Also see his The Sources of Innovation (New York: Oxford University Press, 1988); and "Learning from Lead Users", in Marketing in an Electronic Age, ed. Robert D. Buzzell (Cambridge, MA: Harvard Business School Press, 1985), pp. 308-317.
[13] Cooper, Robert G. and Kleinschmidt, Elko J., New Products: The Key Factors in Success, American Marketing Association, Chicago, 1990.
[14] Rogers, Everett M., Diffusion of Innovations, Free Press, New York, 1962.
Exploration of Factors Affecting Online Social Networking Services

Joycelyn Finley-Hervey, Tiki L. Suarez-Brown, Forrest Thompson
Florida A&M University
joycelyn.finley@famu.edu, tiki.suarez@famu.edu, forrest.thompson@famu.edu

Abstract

The driving communication tools among millennials have been MySpace and Facebook. With these new mediums, individuals who utilize these online social services are faced with opportunities and challenges. This paper seeks to assess students' perceptions and explore factors that affect the utilization of social networking. The paper concludes with implications for utilizing social networking tools more effectively.

1. Introduction

Social networking services focus on the building of online communities of people who share interests and activities, or who are interested in exploring the interests and activities of others. MySpace and Facebook, the most popular of these websites, have been around for less than ten years. They have member bases of more than 360 million. Beyond these numbers, independent surveys have confirmed a fact of life in America: students are addicted to these new "social clubs."
MySpace was launched in 1999 and offers an interactive, user-submitted network of friends, personal profiles, blogs, groups, photos, music and videos internationally. MySpace is currently the world's sixth most popular website and the third most popular website in the United States. Currently, MySpace has over 300 million active users. MySpace allows users to customize their profile pages and gives them the option of restricting certain personal data so that it is visible only to those on their friends list. Users can depict their mood, make comments for all viewers to read, and post and sell music, which has proven popular among MySpace users.
Facebook was launched in 2004 at Harvard University as a networking site for students to get to know others on campus. Within months, the site was open to all Ivy League schools and shortly thereafter expanded to most American and Canadian universities. In 2006, high schools and some large companies were included in Facebook. Since then, it has expanded to countries around the world. Currently, Facebook has over 60 million active users. Users can select to join one or more participating networks, such as a school, place of employment, geographic region, or social group. The viewing of a user's detailed profile data is restricted to users from the same network or confirmed friends. For example, the "status" feature allows users to inform their friends and the Facebook community of their current whereabouts and actions, and the "events" feature allows users to let friends know about upcoming events in their community and to organize social gatherings.
By 2005, MySpace was reportedly getting more page views than Google, with Facebook rapidly growing in size. In 2007, Facebook began allowing externally
developed add-on applications, thus linking social networks to social networks. It is estimated that there are over 200 social networks, with MySpace and Facebook being the most popular. Given the popularity of online social networking, there is a need to understand the associated promises and downfalls, which are detailed in the next section.

2. Opportunities and Challenges of Online Social Networking

2.1. Social Networking Opportunities

There are several opportunities and challenges in utilizing social networking sites. Based on interviews conducted with executives from communications firms, Sangaran (2007) reports that social networking presents the following opportunities:

1. Helps an individual keep in touch with acquaintances, friends and family as well as make new friends.
2. Generates fun and entertainment, as Facebook has games and different applications to play with.
3. Gives updates on current issues and allows you to know what your friends and family think of a specific topic.
4. Offers new pictures posted by friends and family, to download and keep.
5. Allows an individual to update job details and other personal details like age and marital status.
6. Supplies non-invasive assurances. Users can block profiles or show them only to those they choose.

2.2. Social Networking Challenges

Contrary to the aforementioned factors, which mostly relate to personal matters, challenges cover a range of operational, personal and professional aspects. The items below pose key social networking challenges:

1. Becoming familiarized with online social networks and applications may be difficult in the initial stages (Sangaran, 2007).
2. Spending too much time on social networks for personal reasons can affect work productivity (Perkins, 2008); consequently, class performance may suffer.
3. Posting some personal information or pictures on sites could be damaging to career opportunities (Perkins, 2008), because companies often search online networks to obtain information about potential hires or competitors.
4. Introducing viruses, worms and spyware into computers, because online networks have limited restrictions on links or content (Perkins, 2008).
5. Being targeted by virtual strangers who want to get acquainted with you (Sangaran, 2007). Predators and stalkers are a very real concern in today's society.

Millennials' perceptions of these opportunities and challenges will be discussed in the next section.
3. Methodology

The survey's target population included undergraduate and graduate students taking courses within the School of Business and Industry at Florida A&M University. These students were asked to complete a survey of 34 questions. Questions consisted of multiple choice and short answer to assess factors related to perceptions of and reasons for using two online social networking sites: Facebook and MySpace. Surveys were administered to 309 individuals within the following seven undergraduate and graduate courses: Introduction to Business Systems, Systems Theory and Design, Organizational Behavior, Organizational Theory and Behavior, Intermediate Accounting Part I, Intermediate Accounting Part II and Governmental Accounting. All students were asked to complete the survey only once. The survey was divided into the following three major areas:

Section #1: Personal information - Questions (Q)1-4
Section #2: General utilization of Facebook/MySpace - Questions Q5-13
Section #3: Specific use while on Facebook/MySpace and perceptions - Questions Q14-34

The first section gave students the opportunity to provide personal information such as classification, gender and major. The second section addressed questions that pertained to the use of Facebook/MySpace, its importance, and what students liked least and most. The third section listed questions pertaining to inappropriate material (sexual, racial), privacy issues, relationship discussions and profile commenting/viewing.

3.1 Results

Sixty percent of the students who completed the survey were male. Over 95% of the students were between the ages of 18 and 24, and 99% of the students stated they were of Black/Non-Hispanic nationality. Over 98% of the individuals were students in the School of Business and Industry. 86% of the students stated they used online social networking tools such as Facebook and MySpace, and that they used them to keep in touch, to network, to supply information and for business purposes. Additional write-in answers included using the tools to view pictures and for fun. Over half of the respondents stated they had been using these tools for only 1-2 years, although over 50% of the students surveyed stated they use these tools at least once or twice a day (several times a day, 21%; once or twice a day, 31%). In addition, results from the survey revealed that although most students utilize Facebook and MySpace, only 40% stated they feel it is somewhat important.
It is also interesting to note that students who stated they do not use these online social networking tools do not because the tools either provide too much information for others to view, did not interest them, or were seen as a violation of privacy. Additional questions also revealed that privacy was a major concern: 69% of the students who utilize Facebook/MySpace chose the privacy option, allowing only friends to view specific information on their pages. This specific question, found in the second section, addressed Opportunity #6.
Opportunity #1 relates to two questions found in the first section of the survey (What do you like most about Facebook/MySpace? and What did you like least about Facebook/MySpace?). Photos, Messages and Profiles ranked first, second and third, respectively, as the items students liked most about Facebook and MySpace. Newsfeeds, Groups and Notes were ranked
first, second and third, respectively, as the items they liked least. Opportunity #2 relates to a survey question found within the third section, which asked students what types of data they uploaded onto Facebook and MySpace. Over 79% of the students stated they uploaded self-photos, 54% stated they uploaded personal information such as hometown and date of birth, and 51% stated they uploaded photos of their friends. Figure 1 shows the percentage of items uploaded to Facebook/MySpace.

Figure 1. Uploaded items to Facebook/MySpace (bar chart of the share of respondents uploading each item: self-photos, photos of friends, e-mail address, list of current classes, relationship status, personal information; original graphic not reproduced)
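Because the original chart did not survive reproduction, the short sketch below re-plots only the three percentages quoted in the text; the remaining categories from the figure are omitted because their values are not reported here, and the use of matplotlib is an assumption.

```python
# Re-plot of the Figure 1 values quoted in the text (assumes matplotlib is
# installed). Only the three reported percentages are shown; the other
# categories in the original chart are omitted because their values are
# not given in the text.
import matplotlib.pyplot as plt

items = ["Self-photos", "Personal info", "Photos of friends"]
share = [0.79, 0.54, 0.51]   # fraction of respondents uploading each item

plt.bar(items, share)
plt.ylim(0, 1)
plt.ylabel("Share of respondents")
plt.title("Items uploaded to Facebook/MySpace (reported values)")
plt.tight_layout()
plt.show()
```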
Challenges #3 and #5 relate to several questions found in the third section. Challenge #3 relates to a question which asked students whether they had ever decided not to post photos of themselves on Facebook/MySpace because the photos might be seen by current or future employers, university administrators and/or teachers, or parents. Most students stated they did not wish to upload photos because they feared current or future employers (61% stated Yes) might see them, more so than university administrators/teachers (43% stated Yes) or parents (36% stated Yes). A following question concerning posting information about themselves revealed similar results.
Challenge #5 relates to two questions that ask whether or not it is okay for either strangers or acquaintances to comment on their profile when you see each other in person. Students were asked to select "Yes", "No", or "Yes - only if they have a reasonable explanation for what they are looking at". Over 35% of the students surveyed answered Yes, that it was okay for strangers to comment, and over 55% answered Yes, that it was okay for acquaintances to comment. However, over 19% of the students chose "Yes - only if they have a reasonable explanation for what they are looking at" for acquaintances, and 17% of the students chose that answer for strangers. See Figure 2.

Figure 2. Can acquaintances comment on profile (original graphic not reproduced)

The survey also revealed interesting facts related to inappropriate sexual and racial material. 50% of the students stated that Facebook and MySpace should be monitored for inappropriate racial material, while 72% of the students stated that these online services should be monitored for inappropriate sexual material. Lastly, 76% stated that they have seen inappropriate sexual material on these sites.

4. Conclusion

There are several implications for utilizing social networking more effectively. It is a tool from which both corporations and millennials can gain benefit. Corporations can use networks to recruit and secure referrals for potential employees. Firms can also incorporate Facebook and MySpace applications on computer workstations to encourage
collaboration, creativity and team building among employees. Millennials can use the sites to network with peers regarding suitable employment opportunities and to highlight professional profiles that are attractive to company recruiters.
In addition, an increasing number of academicians are interested in studying the impact of MySpace/Facebook and other social networks on society. Social networks connect people at low cost and thus have vast applications. Companies can use social networks for advertising. Entrepreneurs and small businesses can expand their contact base. Since businesses operate globally, social networks can make it easier to keep in touch with contacts around the world.
Furthermore, professionals such as doctors, lawyers, accountants and others are adopting social networks to manage institutional knowledge and disseminate peer-to-peer knowledge. In addition, users can tap into the power of social networks for social good. These networks are highly successful in connecting like-minded communities with a broader audience of interested and passionate users.
Lastly, privacy issues are becoming a major concern on social networks. There is growing concern about users giving out too much personal information. Users need to be aware of data theft and viruses. In addition, there is a perceived privacy threat in relation to placing too much personal information in the hands of companies and governmental bodies. These entities can produce a behavior profile of users, on which decisions detrimental to the users may be based. Information posted on MySpace/Facebook has been used by employers, law enforcement agencies, and universities to investigate and prosecute users. Legal experts agree that information on MySpace/Facebook should be considered public information, available for use without restrictions.

References

[1] Perkins, Bart. 2008. Pitfalls of social networking. Computerworld, February 11, 2008; 42(7), p. 44.
[2] Sangaran, Shyla. 2007. Varied appeal of social networking. New Straits Times, Kuala Lumpur, December 10, 2007, p. 15.
Recent Literature in Stochastic Resource Constrained Project Scheduling

Sandra Archer, Robert L. Armacost, Julia Pet-Armacost
University of Central Florida
archer@mail.ucf.edu, armacost@mail.ucf.edu, jpetarma@mail.ucf.edu

Abstract

This article describes recent advances in both predictive scheduling (developing, before work begins, the schedule that will be most robust to uncertainty) and reactive heuristics (rescheduling procedures that a project manager may use in real time as uncertainty occurs) for the Stochastic Resource Constrained Project Scheduling Problem (SRCPSP).

1. Introduction: Predictive and Reactive Scheduling

Stochastic project scheduling is generally concerned with developing an order of tasks within a project that is robust to uncertainty in areas such as resource availability, resource breakdown, or task durations. A robust schedule is "flexible" in the sense that metrics of performance (most commonly project duration) remain as close as possible to optimal when disturbances occur [32]. To account for uncertainty in a project schedule, project managers may use two approaches: a predictive (or proactive) scheduling technique to develop a baseline schedule before work begins, or a reactive scheduling technique that provides decision points once work begins to react or reschedule in light of the outcomes of uncertain events. Managers may apply a combination of both of these techniques during the course of a project.

1.1. Predictive Scheduling

Predictive or proactive scheduling techniques allow a project manager to develop a baseline schedule that protects the project against the effects of uncertainty. Analytical approaches and techniques for developing robust proactive schedules have been extensively studied ([2], [17], [27], [45]). Approaches include redundancy-based techniques that allow extra time in each activity or schedule extra resources. Other approaches include robust machine scheduling techniques to develop schedules that minimize the consequences of the worst-case scenario.

1.2. Predictive Scheduling – Buffering

Goldratt's [11] Critical Chain and Buffer Management method provides a heuristic framework for project managers to plan, schedule, and control their projects, but leaves it up to the user to determine the exact implementation. The method identifies the critical chain, or the set of tasks that results in the longest path to project completion, considering both precedence and resource constraints. A "feeding" buffer is placed at the points in the schedule where resources feed into the critical chain path, to protect the critical chain, while a "project" buffer protects the overall project completion time [12].
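To make the buffering idea concrete, the sketch below sizes a feeding buffer with the widely cited root-square-error rule of thumb. The activity data are hypothetical and this sizing rule is only one of several in use; neither is prescribed by the sources reviewed here.

```python
# Feeding-buffer sizing with the root-square-error rule of thumb:
# buffer = sqrt of the summed squared safety removed from each activity.
# The chain data are hypothetical; Goldratt's method leaves the exact
# sizing rule to the practitioner.
from math import sqrt

# (aggressive estimate, safe estimate) in days for each activity on a
# feeding chain that merges into the critical chain
feeding_chain = [(4, 7), (6, 10), (3, 5)]

def feeding_buffer(chain):
    """Buffer sized from the safety (safe - aggressive) removed per activity."""
    return sqrt(sum((safe - aggressive) ** 2 for aggressive, safe in chain))

print(f"feeding buffer ~ {feeding_buffer(feeding_chain):.1f} days")
# With these numbers: sqrt(3**2 + 4**2 + 2**2) ~ 5.4 days, placed where the
# feeding chain joins the critical chain; a project buffer can be sized the
# same way over the critical chain itself.
```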
Tavares, Ferreira, et al. [36] have applied the concept of using buffers to protect the schedule by adding time buffers throughout a
baseline schedule, although their method does not prohibit resource conflicts from occurring. This method was then developed into an adapted float factor heuristic by Leus [23] and was further adapted to the resource-constrained version by Van de Vonder, Demeulemeester, Herroelen, and Leus [39, 43]. Additionally, Van de Vonder, et al. [40, 41] introduced multiple algorithms to include time buffers in a given schedule while a predefined project due date remains respected. At the same time, Trietsch [37] developed a method involving optimal feeding buffers to protect against tardiness and suggested that the marginal cost of a buffer should match its criticality, even for the case of an unknown project completion time distribution.

1.3. Predictive Scheduling – Other Methods

There have been many recent developments in the area of developing robust baseline schedules. Ke and Liu [20] designed a hybrid intelligent algorithm using stochastic simulation and genetic algorithms to solve the project scheduling problem with stochastic activity durations to minimize total cost under completion time limits. Sakka and El-Sayegh [31] developed a method to quantify risks associated with float loss in construction projects. Zhu, Bard, et al. [46] approached the problem of setting due dates for project activities with random durations as a two-stage integer program to balance project completion cost and lateness penalties.

1.4. Reactive Scheduling: Reacting to Changes

Instead of, or in addition to, anticipating uncertainty, managers must be prepared to make decisions as the project progresses in light of current events. Managers may utilize a technique of contingent schedules, or developing multiple schedules at once. As the project progresses, the project manager may switch to the schedule corresponding to the events that have occurred [6]. This technique, however, focuses on flexibility rather than robustness and is valuable for time-critical reactive scheduling. GERT (Graphical Evaluation and Review Technique) is a tool that allows managers working with stochastic project networks to review projects with a stochastic evolution structure. Many heuristic solutions have been proposed, including those that offer a full-rescheduling option, depending on the project objective function [8].

2. Stochastic Resource Constrained Project Scheduling Problem (SRCPSP)

The classic SRCPSP has the objective of minimizing project duration with deterministic resource availability while activity durations occur stochastically. Techniques to solve the problem include predictive scheduling and reactive scheduling, with the literature dominated by methods to develop a robust baseline schedule. Some research into the timing of activity variability and its effect on overall project duration has shown that projects may benefit from efforts to reduce activity variability early rather than late in the schedule [13].

2.1. Predictive Scheduling for the SRCPSP

Robust project scheduling includes mathematical programming models for the generation of stable baseline schedules under the assumption that the proper amount of resources can be acquired if booked in advance based on the pre-schedule and that a single activity disruption (duration increase) may occur during schedule execution [16].
The same authors have also used a resource flow network resource allocation model that protects a given baseline schedule against activity duration variability [24].
Stochastic task durations can be approached as a multi-stage decision process that uses so-called "scheduling policies" [34]. Scheduling policies have also been employed to minimize the expected project duration by developing multi-stage stochastic programming problems ([9], [10], [28]). Branch and bound methods have also been applied [19], [34], [35].
Al-Fawzan and Haouari [1] provided a tabu search algorithm to generate a baseline schedule for the resource-constrained problem with stochastic task durations that minimizes makespan while maximizing robustness. However, Kobylanski and Kuchta [22] questioned the methods in a note on their paper and provided their own definitions and methods for determining a robust schedule. They suggested fixing a makespan for the project and then determining a robust schedule for this fixed makespan, with robustness measured by making the minimal free slack, or the minimal ratio of free slack to activity duration, as high as possible. Azaron, Perkgoz, et al. [3] evaluated the time-cost trade-off with a multi-objective genetic algorithm that allocates resources as the decision variable while optimizing project cost, mean completion time, and the probability that the project exceeds a time threshold.
Ballestin [4] evaluated when heuristic algorithms should be used to solve problems with stochastic task durations instead of working with the deterministic problem, and provided heuristics to develop the baseline schedule when activity durations are given by a known distribution. The first algorithm is based on regret-based biased random sampling, which uses a schedule generation scheme, constructing different activity lists and using a priority rule to calculate each activity's probability of being selected. The second is based on Hartmann's [14] genetic algorithm and demonstrates the adaptability of this method to the stochastic RCPSP. Ballestin and Leus's [5] work continued with the development of a Greedy Randomized Adaptive Search Procedure heuristic to develop baseline schedules that minimize expected project makespan; they evaluated the heuristic by studying the distribution of the possible makespans. Deblaere, Demeulemeester, et al. [7] provided three integer programming-based heuristics and one constructive procedure to develop a baseline schedule and resource allocation plan for the RCPSP with activity duration variability to maximize stability.
Other approaches include that of Long and Ohsato [26], who provided a genetic algorithm for the RCPSP with stochastic activity durations so that optimal project duration is minimized. Shih [33] proposed a greedy method for resource allocation to reduce completion time for the RCPSP with stochastic task durations.
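Several of the heuristics surveyed above are built on a schedule generation scheme driven by an activity list or priority rule. The following sketch is a bare-bones serial scheme for a toy single-resource instance with invented data; it is a didactic illustration of the idea, not a reimplementation of any of the cited algorithms.

```python
# A bare-bones serial schedule generation scheme (SGS) for a toy RCPSP
# instance with one renewable resource. Activities are taken in a fixed
# priority order and started at the earliest precedence- and
# resource-feasible time. Didactic only; data are hypothetical.

# activity: (duration, resource demand, set of predecessors)
activities = {
    "A": (3, 2, set()),
    "B": (4, 1, {"A"}),
    "C": (2, 2, {"A"}),
    "D": (3, 1, {"B", "C"}),
}
CAPACITY = 3
priority_order = ["A", "C", "B", "D"]  # e.g. produced by a priority rule

def serial_sgs(acts, order, capacity):
    start, finish = {}, {}
    for name in order:
        dur, demand, preds = acts[name]
        t = max((finish[p] for p in preds), default=0)
        # push the start time forward until the resource profile fits
        while any(
            demand + sum(
                acts[o][1] for o in start if start[o] <= tau < finish[o]
            ) > capacity
            for tau in range(t, t + dur)
        ):
            t += 1
        start[name], finish[name] = t, t + dur
    return start, finish

start, finish = serial_sgs(activities, priority_order, CAPACITY)
print("start times:", start, "makespan:", max(finish.values()))
```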
duration as high as possible. Azaron,
Perkgoz, et al. [3] evaluated the time-cost 2.2. SRCPSP Approach: Critical Path
trade off with a multi-objective genetic and Buffering Methods
algorithm that allocates resources as the Grey [12] provided a detailed overview
decision variable while optimizing project of Critical Chain Buffer Management and its
cost, mean completion time, and the application for the RCPSP, and included a
probability that the project exceeds a time discussion of critiques of the method. One
threshold. concern is that the technique generates a
Ballestin [4] evaluated when heuristic baseline schedule by solving the
algorithms should be used to solve problems deterministic RCPSP and subsequently
with stochastic task durations instead of inserts buffers for robustness, instead of
working with the deterministic problem and solving a stochastic RCPSP [15], which may
provided heuristics to develop the baseline be an oversimplification of the problem [18].
schedule when activity durations are given Most of the research related to the use of
by a known distribution. The first algorithm buffers for project scheduling is in the job
is based on regret-based biased random shop or machine scheduling literature and
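As a concrete illustration of the slack-based robustness measures attributed to Kobylanski and Kuchta above, the following minimal Python sketch computes, for a toy baseline schedule, the free slack of each activity and the two aggregate criteria mentioned (the minimal free slack and the minimal ratio of free slack to activity duration). The activity data, names, and the handling of terminal activities are illustrative assumptions, not material from the cited work.

```python
# Illustrative only: a toy baseline schedule with hypothetical activities.
# Each activity has a planned start, a duration, and a list of successors.
activities = {
    "A": {"start": 0, "dur": 3, "succ": ["B", "C"]},
    "B": {"start": 4, "dur": 2, "succ": ["D"]},
    "C": {"start": 3, "dur": 5, "succ": ["D"]},
    "D": {"start": 9, "dur": 1, "succ": []},
}
project_end = 10  # planned makespan of the baseline schedule


def free_slack(name):
    """Gap between an activity's finish and the earliest start of its successors
    (or the project end, for terminal activities) in the baseline schedule."""
    act = activities[name]
    finish = act["start"] + act["dur"]
    if act["succ"]:
        next_start = min(activities[s]["start"] for s in act["succ"])
    else:
        next_start = project_end
    return next_start - finish


# Aggregate robustness criteria in the spirit of the slack-based measures above:
min_free_slack = min(free_slack(a) for a in activities)
min_slack_ratio = min(free_slack(a) / activities[a]["dur"] for a in activities)

print(min_free_slack, round(min_slack_ratio, 2))
```

A schedule with a larger minimal free slack (or slack-to-duration ratio) leaves more room for activities to run late without disturbing their successors, which is the intuition behind maximizing these measures.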


2.2. SRCPSP Approach: Critical Path and Buffering Methods

Grey [12] provided a detailed overview of Critical Chain Buffer Management and its application to the RCPSP, and included a discussion of critiques of the method. One concern is that the technique generates a baseline schedule by solving the deterministic RCPSP and subsequently inserts buffers for robustness, instead of solving a stochastic RCPSP [15], which may be an oversimplification of the problem [18]. Most of the research related to the use of buffers for project scheduling is in the job shop or machine scheduling literature and includes time and resource redundancy [12].

Rabbani, Fatemi Ghomi, et al. [29] developed a new heuristic for the RCPSP with stochastic task durations by using concepts from critical chain to determine the finish times of activities at certain decision points. Additionally, Van de Vonder, Demeulemeester, et al. [44] proposed a heuristic algorithm to protect the starting times of intermediate activities when multiple activity disruptions occur, by adding intermediate buffers to a minimal-duration RCPSP schedule.

Kim and De La Garza [21] proposed and evaluated the resource-constrained critical path method (RCPM), which allows for the identification of alternative schedules by identifying the critical path and float data. Liu, Song, et al. [25] proposed a multi-objective model for the RCPSP and used a critical chain scheduling approach to insert feeding and project buffers into the approximate optimal project schedule to enhance stability. Grey [12] suggested the possibility of using feeding and project buffers for the Stochastic Task Insertion (STI) case as an area of future research.

Tukel et al. [38] introduced two new buffering methods for determining feeding buffer sizes in critical chain project scheduling for the SRCPSP with stochastic task durations: one method uses resource utilization and the other uses network complexity.

2.3. Reactive Scheduling for the SRCPSP

Van de Vonder, Demeulemeester, et al. [39, 42] evaluated several predictive-reactive SRCPSP procedures with the objective of maximizing stability and the probability of timely project completion. Herroelen and Leus [17] described predictive-reactive scheduling techniques as those that repair a baseline schedule to account for unexpected events, and described the various reactive strategies a project manager may take. On one hand, a reactive effort may be a very simple schedule repair such as the right-shift rule [30], which moves all affected activities forward in time and does not re-sequence activities. More complex techniques suggested by the authors include a full rescheduling of the remaining portion of the schedule. This full rescheduling may be completed with a different set of objectives than those used to construct the original baseline schedule, because an additional objective may include the minimization of deviations from the baseline schedule. The authors refer to this rescheduling case with new objectives as "ex post stability" measures of performance.
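To illustrate the kind of simple repair the right-shift rule performs, the following is a minimal Python sketch with hypothetical precedence data and names (not code from any cited work). It applies a duration disruption and then pushes every affected successor forward just enough to restore precedence feasibility, without re-sequencing anything; resource feasibility is deliberately ignored in this sketch.

```python
# Illustrative right-shift repair: push affected successors later in time,
# never re-sequencing activities. Precedence graph and times are hypothetical.
successors = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
start = {"A": 0, "B": 4, "C": 3, "D": 9}
duration = {"A": 3, "B": 2, "C": 5, "D": 1}


def right_shift(disrupted, new_duration):
    """Apply the disruption, then repair by shifting successors to the right."""
    duration[disrupted] = new_duration
    # Process activities in the topological order induced by baseline start times.
    for act in sorted(start, key=start.get):
        finish = start[act] + duration[act]
        for succ in successors[act]:
            # Delay the successor only if the (possibly shifted) predecessor
            # now finishes after the successor's current start.
            start[succ] = max(start[succ], finish)


right_shift("A", 6)   # activity A slips from 3 to 6 time units
print(start)          # {'A': 0, 'B': 6, 'C': 6, 'D': 11}
```

Because the repair only delays starts, the original activity sequence is preserved, which is exactly what distinguishes this simple fix from the full rescheduling strategies discussed above.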


References

[1] Al-Fawzan, M.A. and M. Haouari, A bi-objective model for robust resource-constrained project scheduling. International Journal of Production Economics, 2005. 96(2): p. 175.

[2] Aytug, H., M.A. Lawley, K. McKay, S. Mohan, and R. Uzsoy, Executing production schedules in the face of uncertainties: A review and some future directions. European Journal of Operational Research, 2005. 161: p. 86-110.

[3] Azaron, A., C. Perkgoz, and M. Sakawa, A genetic algorithm approach for the time-cost trade-off in PERT networks. Applied Mathematics and Computation (New York), 2005. 168(2): p. 1317-1339.

[4] Ballestín, F., When it is worthwhile to work with the stochastic RCPSP? Journal of Scheduling, 2007. 10(3): p. 153.

[5] Ballestin, F. and R. Leus, Resource-constrained project scheduling for timely project completion with stochastic activity durations. 2007.

[6] Davenport, A.J. and J.C. Beck, A survey of techniques for scheduling with uncertainty. Unpublished manuscript, 2002. Available from: http://www.eil.utoronto.ca/profiles/chris/chris.papers.html

[7] Deblaere, F., E. Demeulemeester, W. Herroelen, and S.V.d. Vonder, Robust resource allocation decisions in resource-constrained projects. Decision Sciences, 2007. 38(1): p. 5.

[8] Demeulemeester, E.L. and W. Herroelen, Project scheduling: a research handbook. International series in operations research & management science; 49. 2002, Boston: Kluwer Academic Publishers. xxiii, 685 p.

[9] Fernandez, A.A., R.L. Armacost, and J.J. Pet-Edwards, Understanding simulation solutions to resource constrained project scheduling problems with stochastic task durations. Engineering Management Journal, 1998. 10(4): p. 5.

[10] Fernandez, A.A., R.L. Armacost, and J.J.A. Pet-Edwards, The role of the nonanticipativity constraint in commercial software for stochastic project scheduling. Computers & Industrial Engineering, 1996. 31(1,2): p. 233.

[11] Goldratt, E., Critical Chain. 1997, Great Barrington: The North River Press Publishing Corporation.

[12] Grey, J., Buffer techniques for stochastic resource constrained project scheduling with stochastic task insertions problems. 2007.

[13] Gutierrez, G.J. and A. Paul, Robustness to variability in project networks. IIE Transactions, 2001. 33(8): p. 649.

[14] Hartmann, S., A competitive genetic algorithm for resource-constrained project scheduling. Naval Research Logistics, 1998. 45(7): p. 733-750.

[15] Herroelen, W. and R. Leus, On the merits and pitfalls of critical chain scheduling. Journal of Operations Management, 2001. 19(5): p. 559.

[16] Herroelen, W. and R. Leus, The construction of stable project baseline schedules. European Journal of Operational Research, 2004a. 156(3): p. 550-565.

[17] Herroelen, W. and R. Leus, Project scheduling under uncertainty: Survey and research potentials. European Journal of Operational Research, 2005a. 165(2): p. 289.

[18] Herroelen, W., R. Leus, and E. Demeulemeester, Critical chain project scheduling: Do not oversimplify. Project Management Journal, 2002. 33(4): p. 48-60.

[19] Igelmund, G. and F.J. Radermacher, Preselective strategies for the optimization of stochastic project networks under resource constraints. Networks, 1983a. 13(1 Spr): p. 1-28.

[20] Ke, H. and B. Liu, Project scheduling problem with stochastic activity duration times. Applied Mathematics and Computation (New York), 2005. 168(1): p. 342-353.

[21] Kim, K. and J.M. De La Garza, Evaluation of the resource-constrained critical path method algorithms. Journal of Construction Engineering and Management, 2005. 131(5): p. 522-532.

[22] Kobylanski, P. and D. Kuchta, A note on the paper by M. A. Al-Fawzan and M. Haouari about a bi-objective problem for robust resource-constrained project scheduling. International Journal of Production Economics, 2007. 107(2): p. 496-501.

[23] Leus, R., The generation of stable project plans--complexity and exact algorithms. Ph.D. Thesis, 2003. Department of Applied Economics, Katholieke Universiteit Leuven.

[24] Leus, R. and W. Herroelen, Stability and resource allocation in project planning. IIE Transactions, 2004. 36(7): p. 667.

[25] Liu, S.-X., J.-H. Song, and J.-F. Tang, Critical chain based approach for resource-constrained project scheduling. Zidonghua Xuebao/Acta Automatica Sinica, 2006. 32(1): p. 60-66.

[26] Long, L.D. and A. Ohsato, Solving the resource-constrained project scheduling problem by genetic algorithm. Journal of Japan Industrial Management Association, 2007. 57(6): p. 520-529.

[27] Mehta, S.V. and R.M. Uzsoy, Predictable scheduling of a job shop subject to breakdowns. IEEE Transactions on Robotics and Automation, 1998. 14(3): p. 365-378.

[28] Pet-Edwards, J., B. Selim, R.L. Armacost, and A. Fernandez, Minimizing risk in stochastic resource-constrained project scheduling. Paper presented at the INFORMS Fall Meeting, Seattle, October 25-28, 1998.


[29] Rabbani, M., S.M.T. Fatemi Ghomi, F. Jolai, and N.S. Lahiji, A new heuristic for resource-constrained project scheduling in stochastic networks using critical chain concept. European Journal of Operational Research, 2007. 176(2): p. 794-808.

[30] Sadeh, N., S. Otsuka, and R. Schedlback, Predictive and reactive scheduling with the microboss production scheduling and control system. In: Proceedings of the IJCAI-93 Workshop on Knowledge-Based Production Planning, Scheduling and Control, pp. 293-306, 1993.

[31] Sakka, Z.I. and S.M. El-Sayegh, Float consumption impact on cost and schedule in the construction industry. Journal of Construction Engineering and Management, 2007. 133(2): p. 124-130.

[32] Selim, B.R., Robustness measures for stochastic resource constrained project scheduling. 2002, University of Central Florida: United States -- Florida.

[33] Shih, N.-H., On the project scheduling under resource constraints. Journal of the Chinese Institute of Industrial Engineers, 2006. 23(6): p. 494-500.

[34] Stork, F., Branch-and-bound algorithms for stochastic resource-constrained project scheduling. Research Report No. 702/2000, Technische Universität Berlin, 2000.

[35] Stork, F., Stochastic resource-constrained project scheduling. Ph.D. Thesis, Technische Universität Berlin, 2001.

[36] Tavares, L.V., J.A.A. Ferreira, and J.S. Coelho, On the optimal management of project risk. European Journal of Operational Research, 1998. 107(2): p. 451.

[37] Trietsch, D., Optimal feeding buffers for projects or batch supply chains by an exact generalization of the newsvendor result. International Journal of Production Research, 2006. 44(4): p. 627-637.

[38] Tukel, O.I., W.O. Rom, and S.D. Eksioglu, An investigation of buffer sizing techniques in critical chain scheduling. European Journal of Operational Research, 2006. 172(2): p. 401-416.

[39] Van de Vonder, S., E. Demeulemeester, and W. Herroelen, An investigation of efficient and effective predictive-reactive project scheduling procedures. 2004: Katholieke Universiteit Leuven, Faculty of Economics and Applied Economics, Department of Applied Economics.

[40] Van de Vonder, S., E. Demeulemeester, and W. Herroelen, Heuristic procedures for generating stable project baseline schedules. 2005a: Katholieke Universiteit Leuven, Faculty of Economics and Applied Economics, Department of Applied Economics.

[41] Van de Vonder, S., E. Demeulemeester, and W. Herroelen, Proactive heuristic procedures for robust project scheduling: An experimental analysis. European Journal of Operational Research, 2006a. In Press, Corrected Proof.

[42] Van de Vonder, S., E. Demeulemeester, and W. Herroelen, A classification of predictive-reactive project scheduling procedures. Journal of Scheduling, 2007b. 10(3): p. 195.

[43] Van de Vonder, S., E. Demeulemeester, W. Herroelen, and R. Leus, The use of buffers in project management: The trade-off between stability and makespan. International Journal of Production Economics, 2005b. 97(2): p. 227.

[44] Van de Vonder, S., E. Demeulemeester, W. Herroelen, and R. Leus, The trade-off between stability and makespan in resource-constrained project scheduling. International Journal of Production Research, 2006b. 44(2): p. 215.

[45] Wu, S.D., R.H. Storer, and P.-C. Chang, One-machine rescheduling heuristics with efficiency and stability as criteria. Computers & Operations Research, 1993. 20(1): p. 1.

[46] Zhu, G., J.F. Bard, and G. Yu, A two-stage stochastic programming approach for project planning with uncertain activity durations. Journal of Scheduling, 2007. 10(3): p. 167-180.

Authors

Sandra Archer, M.S., is a Ph.D. candidate, and Robert L. Armacost, D.Sc., and Julia Pet-Armacost, Ph.D., are Associate Professors in the Department of Industrial Engineering and Management Systems at the University of Central Florida.


Design and Manufacturing of an Acoustic Panels Test Frame

Michael Ferguson, Univ. of North Florida, Ferm0013@unf.edu
Jesse Parkhurst, Univ. of North Florida, parj0034@unf.edu
Alexandra Schonning, Univ. of North Florida, aschonni@unf.edu
Simeon Ochi, Goodrich Corporation, simeon.ochi@goodrich.com

Abstract

The University of North Florida, in collaboration with the Goodrich Corporation, has designed a test frame to be used in investigating the mechanical properties of acoustic panels. Strain and deflection of the panel will be measured while the panel supports a uniform pressure applied through a vacuum loading device. Design and manufacturing details of the frame are presented.

1. Introduction

The purpose of this project is to characterize the strength of composite material panels. This will be achieved by conducting experiments to measure the strain and deflection of a plate fabricated from the material while the plate is subjected to a quasi-static load. With this information the strength characteristics can be calculated. The material has many practical uses. One known application is the construction of sonar windows for Autonomous Underwater Vehicles (AUVs). The sonar windows need to be strong enough to absorb an impact while not disturbing the sonar signal passing through the nosecone.

The research was enabled by a grant from the National Science Foundation (NSF), which funds the Manufacturing Innovation Partnership (MIP) program at the University of North Florida (UNF) [1]. MIP was created to enhance the educational experience of the students and to create a design and manufacturing center for local companies to aid in the economic and technical development of the Northeast Florida region. A large number of other projects have been undertaken by MIP [2-11]. These projects help students gain a better understanding of engineering applications. While the program is not part of the regular curriculum, it provides active learning opportunities to the students. The two students working on this particular project have completed courses in Strength of Materials, Machine Design, and related subjects. Active learning is supported by NSF [12] and has been found beneficial to students in a variety of fields, including engineering [13-23].

2. Conceptual Design Ideas

One of the objectives was to design and build a test rig for testing the static strength of composite material panels used for AUV sonar windows. The material to be tested would be in the form of panels 20 by 20 inches in size with variable thickness. With this known, constraints were formalized. The test fixture should disperse the load evenly over the panels in a quasi-static fashion. The test fixture must be able to measure the deflection and strain in the material when the load is applied, and the measurements must be easily viewable. With these constraints, preliminary designs were discussed.

The first design considered was one that would load the panels from the top. A column would be created to hold water and would evenly distribute the load across the top of the panel. This design was dismissed for practical reasons: if the panel failed, a huge column of water would be dispersed onto the testing floor and cleanup would be excessive. A column of sand was then considered, because sand is denser and less material would be needed to apply the same load. It was dismissed for the same reason as the water column. The next design idea involved application of pressure underneath the panel. With this setup the panel could have strain gauges to measure the strain in the material, and a linear variable differential transformer (LVDT) could be placed above the panel to measure the displacement of the material.


This design was dismissed because, if the panel were to fail, small pieces would fly outward and possibly injure someone. After these dismissals a final design was found: a vacuum would be pulled from underneath the panel to reduce the chance of pieces flying outward. Strain gauges would be placed on the bottom surface of the panel to measure strain, and an LVDT would be mounted above to measure the deflection of the center of the panel. Using a vacuum pump, the vacuum could be applied slowly to the panel to form the static load. This final design was chosen because of its safety and because it met all of the constraints. The preliminary design is shown in Figure 1.

Figure 1. Preliminary design

3. Detailed Design

After approval of this preliminary design, a rough model was created on the computer using the Rhinoceros 3.0 CAD program. The basic components of the test frame are a flat square plate used as the base, four wall sections, one loading and accessory rail, two vertical support beams, and four gussets. Once it was drawn in the CAD program, a few minor changes were needed. The walls of the test frame were originally going to be two inches tall by ½ inch thick, allowing the panel to deflect almost two inches. This first conceptual design is shown in Figure 2. Initial calculations suggested that the deflection of the panels could be as much as 5 inches, so the walls were changed to a 6-inch tall C-section steel. The C-section steel was used to save weight over a solid bar while not greatly sacrificing strength. A 21 inch by 21 inch steel plate was used for the bottom portion of the frame and was welded to the walls of the frame.

Figure 2. First base design

Upon further calculations using a finite element analysis program, it was determined that the panels were not likely to deflect more than a maximum of 2 inches. The walls were therefore changed to C3x5.5 steel, a three-inch tall C-section that would still provide enough room for the panels to deflect. The design was approved by the company liaison. In summary, a flat steel plate of size 21 by 21 inches would be the base of the test rig, and the walls would be made up of C3x5.5 steel members cut at 45° angles at the corners. This steel would also be used for the loading and accessory rail located above the test panel. The loading rail would be supported by two steel gussets located on two opposing walls. The gussets would be welded to the outside of the wall, as shown in Figure 3. Each gusset has two ¼ inch bolt holes that support a vertical ten-inch C3x5.5 bar. The two vertical C3x5.5 bars support the 23¾ inch loading and accessory bar with four ¼ inch bolts. The loading rail will support the LVDT during tests, and during transportation the rail can be used to hoist the test rig. The lifting eyelets are 1.5 inches and are located on the vertical support bars.
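As a rough cross-check of the kind of deflection estimate reported above, the classical thin-plate result for a simply supported square plate under uniform pressure can be written as follows; this is a generic textbook formula offered only as a sketch, not the project's finite element analysis, and the symbols below are assumptions rather than values from the paper:

```latex
w_{\max} \;\approx\; 0.00406\,\frac{q\,a^{4}}{D},
\qquad
D \;=\; \frac{E\,t^{3}}{12\,(1-\nu^{2})}
```

Here q is the uniform pressure (bounded above by roughly 14.7 psi for a full vacuum under the panel), a = 20 in is the span, t the panel thickness, E and ν the effective modulus and Poisson's ratio of the composite, and D the flexural rigidity. The 0.00406 coefficient applies to simply supported isotropic plates; clamped edges or an orthotropic laminate would change both the coefficient and D, which is why the finite element model remains the authoritative estimate for this rig.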


Figure 3. Gussets

Three holes, two with a ¾ inch diameter and a single ½ inch hole, would be drilled into the wall for instrumentation installation. One is used to connect the vacuum pump to the rig. Another port measures the pressure inside the vessel. These instruments can be seen in Figure 4.

Figure 4. Instrumentation

The third hole allows the wires from the gauges to be fed from the bottom of the panels to the computer; to preserve the vacuum, a rubber stopper is used to seal this hole. The strain gauges will be placed on the underside of the panel. A flange was designed to hold the test panel securely in place and to help maintain an airtight seal. The flange is held down by eight bolts, with a bolt hole at every corner and one at the midpoint between the corners. The flange can be seen in Figure 5.

Figure 5. Flange

The bolt size and grade had to be increased once tests were conducted. The original bolts were number 10 fine thread, but they failed once pressure was exerted onto the plate. This failure is likely due to the shear stress in the bolts exceeding the shear strength of the material. The bolt size was increased to ¼ inch. Even after the increase in size the bolts still failed, so the bolt size and grade were increased to 5/16 inch and grade eight, respectively. Another test was conducted, during which the bolts did not fail.
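A simple single-shear estimate illustrates why small fasteners are marginal here. The Python sketch below uses illustrative assumptions only (a 20 in by 20 in panel at full vacuum, the load shared evenly by eight bolts in single shear, and a placeholder allowable shear stress); it is not the project's actual load path or bolt analysis.

```python
import math

# Illustrative assumptions (not measured project values):
pressure_psi = 14.7          # upper bound: full vacuum under the panel
panel_side_in = 20.0         # 20 in x 20 in test panel
n_bolts = 8                  # flange bolts assumed to share the load evenly
bolt_dia_in = 0.190          # nominal diameter of a #10 screw
allowable_shear_psi = 20000  # placeholder allowable shear stress

total_load_lbf = pressure_psi * panel_side_in ** 2        # ~5,880 lbf
load_per_bolt_lbf = total_load_lbf / n_bolts              # ~735 lbf
bolt_area_in2 = math.pi / 4 * bolt_dia_in ** 2            # ~0.028 in^2
shear_stress_psi = load_per_bolt_lbf / bolt_area_in2      # ~25,900 psi

print(f"total load  : {total_load_lbf:.0f} lbf")
print(f"per bolt    : {load_per_bolt_lbf:.0f} lbf")
print(f"shear stress: {shear_stress_psi:.0f} psi "
      f"({'OK' if shear_stress_psi <= allowable_shear_psi else 'over allowable'})")
```

Increasing the diameter to 5/16 in multiplies the shear area by about (0.3125 / 0.190)^2 ≈ 2.7, and a grade 8 fastener raises the allowable stress as well, which is consistent with the larger, higher-grade bolts surviving the later test.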


4. Manufacturing

The C3x5.5 steel bar was cut into four pieces, each 20.82 inches long with 45-degree cuts at the ends. These wall sections were welded together using a double V-groove weld to ensure air tightness. The wall was welded to the base plate using a fillet weld. All welds were cleaned up for appearance. The tops of the walls were milled down to ensure a clean, level mating surface for the panel. After milling, hand filing was used to further improve the surface finish. The flange material was cut to the same length as the wall, and the pieces of the flange were also welded together with a double V-groove weld. The pieces were slightly arced to allow the panel to be pretensioned, and the flange was milled flat to give a good seal. Four gussets were cut from a sheet of steel to the desired dimensions. The edges had to be ground to ensure a tight fit against the C3x5.5 wall. Two ¼ inch holes were drilled into each gusset for bolting the vertical supports, and the gussets were welded onto the outside of the wall. Two ten-inch sections were cut from the C3x5.5 steel bar; these were used for the vertical supports. Six ¼ inch holes were drilled into the vertical supports: four bolt holes matching the gusset bolt pattern at the bottom and two bolt holes at the top to attach the loading and accessory rail. The final piece cut was a 23.640 inch long C3x5.5 section used for the loading and accessory rail. Four bolt holes were drilled into it at either end to match the holes in the vertical support columns. The vertical columns were bolted to the gussets, and the loading and accessory rail was bolted to the vertical supports. To ensure an airtight seal, a gasket material was needed. A latex-based gasket was used first; it was airtight but extremely hard to remove, and it caused the surface to pit, which in the long run would destroy the airtight seal. A silicone-based gasket was used next and worked well, providing an airtight seal. The silicone was easily removed without pitting or other detrimental effects. The final product is shown in Figure 6.

Figure 6. Completed Frame

5. Conclusion

In collaboration with the Goodrich Corporation, undergraduate students have designed and manufactured a test frame to be used in testing acoustic panels. Design ideas were produced and refined to arrive at the preliminary design. A model was developed in a CAD program and was easily changed to fit new calculations. The frame was built on site at UNF with donated materials and UNF-purchased machinery. Tests were conducted, and the strength of the composite materials was analyzed and determined. Through this project the undergraduate researchers gained considerable knowledge of design and manufacturing processes. Future work will be conducted on this frame; it will be modified to allow dynamic testing to be performed on the panels.

6. ACKNOWLEDGEMENTS

These efforts are partially sponsored by the National Science Foundation, grant EEC-0438582, and the Goodrich Corporation.

7. REFERENCES

1. Daniel J. Cox, Alexandra Schonning, Neal Coulter, Florida's First Coast Manufacturing Innovation Partnership (MIP). National Science Foundation Grant Proposal, 2004. Award Number IIP-0438582 (2004-2007).
2. Anthony Barletta, Alexandra Schonning, Michael Patney, Ryan Cotton, Testing and Comparison of the Mechanical Properties of Six Commercial Bone Cements. Experimental Techniques, to be submitted July 2007.
3. Alexandra Schonning, Daniel Cox. Fostering Undergraduate Research through Collaboration with Local Engineering Companies. In Proceedings of IMECE 2007, ASME International Mechanical Engineering Congress and Exposition. 2007. Seattle, Washington.


4. Schonning, Alexandra. Biomechanical Applications of Computers in Engineering Education. In Proceedings of the ASME 2007 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference. 2007. Las Vegas, NV.
5. Richelle Atienza, Alexandra Schonning, Daniel Cox. Student Perspective of Academia and Industry Collaboration. In 12th International Conference on Industry, Engineering and Management Systems. 2006. Cocoa Beach, FL.
6. Spencer Schwab, Stephen Thompson, Alexandra Schonning. Finite Element Analysis of a Pool Cue. In 12th International Conference on Industry, Engineering and Management Systems. 2006. Cocoa Beach, FL.
7. Anthony Barletta, William Berry, Alexandra Schonning. Adjustable Tap Stop Design. In 12th International Conference on Industry, Engineering and Management Systems. 2006. Cocoa Beach, FL.
8. Alexandra Schonning, Daniel Cox. Industry-Academia Computer Aided Engineering Undergraduate Research Projects. In ASME 2006 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference. 2006. Philadelphia, PA.
9. Hands on Efforts Designed to Increase Design and Manufacturing Preparedness of Undergraduate Students. In ASME International Mechanical Engineering Congress and Exposition. 2006. Chicago, IL.
10. Alexandra Schonning, Dan Cox. Enhancing Undergraduate Mechanical Engineering Education with Computer Aided Engineering. In ASME 2005 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference. 2005. Long Beach, CA.
11. A. Schonning, D. Cox. Development of a Mentoring Program to Improve Engineering Education Training among College and Pre-College Students. In 2005 International Conference on Industry, Engineering, and Management Systems (IEMS). 2005. Cocoa Beach, Florida.
12. Carolyn Meyers, Edward W. Ernst, NSF 95-96, in Restructuring Engineering Education: A Focus on Change (Report of an NSF Workshop on Engineering Education). 1995, National Science Foundation; Directorate for Education and Human Resources/Division of Undergraduate Education.
13. J. Brooks, M. Brooks. In Search of Understanding: The Case for Constructivist Classrooms. In Association for Curriculum Development. 1993. Alexandria, VA.
14. Kolb, D., Towards an applied theory of experiential learning. In Theories of Group Processes, ed. C. Cooper. 1975: Wiley, London. pp. 33-57.
15. Jonassen, D. H., Evaluating constructivistic learning. Educational Technology, 1991. 31(9): p. 28-33.
16. Borzak, L., Field Study: A Sourcebook for Experiential Learning. 1981: Beverly Hills: Sage Publications.
17. K. L. Ruhl, C.A. Hughes, P.J. Schloss, Teacher Education and Special Education, 1987. 10 (Winter): p. 14-18.
18. I.J. Russell, W.D. Hendricson, R.J. Herbert, Effects of lecture information density on medical student achievement. Journal of Medical Education, 1984. 59: p. 881-889.
19. A. E. Austin, R.G. Baldwin, Faculty Collaboration: Enhancing the Quality of Scholarship and Teaching. 1991, Washington, DC: The George Washington University, School of Education and Human Development.
20. E. Bodrova, D.J. Leong, Tools of the Mind: The Vygotskian Approach to Early Childhood Education. 1996, Englewood Cliffs, New Jersey: Merrill, an imprint of Prentice Hall.
21. J. Krajcik, C. Czerniak, C. Berger, Teaching Children Science: A Project-Based Approach. 1999: McGraw-Hill Publishers.
22. Hake, Richard, Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Association of Physics Teachers, 1998. 66(1): p. 64-74.
23. Prince, Michael, Does Active Learning Work? A Review of the Research. Journal of Engineering Education, 2004. 93(3): p. 223-231.


A Model for Audit Brainstorming: The Search for Material Misstatements and Fraud

LuAnn Bean
Florida Institute of Technology
Lbean@fit.edu

Jeff Manzi
Kaplan Financial Education
Jeff.Manzi@kaplan.com

Abstract

Both Statements on Auditing Standards (SAS) No. 99 and No. 109 require that auditors conduct brainstorming sessions during every audit for purposes of identifying material misstatements and fraud in company operations and financial statements. This paper explores a concurrent model for this process.

1. Introduction

In an effort to enhance auditor understanding of the risks faced by a company and improve application of the audit risk model, the Auditing Standards Board (ASB) has issued several standards as part of new auditing guidance. One of these, Statement on Auditing Standards (SAS) No. 99, Consideration of Fraud in a Financial Statement Audit, was released in 2002 [1]. It requires that auditors conduct a brainstorming session during every audit for the major purpose of bringing audit team members together to generate ideas about how and where fraud could occur in a client's company. The resulting ideas then set the stage for suggestions about ways to detect possible frauds.

A second standard, SAS No. 109 (Understanding the Entity and Its Environment and Assessing the Risks of Material Misstatements), issued in 2006, has also expanded client risk inquiry through brainstorming sessions [2]. Under this standard, brainstorming sessions are required to target the susceptibility of client financial statements to material misstatements. The ASB further boosted the potential for this standard to uncover risk by requiring the auditor with final responsibility for the audit to participate in these brainstorming sessions.

Given this relatively new focus on brainstorming, and the added inference in SAS No. 109 that concurrent brainstorming would be acceptable for both SAS No. 99 and SAS No. 109, many practitioners agree that this is the most cost-effective way to go [5, 14]. However, there is a void in the literature explaining how this concurrent model could be effectively undertaken. The remainder of this article examines brainstorming basics, suggests ways to customize the process through a synectics model, and proposes targeted queries to maximize risk detection when using a combined brainstorming session for both standards.

2. Brainstorming Basics

Brainstorming in the professional accounting environment was initiated as an innovative approach for auditors to examine each client's operations with a greater degree of skepticism at a more customized level. As such, the process is relatively new to many accountants. In an effort to maximize the benefits and address the challenges of these sessions, many have turned to best-practices experts such as Beasley and Jenkins [3], who suggest that several rules are necessary for leaders to follow when planning effective audit brainstorming sessions. These basic rules include:


• Scheduling brainstorming sessions in advance, with sufficient time available to effectively achieve the objective.
• Developing a brainstorming agenda and assigning each participant "homework" prior to the session.
• Setting rules prior to the session so that effective communication is achieved between audit team members.
• Setting a cooperative climate for the session that encourages involvement.
• Making it clear that individuals participating will not be criticized for their ideas.
• Encouraging idea generation through the recasting of the client's operations from different perspectives.
• Recognizing the group as a whole for the ideas generated, rather than selectively praising specific individuals for their thoughts.
• Managing both the size and composition of the group of auditors participating in the sessions, so that outcomes are maximized.

In addition, Beasley and Jenkins [3] present several formats for conducting financial statement audit brainstorming sessions, including open discussion, round-robin, and cyber-sessions. In open brainstorming, the discussion is unstructured and members suggest ideas in a free-flowing manner. The drawbacks of this approach are that (1) the quantity of ideas may push aside the generation of quality ideas, and (2) the process commonly breaks down so that only one or a few members participate. Round-robin sessions require that members plan or anticipate issues in advance of the meeting in order to achieve a more efficient idea-generation process. From the standpoint of generating quality ideas, this method is preferred, even though ideas may lack spontaneity. For diverse teams covering multiple locations, cyber-brainstorming can be an effective technique because it allows members to meet remotely using collaborative software. Moreover, this technique can enhance the anonymity of contributions.

3. Brainstorming and the Use of Synectics

Research studies by Carpenter [6] and Bellovary and Johnstone [4] indicate that most auditing firms, in fact, do not follow the preferred brainstorming techniques previously mentioned. Instead, most audit firms lack individual preparation prior to a session, devote as little as five minutes to these sessions, fail to structure sessions to encourage the quality over the quantity of ideas, and have insufficient involvement by all brainstorming participants and specialists.

To overcome these obstacles, professionals in fields other than accounting have found synectics to be an effective approach to brainstorming. Synectics is basically a creative problem-solving process developed as a refinement of brainstorming by William J. J. Gordon and George Prince during the 1960s [8]. These researchers examined both successful and unsuccessful brainstorming sessions and developed synectics as a process to refine and cull ideas, conquer resistance to brainstorming, and minimize mental blocks related to complex situations.

Our purpose in this paper is to propose this model as a structure for effective concurrent audit brainstorming sessions, since the complexity encountered in financial statement audits is consistent with Gordon and Prince's methodology. In other words, synectics is very applicable to concurrent SAS No. 99 and No. 109 sessions because it emphasizes the development or customization of "new" and unique brainstorming ideas for each client, despite the relatively similar nature of the audit process that must be followed. The steps involved in facilitating a session using this approach include:

1. Defining or describing the client's organization. An important factor in this step of the model is to include both expert and novice auditors.


For example, researchers note that experts (i.e., more experienced auditors) spend more time than novices (i.e., inexperienced auditors) analyzing a problem before actually trying to solve it [7, 13]. Therefore, a key part of this step is to schedule sufficient time before convening a session so that all participants can complete their brainstorming "homework." This should include a review of the most current financial information, a review of prior-year or interim financial statements, and a review of previous management letters and other documents. Many experts agree that this methodical but somewhat slow start may lead to long-run time savings, because fewer wrong ideas will be generated and greater problem-solving effectiveness results.

2. Use direct analogies to generate ideas. Analogies can lead auditors to notice parallels or interconnected fraud or material misstatement situations, that is, situations that might be occurring in the current client's organization but are recognized from knowledge of similar incidences in previous clients or cases. An important checklist to incorporate during this step of the model is contained in Appendix C of SAS No. 109. Facilitators can utilize the factors listed in this standard to stimulate discussion about similar situations in the current audit, since this list was collectively developed as a summary of business risks found in past audit failures or red flags noted by the ASB. However, researchers caution that this approach might lead to a functional fixedness of the mental set at this point in the process. In other words, more knowledgeable auditors (or experts) may focus too closely on similar audits and, as a result, generate fewer ideas than less experienced auditors about potential misstatement or fraud situations, due to their biases, preconceptions, and overconfidence [11, 12].

3. React to the analogies. How is this client's company similar or dissimilar to analogized cases of fraud or material misstatement? Here, research indicates that experts (or more knowledgeable auditors) will surpass novices (or less experienced auditors) due to their ability to recognize patterns [10]. Therefore, facilitators should draw on the knowledge of the auditors with the most extensive experience and memory of client history for these answers during this step of the model.

4. Explore conflicts and uncouple oxymorons. This step promotes the dissection of new ideas, which may at face value seem ridiculous or contradictory. Discussion should focus on questions about what client-specific controls, personnel, environmental, or risk-centric situations would or would not allow fraud or the generated scenarios to exist. In the process, the audit team should address significant areas of audit risk, areas subject to management's override of controls, unusual accounting procedures used by the client, and important control systems.

5. Develop any new analogies. Through Step 4 exploration, new analogies may surface. During this step, explore what additional factors, risks, or red flags of fraud were identified that made previous ideas feasible, implausible, or partially possible.

6. Reexamine the original client situation. Both SAS No. 99 and SAS No. 109 suggest that the resulting ideas should help determine the extent of testing for fraud and the establishment of materiality at both the financial statement and account levels. As a result, final financial statement scenarios that identify overall audit risks may require procedures and actions performed by a more experienced audit team, while assertion-based risks at the account level may target specific, directed procedures (like the analysis of product demand and inventory turnover used to explore inventory obsolescence).
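Purely as an illustration of how a team might record the output of Step 6 for later audit planning, the Python sketch below captures each generated scenario as a simple record linking it to its level, related assertion, and planned response. The structure, field names, and sample entries are hypothetical and are not prescribed by SAS No. 99, SAS No. 109, or the synectics literature.

```python
# Hypothetical record format for ideas emerging from a concurrent
# SAS No. 99 / No. 109 brainstorming session; illustrative only.
brainstormed_risks = [
    {
        "scenario": "Fictitious year-end sales recorded to meet targets",
        "source_standard": "SAS 99",      # fraud-focused idea
        "level": "account",               # financial statement vs. account level
        "assertion": "occurrence",
        "planned_response": "confirm a sample of fourth-quarter revenue transactions",
        "assigned_to": "experienced team member",
    },
    {
        "scenario": "Slow-moving inventory not written down",
        "source_standard": "SAS 109",     # misstatement-susceptibility idea
        "level": "account",
        "assertion": "valuation",
        "planned_response": "analyze product demand and inventory turnover",
        "assigned_to": "senior associate",
    },
]

# Example use: list every account-level idea with its planned directed procedure.
for risk in (r for r in brainstormed_risks if r["level"] == "account"):
    print(f"{risk['scenario']} -> {risk['planned_response']}")
```

Keeping such a record is one way to preserve the linkage between brainstormed risks, relevant assertions, and audit procedures that the concurrent model aims to strengthen.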


4. Concurrent Brainstorming Sessions for SAS No. 99 & 109

If you are considering concurrent brainstorming for SAS No. 99 and SAS No. 109, several targeted queries advanced by practitioners such as Hayes [9] or Bellovary and Johnstone [4] can also serve as planning guides for facilitators of these sessions. These include:
• Analyzing the current state of your brainstorming methods under SAS No. 99 and comparing them with the additional emphases needed under SAS No. 109.
• Looking at how your current brainstorming processes and the resulting content have worked for specific audits.
• Identifying which audit team member(s) use the final risk factors generated for audit plan development.
• Examining how the experience level of your audit staff has affected brainstorming efforts.
• Reviewing what fraud-focused specialists are available to these sessions, as well as efforts to incorporate personnel with specialized skills, such as IT staff.
• Evaluating unique experiences with current client business risks, formal risk assessment processes, and mitigating controls that fit the concurrent brainstorming model for both fraud susceptibility and material misstatements.
• Assessing how brainstorming ideas might affect audit budget modifications.
• Analyzing how the logistics of the audit team are best suited for coordination and communication with the lead auditor, given multi-location problems.
• Deciding who makes the final assessment of the risks identified for material misstatements and fraud susceptibility that were generated by the audit team.

5. Conclusion

Auditors no longer have a choice about whether or not to employ brainstorming techniques. The ASB's recent guidance under SAS No. 99 and SAS No. 109 encourages control testing and an in-depth understanding of the client environment. If carefully planned and facilitated, concurrent brainstorming sessions for both standards can do much more than occupy auditors' time, as was feared by a number of accountants at the initiation of this approach. In fact, the effective use of brainstorming basics and consideration of a synectics model can improve the continuous communication and risk assessment processes of the audit and the linkage of audit procedures with relevant assertions.

6. References

[1] Auditing Standards Board. Statement on Auditing Standards No. 99, Consideration of Fraud in a Financial Statement Audit, 2002.

[2] Auditing Standards Board. Statement on Auditing Standards No. 109, Understanding the Entity and Its Environment and Assessing the Risks of Material Misstatements, 2006.

[3] Beasley, M. S. and J. G. Jenkins. "A Primer for Brainstorming Fraud Risks," Journal of Accountancy, 2003, 196 (6), pp. 32-38.

[4] J. L. Bellovary and K. M. Johnstone, "Descriptive Evidence from Audit Practice on SAS No. 99 Brainstorming Activities," Current Issues in Auditing, 2007, Vol. 1, pp. A1-A11.

[5] J. Carney. "Risk Assessment Standards in Action," Journal of Accountancy, 2008, 205 (1), pp. 41-47.

[6] T. Carpenter. "Audit Team Brainstorming, Fraud Risk Identification, and Fraud Risk Assessment: Implications of SAS No. 99," Accounting Review, 2007, 82 (5), pp. 1119-1140.


[7] M. T. H. Chi, R. Glaser, and E. Rees. "Expertise in Problem Solving," In R. Sternberg (Ed.), Advances in the Psychology of Human Intelligence (Vol. 1, pp. 7-76), Hillsdale, N.J.: Erlbaum, 1982.

[8] Gordon, W. J. J., Synectics, New York: Harper & Row, 1961.

[9] A. A. Hayes, "Sneaking a Peek: How the New Auditing Standards on Risk Assessment May Affect Fraud Huddles," The Journal of Government Financial Management, 2006, 55 (3), pp. 68-70.

[10] A. Lesgold, H. Rubinson, P. Feltovich, R. Glaser, D. Klopfer, and Y. Wang, "Expertise in Complex Skill: Diagnosing X-Ray Pictures," In M.T.H. Chi, R. Glaser, and M.J. Farr (Eds.), The Nature of Expertise (pp. 311-342), Hillsdale, N.J.: Erlbaum, 1988.

[11] R. J. Sternberg, "Costs of Expertise," In K. A. Ericsson (Ed.), The Road to Excellence: The Acquisition of Expert Performance in the Arts and Sciences, Sports, and Games (pp. 347-354), Hillsdale, N.J.: Erlbaum, 1996.

[12] R. J. Sternberg and P. A. Frensch, "On Being an Expert: A Cost-Benefit Analysis," In R. R. Hoffman (Ed.), The Psychology of Expertise: Cognitive Research and Empirical AI (pp. 191-203), New York: Springer Verlag, 1992.

[13] J. F. Voss, T. R. Greene, T. Post, and B. C. Penner, "Problem Solving Skill in the Social Sciences," In G. Bower (Ed.), The Psychology of Learning and Motivation, pp. 165-213, New York: Academic Press, 1983.

[14] R. E. Wortmann and A. Henry, "Don't Get Stuck Behind the Eight Ball: Master New Auditing Standards Now," Pennsylvania CPA Journal, 2007, 77 (4), pp. 28-35.

Authors

LuAnn Bean, Ph.D., CPA, CIA, CFE, FCPA, is Professor of Accounting at Florida Institute of Technology in Melbourne, Florida.

Jeffrey Manzi, Ph.D., CFA, is Vice President of Securities and Insurance for Kaplan Financial Education in LaCrosse, Wisconsin.

INDEX

Author (Affiliation), Page

Ahmad, Ali (Design Interactive, Inc.), 117
Ahmad, Raheel (University of Southern Mississippi), 191
Alexander, Suraj (University of Louisville), 1, 10
Ali, Dia (University of Southern Mississippi), 191
Ali, Tarig (University of Central Florida), 70
Allen, Steve (Truman State University), 20
Al-Saleem, Saleh (Banha University), 85
Amiri, Shahram (Stetson University), 25
Ansuj, Angela (Federal University of Santa Maria), 131
Arbogast, Gordon (Jacksonville University), 31
Archer, Sandra (University of Central Florida), 209
Armacost, Robert (University of Central Florida), 209
Ateme-Nguema, Barthelemy (University of Quebec), 59
Bean, LuAnn (Florida Institute of Technology), 220
Bell, Michael (NASA), 37
Bourgeois, Brian (Naval Research Laboratory), 41
Brandon, Donald (Naval Research Laboratory), 41
Brown-LaVeist, C. (Morgan State University), 99
Brumback, Terry (The University of Alabama), 65
Burbridge, Jr., John (Elon University), 47
Carstens, Deborah (Florida Institute of Technology), 54
Chaney, Sade (Florida A&M University), 173
Combs, Alicia (University of Central Florida), 147
Crenshaw, John (S. Texas Project Nuclear Operating Co.), 75
Dao, Thien-My (University of Quebec), 59
Davis, Tiffani (Florida A&M University), 173
Demattos, Policarpo (North Carolina A&T State University), 61
Desai, Yogeeta (North Carolina A&T State University), 61
Doswell, J. (Morgan State University), 99
Drake, Dominique (Florida A&M University), 160
Elam, M.E. (The University of Alabama), 65
Ferguson, Michael (University of North Florida), 215
Finley-Hervey, Joycelyn (Florida A&M University), 204
Fonseca, Daniel (The University of Alabama), 65
Galal, Ashraf (Qatar University), 104
Gapinski, Andrzej (Penn State University), 80
Greene, C.M. (The University of Alabama), 65
Hargrove, S. Keith (Morgan State University), 99
Hassan, Hassan (Banha University), 85
Hernandez, Ed (California State University, Stanislaus), 182
Hurst, Tim (S. Texas Project Nuclear Operating Co.), 75
Ihezie, Deborah (Morgan State University), 99
Jajpuria, Shalini (University of Louisville), 1
Janjirala, Praveen (University of Louisville), 10
Jiang, Xiaochun (North Carolina A&T State University), 90
Johnston, Dana (University of Central Florida), 147
Karsak, Banu (Galatasaray University), 194
Keeton, LaShaveria (Florida A&M University), 153
Koehler, Jerry (University of South Florida), 95
Ksor, Jonathan (Mercer University), 137
Lee, Kendra (Florida A&M University), 153
Manzi, Jeff (Kaplan Financial Education), 220
Marin, Mario (University of Central Florida), 142
Mohamed, Tamer (British University in Egypt), 104
Morris, Ashley (Naval Research Laboratory), 41
Mullen, Robert (Southern Connecticut State University), 111, 114
Nahmens, Isabelina (Louisiana State University), 117
Ochi, Simeon (Goodrich Corporation), 215
Omachonu, Vincent (University of Miami), 37
Page, Robert (Southern Connecticut State University), 182
Park, Eui (North Carolina A&T State University), 61, 90
Parkhurst, Jesse (University of North Florida), 215
Pet-Armacost, Julia (University of Central Florida), 209
Petrosky, Al (California State University, Stanislaus), 182
Petrova-Bolli, Mihaela (University of Central Florida), 147
Philippe, Thomas (University of Houston, Clear Lake), 95
Price, David (Washburn University), 156
Rabelo, Luis (University of Central Florida), 142
Radharamanan, Ramachandran (Mercer University), 125, 131, 137
Rahimi, Shahram (University of Southern Mississippi), 191
Rice, BaaSheba (North Carolina A&T State University), 90
Rich, Coleman (Elon University), 47
Robinson, Federica (University of Central Florida), 142
Rockfield, Stephanie (Florida Institute of Technology), 54
Sala-Diakanda, Serge (University of Central Florida), 142
Schonning, Alexandra (University of North Florida), 215
Schulz, Jacob (Texas A&M University), 75
Sepulveda, Jose (University of Central Florida), 142
Somani, Imshaan (Mercer University), 125
Songmene, Victor (University of Quebec), 59
Suarez-Brown, Tiki (Florida A&M University), 153, 204
Sutterfield, J.S. (Florida A&M University), 160, 173
Thompson, Forrest (Florida A&M University), 204
Tucker, Eric (University of Central Florida), 147
Vo, Ha Van (Mercer University), 125, 137
Walton, Carsolina (Florida A&M University), 153
Watson, Gerald (North Carolina A&T State University), 90
Williams, Kaylene (California State University, Stanislaus), 182
Williams, Kent (University of Central Florida), 142
Yilmaz, Elgiz (Galatasaray University), 194
Yousef, Nabeel (University of Central Florida), 70
Zhan, Wei (Texas A&M University), 75
