NEW TECHNOLOGIES –

TRENDS, INNOVATIONS
AND RESEARCH

Edited by Constantin Volosencu










New Technologies – Trends, Innovations and Research
Edited by Constantin Volosencu


Published by InTech
Janeza Trdine 9, 51000 Rijeka, Croatia

Copyright © 2012 InTech
All chapters are Open Access distributed under the Creative Commons Attribution 3.0
license, which allows users to download, copy and build upon published articles even for
commercial purposes, as long as the author and publisher are properly credited, which
ensures maximum dissemination and a wider impact of our publications. After this work
has been published by InTech, authors have the right to republish it, in whole or part, in
any publication of which they are the author, and to make other personal use of the
work. Any republication, referencing or personal use of the work must explicitly identify
the original source.

As for readers, this license allows users to download, copy and build upon published
chapters even for commercial purposes, as long as the author and publisher are properly
credited, which ensures maximum dissemination and a wider impact of our publications.

Notice
Statements and opinions expressed in the chapters are those of the individual contributors
and not necessarily those of the editors or publisher. No responsibility is accepted for the
accuracy of information contained in the published chapters. The publisher assumes no
responsibility for any damage or injury to persons or property arising out of the use of any
materials, instructions, methods or ideas contained in the book.

Publishing Process Manager Ana Skalamera
Technical Editor Teodora Smiljanic
Cover Designer InTech Design Team

First published April, 2012
Printed in Croatia

A free online edition of this book is available at www.intechopen.com
Additional hard copies can be obtained from orders@intechopen.com


New Technologies – Trends, Innovations and Research, Edited by Constantin Volosencu
p. cm.
ISBN 978-953-51-0480-3








Contents

Preface IX
Part 1 Manufacturing Technologies 1
Chapter 1 Microassembly Using Water Drop 3
Takeshi Mizuno
Chapter 2 Design and Simulation-Based Optimization
of Cooling Channels for Plastic Injection Mold 19
Hong-Seok Park and Xuan-Phuong Dang
Chapter 3 Biologically Inspired Techniques
for Autonomous Shop Floor Control 45
Hong-Seok Park, Ngoc-Hien Tran
and Jin-Woo Park
Chapter 4 The Micro Injection Moulding Process
for Polymeric Components Manufacturing 65
R. Surace, G. Trotta, V. Bellantone and I. Fassi
Chapter 5 Recent Advances in
Multi-Dimensional Packing Problems 91
Teodor Gabriel Crainic, Guido Perboli
and Roberto Tadei
Part 2 Nanotechnologies 111
Chapter 6 Nano Research Trends of Critical Scientific
Fields Across Leading Worldwide Geo-Economic
Players and Their Spatial Interactions 113
Mario Coccia, Ugo Finardi and Diego Margon
Part 3 Robotics 137
Chapter 7 Improving Accuracy and Flexibility
of Industrial Robots Using Computer Vision 139
Petar Maric and Velibor Djalic

Part 4 Telecommunication 165
Chapter 8 A Framework for VoIP Testability and Functionality
Extension with Interactive Content Delivery 167
Janez Stergar, Janez Klanjšek and Sibila Vadlja
Part 5 Physics 189
Chapter 9 Application of Radiosity Simulation
Methods for Lighting Researches 191
Ruzena Kralikova and Katarina Kevicka
Part 6 Dental Medical Technologies 207
Chapter 10 Combined-Correlated Methods Applied
to the Analysis of Dental Prostheses Materials Quality 209
Diana Laura Cotoros and Mihaela Ioana Baritz
Part 7 Smart Homes 239
Chapter 11 Smart Homes as Service Platforms for
New Healthcare and Energy Services 241
Mikko Pynnönen and Mika Immonen
Part 8 Speech Technologies 259
Chapter 12 Recent Progress in Development of
Language Model for Slovak Large
Vocabulary Continuous Speech Recognition 261
Jozef Juhár, Ján Staš and Daniel Hládek
Part 9 Agriculture Technologies 277
Chapter 13 The Use of High-Speed Imaging Systems
for Applications in Precision Agriculture 279
Bilal Hijazi, Thomas Decourselle,
Sofija Vulgarakis Minov, David Nuyttens,
Frederic Cointault, Jan Pieters
and Jürgen Vangeyte
Part 10 Management 297
Chapter 14 Team Building for Implementation
of Concurrent Engineering Loops 299
Lidija Rihar, Janez Kušar,
Tomaž Berlec and Marko Starbek

Chapter 15 The Development Process as a Complex
and Interdisciplinary Team Based Challenge 327
Michael Bader and Mario Fallast
Chapter 16 Risk Management in Area of Security
and Protection of Health During the Work 347
Andrea Seňová and Katarína Čulková
Part 11 Technology Popularization 377
Chapter 17 Open and Integral Innovation on Tablet PC by
Popularized Advanced Media as Industrial Cradle 379
Makoto Takayama







Preface

At the beginning of the new millennium, the demand for innovation has increased.
Complex manufacturing, the miniaturization of components, and the development of the
Internet and healthcare all require new technologies provided by researchers who
are capable of introducing them. The book “New Technologies – Trends, Innovations and
Research” presents contributions made by researchers from around the world in
several modern fields of technology, serving as a valuable tool for scientists, researchers,
graduate students and professionals.
Practical applications in particular areas are presented, offering the capability to
solve problems resulting from economic needs and to perform specific functions. Some
chapters cover topics related to high technologies; others cover topics related to consumer
goods. The book mostly covers technological applications, including material
applications with complex machines as well as virtual applications such as computer
software, communications technology and business methods. It will make it possible
for scientists and engineers to become familiar with ideas from researchers
in several modern fields of activity. It provides interesting examples of practical
applications of knowledge, assists in the design process, and may bring changes to
the readers' own research areas. A collection of techniques that combine scientific resources is
provided for making the required products with the desired quality criteria. Strong
mathematical and scientific concepts were used in the applications, which meet the
requirements of utility, usability and safety. The technological applications presented in
the book have appropriate functions and may be exploited with competitive
advantages.
The book has 17 chapters, covering the following subjects: manufacturing
technologies, nanotechnologies, robotics, telecommunications, physics, dental medical
technologies, smart homes, speech technologies, agriculture technologies,
management and technology popularization.
In the domain of manufacturing technologies, the following contributions are
presented: a method of micro-assembly using water drop for electric components
characterized by combining surface tension with negative pressure produced by
vacuum; a systematic method for optimizing the cooling channels in order to obtain
the target mold temperature and to reduce cooling time and non-uniformity of
temperature distribution of the molded part; a study of the autonomous shop floor
control system with biologically inspired techniques, a solution for autonomous
adaptation to disturbances; a study of the micro injection molding process for the
manufacturing of polymeric micro-components and a study of the multi-dimensional
packing and loading problem. In the field of nanotechnologies, a study of
nano-research trends is presented.
In the field of robotics a chapter presents an algorithm for automatic identification of
the kinematic model of the manipulator’s geometry in order to increase its accuracy
and flexibility, based on a system with parallel optical axes used for measurement of
the 3D position of the tool's tip and/or the fixtures of workpieces; complete automation
is thereby achieved.
In the field of telecommunications a multimedia system application for voice over
Internet protocol, web cameras and IP phones is presented.
In the field of physics, an application of radiosity simulation methods for lighting
research is presented, the objective of which is a study of the quantitative and
qualitative parameters of illumination and the design of a lighting system with higher
performance.
In the field of dental medical technologies a study which analyzes advantages and
disadvantages of composite materials based upon resins, used as dental materials, is
presented.
In the smart homes section, a chapter introduces the emerging business area of
home-centered services, focusing on smart homes as service platforms for healthcare
and energy services.
In speech technologies some methods and principles used in Slovak language
modeling are presented, with application in the Slovak automatic transcription and
dictation system for the judicial system.
For precision agriculture, high-speed imaging systems are presented, with applications in
two specific domains, pesticide spraying and fertilizer spreading; the acquired data are
processed with an algorithm that determines the grain velocities and trajectories
necessary for characterizing centrifugal spreading.
In the field of management the following themes are presented: a study on the
organization of the teamwork, where a structure of a track-and-loop process of
concurrent product realization, suitable for small companies, is described; a study of
the collaboration of the parties involved during the development process of technical
products; and a study on some general problems of risk management.
Finally, there is a chapter about the popularization of advanced technology and
advanced information technology.

I am glad to have the opportunity to publish these contributions, made by researchers
from around the world, as appropriate technologies in a society which is becoming more
technological than ever. I would like to thank all the researchers who accepted the
invitation to contribute on the basis of their scientific potential, and I hope that the book
will have a good impact on the technological community.

Prof. Constantin Volosencu
'Politehnica' University of Timisoara
Romania

Part 1
Manufacturing Technologies

1
Microassembly Using Water Drop
Takeshi Mizuno
Saitama University
Japan
1. Introduction
The miniaturization of electronic devices has been progressing remarkably to match the
demand for high performance and multiple functions. In their production process, however,
handling of electric components becomes more and more difficult as they become smaller. A
promising approach to overcome such difficulty is the application of MEMS technology
(Segovia et al., 1998). Meanwhile, the basic properties of surface tension have been studied
extensively (De Gennes et al., 2002). Various attempts using surface tension have been reported
such as micro gas-liquid separator (Shikazono et al., 2010), micro motor (Kajiwara et al., 2007)
and bearing (Shamoto et al., 2005). As to the assembly of micro parts using liquid surface
tension, the self-alignment principle and characteristics have been studied (Sato et al., 2000). A
scheme for micromanipulation using capillary force has been proposed (Obata, et al. 2004).
This chapter presents a novel method of picking up a small electric component to the center
axis of a nozzle by using the liquid surface tension of a water drop (Takagi et al., 2008; Kato
et al., 2010; Haga et al., 2010). This method is characterized by combining surface tension
with negative pressure produced by vacuum, which is different from the approach by Bark
et al. (1998). The aim of this method is to assemble µm-order electric components with
mounting machines having common positioning accuracy. The basic properties of the
proposed microassembly are studied with a fabricated experimental device.
2. Principles of picking up
2.1 Conventional method
In mounting small electric components onto a substrate, picking up by using vacuum is
most widely used at present. The principle is explained in Fig. 1. The process is as follows:
a. A nozzle is made to touch a component on a tape and then a vacuum is created inside the
nozzle. The component is picked up by the suction force produced by the vacuum.
b. The component is carried to a prescribed position.
c. It is placed on the prescribed position of the substrate by breaking the vacuum.
One problem with this method is pick-up failure when the component is displaced
from the desired position, which is usually the center of the nozzle. Such misalignment is
unavoidable in actual mounting machines. It should be noted that the ill effect of
misalignment becomes more pronounced when assembling smaller components.


Fig. 1. Process of conventional assembly

Fig. 2. Process of assembly using water drop
2.2 Picking up with water drop
In the conventional method, misalignment causes fail of picking up because it makes
negative pressure for suction insufficient. As a countermeasure to such misalignment, a
method of picking up using water drop is presented in this section. Figure 2 shows the
process of picking up:
a. Liquid is stored in a nozzle.
b. A drop is made on the top of the nozzle by increasing the pressure inside the nozzle.
c. The drop is made to touch a component.
d. The component is picked up by raising the nozzle.
e. The drop is suctioned by creating a vacuum inside the nozzle so that the tip is held at the
top of the nozzle.
In stage (d), the component moves automatically to just below the bottom of the drop due to
the gravitational force and is held at the center axis of the nozzle. This is referred to as the
self-centering effect in the following. Due to this effect, even a component displaced from the
desired position can be picked up onto the center axis of the nozzle.
3. Experimental system
Figure 3 shows an outline of the experimental system. Objects to be picked up are placed on
a three-axis positioning stage (Fig. 4). A nozzle and its holder are fixed on a slider of the
positioner for rough positioning (Fig.5). Figure 6 shows the details of the nozzle. An ejector
is connected to the nozzle through the holder. It controls the pressure inside the nozzle.

[Components: compressor, nozzle, ejector, pressure sensor, microscope]

Fig. 3. Experimental system

Fig. 4. Three-axis positioning stage with a nozzle and its holder



Fig. 5. Nozzle and holder
[Nozzle details, panels (a) and (b): ditch, inner diameter, outer diameter]

Fig. 6. Details of the nozzle
Ultra pure water is used as the liquid to avoid the ill effects of contamination on tips and
assembled products. For observation, a microscope is used to measure the relative
displacement of the tip to the nozzle and the diameter of the water drop.

4. Picking up chip
4.1 Object for picking up
Figure 7 shows a targeted surface mount component. This is a chip resistor called “0402”,
which is an actual industrial component. The width w, depth d and height h are 0.4, 0.2
and 0.1 mm, respectively. The width of the electrical plate e is 0.1 mm. The coordinate axes X, Y
and Z are defined as shown in Fig.7.
[Axes X, Y, Z; w = 0.4 mm, d = 0.2 mm, h = 0.1 mm]

Fig. 7. Surface mount component

Step 1 Step 2

Step 3 Step 4
Fig. 8. Self-centering effect

4.2 Self-centering effect
Figure 8 demonstrates an actual process of picking up. Step 1 shows the initial state. In Step
2, a drop is produced at the top of the nozzle. A displacement of the tip from the center axis
of the nozzle is observed. In Step 3, the tip moves to just the bottom of the drop after the
nozzle descends for the drop to touch the tip. It is due to the self-centering effect. Then the
drop is suctioned by vacuum so that the tip is held at the top of the nozzle as shown in Step 4.
This result demonstrates well the self-centering effect that enables picking up even in the
presence of misalignment.
4.3 Effects of horizontal misalignment
Next, the effect of misalignment in the horizontal directions is investigated. Figure 9 shows
the definitions of the variables: the radius of the drop R and the displacement of the tip from
the nozzle center D_α (α = x, y). Picking up was carried out for various D_α.

Fig. 9. Definition of Parameters
The results are classified as shown in Fig.10:
Success: The tip is picked up successfully at the center axis of the nozzle due to the self-
centering effect.
Failure 1: The drop touches the surface of the stage on which the tip is placed. This
phenomenon is observed for large misalignment. When the nozzle is lifted up, the
tip is left on the stage because the drop breaks into two parts on the stage and on
the nozzle.
Failure 2: The drop touches only the electrical plate when the chip is displaced in the Y-axis
direction. After suction, the tip ends up standing against the end of the nozzle.
Failure 3: When the outer diameter of the nozzle is too small, the tip attaches to the side of
the nozzle even if the drop touches only the chip. This failure is avoidable if the
diameter of the nozzle is selected appropriately.
Figures 11 and 12 show the experimental results for various D_x and D_y, respectively. The
dotted line in these figures represents the limit D_max of misalignment, which is determined by
the geometrical constraints shown in Fig. 13. It is given by


Success Failure 1

Failure 2 Failure 3
Fig. 10. Classification of operation

$D_{max} = \sqrt{R^{2} - (R - h)^{2}} + \dfrac{l}{2}$ for $R \ge h$  (1)
where l is the depth d of the tip in Fig.11 and the width w in Fig.12.
These results show that picking up is carried out successfully when misalignment is less
than 0.2 mm. Since the common positioning accuracy of present mounting machines is
approximately 0.05 mm, the proposed method is applicable even if tips are displaced, and
also to future smaller tips. In addition, larger drops enable successful picking up for more
displaced tips.
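As a quick check of Eq. (1), the following short Python sketch (an illustration added here, not part of the original experiments) evaluates the geometrical limit D_max for the "0402" chip geometry of Fig. 7 over a few drop radii; the X-direction uses l = d = 0.2 mm and the Y-direction uses l = w = 0.4 mm.

import math

def d_max(R, h, l):
    # Eq. (1): D_max = sqrt(R^2 - (R - h)^2) + l/2, valid for R >= h
    # R: drop radius, h: chip height, l: chip dimension along the misalignment direction (all in mm)
    if R < h:
        raise ValueError("Eq. (1) assumes R >= h")
    return math.sqrt(R**2 - (R - h)**2) + l / 2.0

h = 0.1                                  # chip height of the "0402" component [mm]
for R in (0.2, 0.3, 0.4, 0.5):           # drop radius [mm]
    print(f"R = {R:.1f} mm: D_max(X) = {d_max(R, h, 0.2):.2f} mm, "
          f"D_max(Y) = {d_max(R, h, 0.4):.2f} mm")

For example, with R = 0.3 mm the limit along X is about 0.32 mm, which is consistent with the successful pick-ups reported above for misalignments below 0.2 mm.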
It is also found from the experimental results that Failure 1 and Failure 2 occur when the
misalignment approaches D_max. In addition, Failure 2 occurs only for Y-axis misalignment.
The reason may be the inhomogeneity of the surface of the chip in the Y-axis direction. This
indicates that the surface structure and shape affect the applicability of the proposed method.

[Plot: success/failure vs. radius of water drop R [mm] (horizontal axis) and deflection D_x in the X-direction [mm] (vertical axis)]

Fig. 11. Effects of deflection in X-direction
[Plot: Success/Failure 1/Failure 2 vs. radius of water drop R [mm] (horizontal axis) and deflection D_y in the Y-direction [mm] (vertical axis)]

Fig. 12. Effects of deflection in Y-direction
[Sketch: drop radius R, chip height h, chip half-width l/2, maximum deflection D_max]

Fig. 13. Maximum deflection

4.4 Picking up accuracy
The positioning accuracy of the chip relative to the nozzle was estimated. The relative displacements
of the center of gravity of the chip from the center axis of the nozzle were measured with an optical
microscope with a resolution of 1 µm in the Y-axis direction and an optical digital measure
with a resolution of less than 1 µm in the X-axis direction.
Figure 14 shows the measurement results. The average error of the 33 measurements is
24µm. It indicates that the proposed method enables picking up with an accuracy of 24µm
for chips displaced by up to 0.2mm.

[Histogram: number of occurrences vs. deflection at suction [µm], bins from 0 to 90 µm]

Fig. 14. Deflection at suction
4.5 Effects of vertical misalignment
The effect of misalignment in the vertical direction is investigated. In this experiment, the
reference position D_z = 0 is defined by the nozzle position just when the drop touches the
chip located at the center, as shown in Fig. 15. Figure 16 shows the results when the nozzle
descends by 0.05 mm and 0.1 mm from the reference. It also shows the distance at which the
drop touches the surface of the stage after deformation.
Figure 17 demonstrates the states for various misalignments. In Step 1, the nozzle just
touches the chip, which corresponds to D_z = 0. In Step 2, the nozzle descends a little from the
reference position; the ill effect of misalignment is absorbed by the deformation of the drop.
When the misalignment exceeds a certain limit, the drop starts to move to the side of the
nozzle (Step 3) and then touches the surface of the stage (Step 4), which is similar to
Failure 1.

The results indicate that misalignment of less than 0.1 mm can be absorbed by the deformation
of the drop. It is also found that this limit does not depend on the diameter of the drop,
whereas the limit of horizontal misalignment does depend on the diameter, as given by Eq. (1).

Fig. 15. Definition of deflection in Z-direction.
[Plot: Success/Failure 1 vs. radius of water drop R [mm] (horizontal axis) and deflection D_z in the Z-direction [mm] (vertical axis)]

Fig. 16. Effects of deflection in Z-direction
5. Picking up cylindrical object
In the previous section, it has been demonstrated that the method using water drop is
effective in picking up box-shaped objects. In this section, a cylindrical object is treated
(Kato et al., 2010). The self-centering effect is also expected.
5.1 Object for picking up
Figure 18 shows a new object. It is made by cutting a wire of a multicore cable. The X-, Y-,
and Z-axes are defined as shown in Fig.18.


Step 1 Step 2

Step 3 Step 4
Fig. 17. Vertical deflection.
[Dimensions: 0.20 mm and 0.55 mm; axes X, Y, Z]

Fig. 18. Cylindrical object.

5.2 Effects of misalignment
The results of picking up and suction are classified into three types. Two of them, Success
and Failure 1, are similar to those in the experiments on the 0402 chip. Another type, Failure
4, was observed:
Failure 4: With a large misalignment in the Y-direction, the drop touched the edge of the
cylindrical object, as shown in Fig. 19(b-2). Thus, the center of the nozzle was not
aligned with the center of the cylindrical object.

(a) Failure 1

(b) Failure 4
Fig. 19. Classification of results for picking up a cylindrical object.

Figure 20 shows the results of picking up with the nozzles of external diameters of 0.37 and
0.46 mm. The prescribed deflection was given (a) in the X-direction and (b) in the Y-
direction.
[Plot: Success/Failure 1 vs. diameter of water drop D [mm] (horizontal axis) and misalignment in the X-direction [mm] (vertical axis)]

(a) X-direction
[Plot: Success/Failure 1/Failure 4 vs. diameter of water drop D [mm] (horizontal axis) and misalignment in the Y-direction [mm] (vertical axis)]

(b) Y-direction
Fig. 20. Relation between the diameter of the hemispherical drop and the misalignment for
picking up a cylindrical object.
The lines in Fig. 20 indicate the maximum deflection, at which a drop contacts both the
cylindrical object and the stage at the same time. In the XZ section, the object is circular.
Therefore, the maximum deflection in the X-direction is similar to that of a spherical object
(Kato et al., 2010). Similarly, it can be considered that the maximum deflection in the Y-
direction is similar to that of the 0402 chip.
Figure 20(b) shows that a drop with an approximate diameter of 0.38 mm picked up the
cylindrical object with a residual deflection between the center of the cylindrical object and
the center of the nozzle; the initial misalignment between the cylindrical object and the nozzle
remained after the pickup. It is supposed that this is because the size of the drop was smaller
than the size of the cylindrical object.
To verify this expectation, a cylindrical object was picked up with misalignment in the Y-
direction using one of the nozzles, as shown in Fig. 21. For water drops with a diameter of less
than 0.45 mm, almost all the trials resulted in Failure 4. However, when the drop diameter was
larger than 0.45 mm, Success was observed more often as the drop size increased.
As a result, a drop whose size was about 80% of that of the cylindrical object was required for
obtaining the self-centering effect.




[Plot: Success/Failure 1/Failure 4 vs. diameter of water drop D [mm] (horizontal axis) and misalignment in the Y-direction [mm] (vertical axis)]




Fig. 21. Relation between the diameter of the drop and the misalignment in the Y-direction
for picking up a cylindrical object.
6. Conclusions
A new method of microassembly using water drop for µm-order electric components was
proposed. This method is characterized by combining surface tension with negative
pressure produced by vacuum.

An experimental apparatus was fabricated for its experimental study. Experiments
targeting actual industrial chips with a width of 0.4mm and a depth of 0.2mm were
carried out. It was confirmed that the proposed method enables picking up chips
displaced by up to 0.2 mm due to the self-centering effect. The average positioning error was
24µm even for such displaced objects. In addition, vertical misalignment can be absorbed
by the deformation of the liquid.
A cylindrical object was also picked up with the proposed method. It was shown that a drop
with a size of about 80% of that of the cylindrical object was required for obtaining the
self-centering effect.
This chapter described the experiments in which the working liquid was pure water.
Haga et al. (2010) have studied the effect of liquid surface tension by using isopropanol
(IPA) and its water mixture. The adsorption force of a drop was measured for IPA-water
mixtures. It was found that the adsorption force of a drop was sufficient to lift the microchip.
7. References
Bark, C., Binnenböse, T., Vögele, G., Weisener, T. & Widmann (1998). Gripping with Low
Viscosity Fluids, Proc. MEMS 98, pp.301-305.
De Gennes, P.-G., Brochard-Wyart, F. & Quéré, D. (2002). Gouttes, Bulles, Perles et Ondes,
ISBN 2-7011-3024-7.
Kajiwara, A., Suzuki, K., Miura, H. & Takanobu, H. (2007). Study on Actuation of Micro
Objects Using Surface Tension of Liquid Droplets (in Japanese), Proc. Conference on
Information, Intelligence and Precision Equipment, JSME No.07-7, pp.29-32.
Kato, Y., Mizuno, T., Takagi, H., Ishino, Y. & Takasaki, M. (2010). Experimental Study on
Microassembly by Using Liquid Surface Tension, SICE Journal of Control,
Measurement, and System Integration, Vol.3, No.5, pp.309-314.
Haga, T., Mizuno, T., Takasaki, M. & Ishino, Y. (2010). Microassembly Using Liquid Surface
Tension (2nd Report, Study on Working Fluids) (in Japanese), Trans. Japan Society of
Mechanical Engineers, Series C, Vol.76, No.761, pp.69-75.
Obata, K., Motokado, T., Saito, S. & Takahashi, K. (2004). A Scheme for Micro Manipulation
Based on Capillary force, Journal of Fluid Mechanics, pp.113-121.
Sato, K., Seki, T., Hata, S. & Shimokohbe, A. (2000). Principle and Characteristics of
Microparts Self-Alignment Using Liquid Surface Tension (in Japanese), Journal of the
Japan Society of Precision Engineering, Vol.66, No.2, pp.282-286.
Segovia, R., Schweizer, S., Vischer, P. & Bleuler, H. (1998). Contact Free Manipulation of
MEMS-Devices with Aerodynamics Effects, Proc. of the 4th International Conference
on Motion and Vibration Control (MOVIC’98), Vol.3, pp.1129-1132.
Shamoto, E., Komura, T. & Suzuki, N. (2005). Development of a New Fluid Bearing Utilizing
Surface Tension (in Japanese), Proc. 2005 JSPE (Japan Society of Precision Engineering)
Autumn Meeting, pp.875-876.
Shikazono, N., Azuma, R., Sameshima, T. & Iwata, H. (2010). Development of Compact Gas-
Liquid Separator Using Surface Tension, Proc. 2010 International Symposium on Next-
generation Air Conditioning and Refrigeration Technology, pp.1-6.

Takagi, T., Mizuno, T., Takasaki, M. & Ishino, Y. (2008). Basic Study on Microassembly
Using Surface Tension (1st Report, Principle and Basic Experiments) (in Japanese),
Trans. Japan Society of Mechanical Engineers, Series C, Vol.74, No.741, pp.1317-1321.
2
Design and Simulation-Based Optimization of
Cooling Channels for Plastic Injection Mold
Hong-Seok Park and Xuan-Phuong Dang
University of Ulsan
South Korea
1. Introduction
Injection molding has been the most popular method for making plastic products due to
high efficiency and manufacturability. The injection molding process includes three
significant stages: filling and packing stage, cooling stage, and ejection stage. Among these
stages, the cooling stage is a very important one because it mainly affects the productivity and
molding quality. Normally, 70%–80% of the molding cycle is taken up by the cooling stage. An
appropriate cooling channel design can considerably reduce the cooling time and increase
the productivity of the injection molding process. Furthermore, an efficient cooling
system which achieves a uniform temperature distribution can minimize the undesired
defects that influence the quality of the molded part, such as hot spots, sink marks, differential
shrinkage, thermal residual stress, and warpage (Chen et al., 2000; Wang & Young, 2005).
Traditionally, mold cooling design is still mainly based on practical knowledge and
designers’ experience. This method is simple and may be efficient in practice; however, this
approach becomes less feasible when the molded part becomes more complex and a high
cooling efficiency is required. This method does not always ensure the optimum design or
appropriate parameter values. Therefore, many researchers have proposed optimization
methods to tackle this problem. The choice of optimization method mainly depends on the
experience and subjective preference of each author. Therefore, finding appropriate
optimization techniques for optimizing cooling channels for injection molding is necessary.
This book chapter aims to show a design optimization method for the cooling channels of
plastic injection molds. Both conventional straight-drilled cooling channels and novel
conformal cooling channels are considered. The complexity of the heat transfer process in
the mold makes the analysis difficult when using the analytical method alone. Therefore,
using numerical simulation tools, or a combination of analytical and numerical simulation
approaches, is one of the intelligent choices applied to modern mold cooling design.
The contents of this book chapter are organized as follows. The cooling channel layouts and
the foundations of the heat transfer process in the plastic injection mold are presented
systematically. Physical and mathematical modeling of the cooling channels is also
introduced. This section provides the reader with the basic governing equations related to the
cooling process and shows how to build an appropriate simulation model. Subsequently, the
simulation-based optimization of cooling channels is presented. In this section, the state of
the art of cooling channel design optimization is reviewed, and then a systematic procedure
of design optimization and optimization methods based on simulation are proposed. Two
optimization approaches applied to cooling channel design optimization are suggested:
metamodel-based optimization and direct simulation-based optimization. The characteristics,
advantages, disadvantages, and scope of application of each method are analyzed. Finally,
two case studies are demonstrated to show the feasibility of the proposed optimization
methods.
2. Cooling channels layouts
2.1 Mold cooling system overview
Mold cooling process accounts for more than two-thirds of the total cycle time in the
production of injection molded thermoplastic parts. An efficient cooling circuit design
reduces the cooling time, and in turn, increases overall productivity of the molding process.
Moreover, uniform cooling improves part’s quality by reducing residual stresses and
maintaining dimensional accuracy and stability (see Fig. 1).

Fig. 1. Proper cooling design versus poor cooling design (Shoemaker, 2006)
A mold cooling system typically consists of the following items:
- Temperature controlling unit
- Pump
- Hoses
- Supply and collection manifolds
- Cooling channels in the mold
The mold itself can be considered as a heat exchanger, in which the heat from the hot
polymer melt is taken away by the circulating coolant.
Figure 2 illustrates the components of a typical cooling system.

Fig. 2. A typical cooling system in injection molding
2.2 Conventional straight-drilled cooling channels
The common types of straight-drilled cooling channels are parallel and series.
2.2.1 Parallel cooling channels
Parallel cooling channels are straight-drilled channels in which the coolant flows from a supply
manifold to a collection manifold, as shown in Fig. 3c. Due to the flow characteristics of the
parallel cooling channels, the flow rate along the various cooling channels may differ,
depending on the flow resistance of each individual cooling channel. This variation in flow
rate, in turn, causes the heat transfer efficiency of the cooling channels to vary from one
to another. As a result, cooling of the mold may not be uniform with a parallel cooling-
channel configuration.
2.2.2 Serial cooling channels
Cooling channels that are connected in a single loop from the coolant inlet to its outlet are
called serial cooling channels (see Fig. 3b). This type of cooling channel network is the most
commonly used in practice. By design, if the cooling channels are uniform in size, the
coolant can maintain its turbulent flow rate through its entire length. Turbulent flow enables
the heat to be transferred more effectively. For large molds, more than one serial cooling
channel may be required to assure a uniform coolant temperature and thus uniform mold
cooling.

Fig. 3. Conventional straight cooling channels
2.3 Conformal cooling channels
To obtain uniform cooling, the cooling channels should conform to the surface of the mold
cavity; such channels are called conformal cooling channels. The implementation of this new
kind of cooling channel for plastic parts with curved or free-form surfaces is based on the
development of solid free-form fabrication (SFF) technology. Alternatively, conformal cooling
channels can also be made as U-shaped milled grooves using a CNC milling machine
(Sun et al., 2004).

Fig. 4. A layout of conformal cooling channels
The conformal cooling channels are different from straight-drilled conventional cooling
channels. In conventional cooling channels, the free-form surface of the mold cavity is
surrounded by straight cooling lines machined by drilling. It is clear that the distance
between the cooling lines and the mold cavity surface varies, which results in uneven cooling
of the molded part. On the contrary, for the conformal cooling channels, the cooling paths
match the mold cavity surface well by keeping a nearly constant distance between the cooling
paths and the mold cavity surface (see Fig. 4). It was reported that this kind of cooling channel
gives a more even temperature distribution in the molded part than the conventional one.
Figure 5 shows an example of molds with conformal cooling channels made by the direct metal
laser sintering method. It was reported that these cooling channels not only ensure the high
quality of the product but also increase productivity by 20%.

Fig. 5. Molds with conformal cooling channels made by laser sintering (Mayer, 2009)
3. Physical and mathematical modeling of cooling channels
In the physical sense, cooling process in injection molding is a complex heat transfer problem.
To simplify the mathematical model, some of the assumptions are applied (Park & Kwon,
1998; Lin, 2002). The objective of mold cooling analysis is to find the temperature distribution
in the molded part and mold cavity surface during cooling stage. When the molding process
reaches the steady-state after several cycles, the average temperature of the mold is constant
even though the true temperature fluctuates periodically during the molding process because
of the cyclic interaction between the hot plastic and the cold mold. For the convenience and
efficiency in computation, a cycle-averaged temperature approach is used for the mold region and
a transient analysis is applied to the molded part (Park & Kwon, 1998; Lin, 2002; Rännar, 2008).
The general heat conduction problem involving transient heat transfer is governed by a
partial differential equation, while the cycle-averaged temperature distribution can be represented
by the steady-state Laplace heat conduction equation. The coupling of the cycle-averaged and one-
dimensional transient approach was applied since it is computationally efficient and
sufficiently accurate for mold design purpose (Qiao, 2006; Kennedy, 2008). Heat transfer in the
mold is treated as cycle-averaged steady state, and 3D FEM simulation was used for analyzing
the temperature distribution. The cycle-averaged approach is applied because after a certain
transient period from the beginning of the molding operation, the steady-state cyclic heat
transfer within the mold is achieved. The fluctuating component of the mold temperature is
small compared to the cycle-averaged component, so the cycle-averaged temperature
approach is computationally more efficient than a periodic transient analysis (Zhou & Li, 2005).
Heat transfer in the polymer (molding) is considered as a transient process. The temperature
distribution in the molding is modeled by the following equation:

$\dfrac{\partial T}{\partial t} = \alpha \dfrac{\partial^{2} T}{\partial z^{2}}$  (1)
The partial differential equation (1) can be solved conveniently by the finite difference method.
Due to the thermal contact resistance between the polymer and the mold, a convective
boundary condition (Kazmer, 2007) is applied instead of an isothermal boundary condition.

This boundary condition expresses the nature of the heat transfer at the mold-polymer interface
better than an isothermal boundary condition.

$h_{c} \left( T_{ps} - T_{m} \right) = -k_{p} \dfrac{\partial T}{\partial z}$  (2)
where T_ps and T_m are the molded part surface temperature and the mold temperature, respectively, and k_p is the thermal conductivity of the polymer.
The inverse of the heat transfer coefficient h_c is called the thermal contact resistance (TCR). It
is reported that the TCR between the polymer and the mold is not negligible. The TCR is a
function of the gap, the roughness of the contact surface, time, and the process parameters. The
reported values of the TCR vary widely (Yu et al., 1990; C-MOLD, 1997; Delaunay et al., 2000;
Sridhar & Narh, 2000; Le Goff et al., 2005; Dawson et al., 2008; Hioe et al., 2008; Smith et al.,
2008), and they are often obtained by experiment.
The heat flux across the mold-polymer interface is expressed as follows.

$q = -k_{p} \dfrac{\partial T}{\partial n}$  (3)
where n is the normal vector of the surface.
The cycle-averaged heat flux is calculated by the equation:

$\bar{q} = \dfrac{1}{t_{c}} \displaystyle\int_{0}^{t_{c}} q \, dt$  (4)
The required cooling time t_c is calculated as follows (Menges et al., 2001; Rao & Schumacher, 2004).

$t_{c} = \dfrac{s^{2}}{\pi^{2} \alpha} \ln \left[ \dfrac{4}{\pi} \left( \dfrac{T_{i} - T_{m}}{T_{e} - T_{m}} \right) \right]$  (5)
where α = k_p/(ρ c_p) is the thermal diffusivity of the polymer.
An example solution of the system of Eqs. (1) to (5) for a specific polymer and given process
parameters is depicted in Fig. 6.

Fig. 6. Typical temperature profile and heat flux of a given molding obtained by finite
difference method
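The profile of Fig. 6 can be reproduced qualitatively with a simple explicit finite-difference scheme for Eqs. (1)-(3). The Python sketch below is an illustration only: the polymer data, the contact heat transfer coefficient h_c (the inverse of the TCR) and the process settings are assumed generic values, not the ones behind Fig. 6.

import numpy as np

# Assumed, illustrative data (not the values behind Fig. 6)
k_p, rho, c_p = 0.2, 1.05e3, 1.5e3   # polymer conductivity [W/(m K)], density [kg/m^3], specific heat [J/(kg K)]
alpha = k_p / (rho * c_p)            # thermal diffusivity [m^2/s]
s = 2.0e-3                           # part thickness [m]
T_M, T_W, T_E = 230.0, 50.0, 90.0    # melt, cycle-averaged mold wall, ejection temperature [deg C]
h_c = 2.5e3                          # mold-polymer heat transfer coefficient = 1/TCR [W/(m^2 K)]

n = 51                               # nodes over the half thickness (symmetry at the mid-plane)
dz = (s / 2) / (n - 1)
dt = 0.4 * dz**2 / alpha             # explicit stability limit (Fourier number < 0.5)

T = np.full(n, T_M)                  # initial condition: uniform melt temperature
t = 0.0
while T.mean() > T_E:                # cool until the mean part temperature reaches T_E
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])   # Eq. (1)
    q = h_c * (T[0] - T_W)           # convective interface condition, Eq. (2) [W/m^2]
    Tn[0] = T[0] + alpha * dt / dz**2 * (T[1] - T[0]) - q * dt / (rho * c_p * dz)
    Tn[-1] = Tn[-2]                  # adiabatic symmetry plane at the part center
    T, t = Tn, t + dt

print(f"cooling time ~ {t:.1f} s, final wall heat flux ~ {q / 1e3:.1f} kW/m^2")

The interface heat flux computed in this way corresponds to Eq. (3) evaluated at the mold wall, and averaging it over the computed cooling time gives the cycle-averaged flux of Eq. (4).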

When the heat balance is established, the heat flux supplied to the mold and the heat flux
removed from the mold must be in equilibrium. Figure 7 shows a sketch of the configuration of
the cooling system and the heat flows in an injection mold. The heat balance is expressed by the following equation:
$\dot{Q}_{m} + \dot{Q}_{c} + \dot{Q}_{e} = 0$  (6)
where Q_m, Q_c and Q_e are the heat flux from the melt, the heat flux exchanged with the coolant, and the heat flux exchanged with the environment, respectively.

Fig. 7. Physical modeling of the heat flow and the sketch of cooling system
The heat from the molten polymer is taken away by the coolant moving through the cooling
channels and by the environment around the mold's exterior surfaces. The heat exchange
with the coolant takes place by forced convection, and the heat exchange with the
environment takes place by convection and radiation at the side faces of the mold and by heat
conduction into the machine platens. In practice, the mold exterior faces can be treated as
adiabatic because the heat lost through these faces is less than 5% (Park & Kwon, 1998; Zhou
& Li, 2005). Therefore, the heat exchange can be considered as solely the heat exchange
between the hot polymer and the coolant. The equation of energy balance is simplified by
neglecting the heat loss to the surrounding environment:

$\dot{Q}_{m} + \dot{Q}_{c} = 0$  (7)
Heat flux from the molten plastic into the coolant can be calculated as (Rao et al., 2002)

$\dot{Q}_{m} = \left[ c_{p} \left( T_{M} - T_{E} \right) + i_{m} \right] \rho \, \dfrac{s}{2} \, x \times 10^{-3}$  (8)
The heat flux from the mold exchanged with the coolant during the cooling time t_c amounts to (Park & Kwon, 1998):

$\dot{Q}_{c} = -t_{c} \times 10^{-3} \left[ \dfrac{1}{10^{-3}\,\alpha\,\pi\,d} + \dfrac{1}{k_{st}\,S_{e}} \right]^{-1} \left( T_{W} - T_{C} \right)$  (9)
In fact, the total time during which heat is transferred to the coolant should be the cycle time,
including the filling time t_f, the cooling time t_c and the mold opening time t_o. By comparing
the analysis results
obtained by the analytical method using formula (9) and the analysis results obtained by
commercial flow simulation software, it is found that formula (9) under-estimates the heat flux
value. On the contrary, if t_c in (9) is replaced by the sum of t_f, t_c and t_o, formula (9)
over-estimates the heat flux exchanged between the mold and the coolant. The reason is that
the mold temperature at the beginning of the filling stage and during the mold opening stage
is lower than during the rest of the molding cycle. The under-estimation or over-estimation is
considerable when the filling time and mold opening time are not a small portion of the
cooling time, especially for large parts with small thickness (Park & Dang, 2010). For this
reason, formula (9) is adjusted approximately based on an investigation of the mold wall
temperature of rectangular flat parts using both a practical analytical model and numerical
simulation.

$\dot{Q}_{c} = -\left( t_{f} + t_{c} + \dfrac{2}{3}\,t_{o} \right) \times 10^{-3} \left[ \dfrac{1}{10^{-3}\,\alpha\,\pi\,d} + \dfrac{1}{k_{st}\,S_{e}} \right]^{-1} \left( T_{W} - T_{C} \right)$  (10)
The influence of the cooling channel position on the heat conduction can be taken into
account by applying the shape factor S_e (Holman, 2002):

$S_{e} = \dfrac{2\pi}{\ln \left[ \dfrac{2 x \sinh \left( 2\pi y / x \right)}{\pi d} \right]}$  (11)
Heat transfer coefficient of water is calculated by (Rao & Schumacher, 2004):

$\alpha = \dfrac{31.395 \, Re^{0.8}}{d}$  (12)
where the Reynolds number

$Re = \dfrac{u \, d}{\nu}$  (13)
The cooling time of a molded part in the form of a plate is calculated as (Menges et al., 2001;
Rao & Schumacher, 2004):

$t_{c} = \dfrac{s^{2}}{\pi^{2} a} \ln \left[ \dfrac{4}{\pi} \left( \dfrac{T_{M} - T_{W}}{T_{E} - T_{W}} \right) \right]$  (14)
From formula (14), it can be seen that the cooling time depends only on the thermal
properties of the plastic, the part thickness, and the process conditions. It does not directly
depend on the cooling channel configuration. However, the cooling channel configuration
influences the mold wall temperature T_W, so it indirectly influences the cooling time.
By combining equations (7) to (14), one can derive the following equation:
$\left\{ \dfrac{1}{0.03139\,\pi\,Re^{0.8}} + \dfrac{1}{2\pi\,k_{st}} \ln\!\left[ \dfrac{2x\,\sinh(2\pi y/x)}{\pi d} \right] \right\} \dfrac{x\,\rho\,\frac{s}{2}\left[ c_{p} (T_{M} - T_{E}) + i_{m} \right]}{T_{W} - T_{C}} = \dfrac{s^{2}}{\pi^{2} a} \ln\!\left[ \dfrac{4}{\pi}\left( \dfrac{T_{M} - T_{W}}{T_{E} - T_{W}} \right) \right] + t_{f} + \dfrac{2}{3}\,t_{o}$  (15)

Mathematically, with preset T_M, T_E and T_W, predefined t_f and t_o, and the other thermal
properties of the material, equation (15) presents the relation between the cooling time t_c and
the variables related to the cooling channel configuration, namely the pitch x, the depth y and
the diameter d. In reality, the mold wall temperature T_W is established by the cooling channel
configuration, the predefined parameters T_M, T_E, t_f and t_o, and the thermal properties of the
material in equation (15). The value of T_W, in turn, results in the cooling time calculated by
formula (14).
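To make the use of Eqs. (7)-(15) concrete, the Python sketch below (an illustration with assumed data in SI units rather than the mixed mm/kJ units of the original design formulas) computes S_e and the coolant-side heat transfer coefficient, then iterates the heat balance to find T_W and the corresponding cooling time t_c for one channel configuration (x, y, d). The Dittus-Boelter correlation is used as a stand-in for the water correlation of Eq. (12).

import math

# Assumed, illustrative data in SI units (not the authors' values)
k_st, k_w = 45.0, 0.6                  # mold steel / water thermal conductivity [W/(m K)]
k_p, rho, c_p, i_m = 0.2, 1.05e3, 1.5e3, 0.0   # polymer conductivity, density, specific heat, latent heat
a = k_p / (rho * c_p)                  # polymer thermal diffusivity [m^2/s]
T_M, T_E, T_C = 230.0, 90.0, 25.0      # melt, ejection and coolant temperature [deg C]
s, t_f, t_o = 2.0e-3, 2.0, 4.0         # part thickness [m], filling and mold-opening time [s]
x, y, d = 20e-3, 15e-3, 8e-3           # channel pitch, depth and diameter [m]
u, nu, Pr = 1.0, 1.0e-6, 7.0           # coolant velocity [m/s], kinematic viscosity [m^2/s], Prandtl number

Re = u * d / nu                                            # Eq. (13)
alpha_w = 0.023 * Re**0.8 * Pr**0.4 * k_w / d              # coolant-side coefficient (Dittus-Boelter)
S_e = 2 * math.pi / math.log(2 * x * math.sinh(2 * math.pi * y / x) / (math.pi * d))  # Eq. (11)
R_th = 1 / (alpha_w * math.pi * d) + 1 / (k_st * S_e)      # thermal resistance per unit channel length

def t_cool(T_W):                       # Eq. (14): cooling time of a plate-like part
    return s**2 / (math.pi**2 * a) * math.log(4 / math.pi * (T_M - T_W) / (T_E - T_W))

def residual(T_W):                     # heat balance of Eqs. (7), (8) and (10), per unit channel length
    Q_melt = x * (s / 2) * rho * (c_p * (T_M - T_E) + i_m)             # heat released by the polymer [J/m]
    Q_cool = (t_f + t_cool(T_W) + 2 / 3 * t_o) * (T_W - T_C) / R_th    # heat removed by the coolant [J/m]
    return Q_melt - Q_cool

lo, hi = T_C + 1.0, T_E - 1.0          # bracket T_W between coolant and ejection temperature, then bisect
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if residual(mid) > 0 else (lo, mid)

print(f"mold wall temperature T_W ~ {mid:.1f} deg C, cooling time t_c ~ {t_cool(mid):.1f} s")

With this structure, x, y and d can be varied (or handed to an optimizer) and the resulting T_W and t_c compared against the design targets, which is exactly the role that Eq. (15) plays in the optimization discussed in the next section.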
4. Simulation-based optimization of cooling channels
4.1 Cooling system design and optimization: The state-of-the-art
For many years, the importance of the cooling stage in injection molding has drawn great
attention from researchers and mold designers, who have striven to improve the cooling
system of the plastic injection mold. This field of study can be divided into two groups:
• Optimizing conventional cooling channels (straight-drilled cooling lines).
• Finding new architecture for injection mold cooling channels (conformal cooling
channels).
The first group focuses on how to optimize the configuration of the cooling system in terms of
shape, size, and location of cooling lines (Tang et al., 1997; Park & Kwon, 1998; Lin, 2002; Rao
et al., 2002; Lam et al., 2004; Qiao, 2005; Li et al., 2009; Zhou et al., 2009; Hassan et al., 2010).
These studies used methods ranging from semi-analytical approaches to the finite difference,
boundary element (BEM), and finite element (FEM) methods. Rao (Rao et al., 2002) proposed
the optimization of cooling systems in injection molds by using an applicable analytical model
based on 2D heat transfer equations. Most studies focus mainly on numerical methods. Park
and Kwon (Park & Kwon, 1998) proposed an optimization method for cooling system design in
the injection molding process by applying a design sensitivity method; the heat transfer was
treated as a 2D problem. The boundary element method is preferred for solving the heat
transfer problem in mold cooling design (Qiao, 2005; Zhou et al., 2009). BEM is effective for
calculating heat transfer in the mold because: (a) the discretization associated with BEM does
not extend to the interior region of the mold, so there is no need for re-meshing when the
cooling channels are rearranged; (b) BEM reduces the input data due to the reduced number of
nodes, so the computation cost is lower than that of the finite element method. Although BEM
can be extended to 3D applications, as is a new feature of most commercial injection molding
software, these works are mainly based on 2D case studies that are not always practical.
Moreover, most of the case studies are simple.
For 3D analysis of heat transfer in an injection mold, 3D simulation based on professional or
commercial software is the common approach. Nowadays, commercial simulation software
can help the designer calculate the temperature distribution and the cooling time.
Nevertheless, these are only simulation tools, and by themselves they are often confined to
a single simulation; the optimization task needs a scientific strategy and methodology to
obtain a reliable result. Lam et al. (Lam et al., 2004) proposed an evolutionary approach for
cooling system optimization in plastic injection molding. In their study, a direct integration
between a GA optimizer and CAE software (Moldflow, a software package that uses BEM for
mold cooling analysis) is employed. This is the best
choice available today for cooling optimization of the injection mold. However, there are
limitations regarding the simulation time or computing cost, because a GA requires a large
number of function evaluations before reaching convergence. If the molded part is complex or
has a great number of elements, the computing cost is extremely high. The optimization
strategy also has some limits, which are discussed later.
The second group investigates how to build a cooling layout, namely conformal cooling
channels, that conforms to the mold cavity surface, and examines the effectiveness of this
cooling system. Solid free-form fabrication (SFF) or rapid prototyping (RP) techniques have
been applied to build this complex cooling system. It was reported that the cooling quality is
better than that of conventional cooling channels (Sachs et al., 2000; Xu et al., 2001;
Ferreira & Mateus, 2003; Dimla et al., 2005; Au & Yu, 2007; Gloinn et al., 2007; Rännar et al.,
2007; Park & Pham, 2009; Safullah et al., 2009). Prototyping technologies with metal powder
that can make molds with conformal cooling channels include selective laser sintering (SLS),
3D printing (3DP), electron beam melting, and laser engineered net shaping.
Classifying optimization techniques by search direction, there are two different families of
algorithms: gradient-based and non-gradient-based optimization techniques. The advantages
and disadvantages of these algorithms are well documented in the literature. Gradient-based
methods face difficulty when the number of variables increases, and they run the risk of being
trapped in a local extremum. On the contrary, GAs tend to reach the global optimum, but a
huge number of function evaluations, i.e. simulations, is required. If the cost of each
simulation is high, a GA is extremely expensive.
When the molded part or the cooling channels are complex, the analytical cooling design
formulas based on 1D or 2D analysis become inaccurate. The strength of general CAE tools
such as ANSYS and COSMOS, or of professional CAE tools for injection molding simulation
such as Moldflow, Moldex3D, and Timon-3D, has been exploited successfully in many recent
publications. ANSYS and COSMOS are based on the FEM for heat transfer analysis. Moldflow
uses the BEM for the 3D mold cooling problem, since only the outer surface of the mold needs
to be meshed. Moldex3D applies the finite volume method. This CAE tool uses a variety of
element shapes for analysis, and it is possible to create a fine wedge element mesh near the
mold surface and a coarse tetrahedral mesh in the center to reduce the number of elements
and improve the heat transfer calculation near the mold wall (Kennedy, 2008).
As previously mentioned, using commercial CAE software for cooling simulation is the
main tendency of recent practical studies when the molded parts or cooling channels are
complex. Sun et al. (Sun et al., 2002) proposed U-shaped conformal milled-groove cooling
channels for injection molds. Simulations were carried out to compare the cooling effect of this
kind of channel with that of straight cooling channels by using COSMOS, an analysis software
based on the FEM. As expected, conformal cooling channels offer a better cooling effect than
straight cooling channels. Similarly, other studies investigated the cooling effect of conformal
cooling channels made by rapid prototyping methods (Dimla et al., 2005; Au & Yu, 2007;
Gloinn et al., 2007; Rännar et al., 2007; Safullah et al., 2009). CAE simulations or experiments
show that conformal cooling channels are better than conventional straight cooling channels
in terms of heat transfer, and the mold temperature is distributed more evenly than with
straight cooling channels. However, most of these studies have not addressed the optimization
problem of conformal cooling channels.

In fact, mold cooling design aims not only at uniform cooling but also at minimizing the
cooling time to reach a target mold wall temperature. How far the cooling channels should be
from the mold cavity surface, and what the best coolant temperature is for complex cooling
channels, are still considerable problems that have not been resolved thoroughly. There is still
a lack of studies on how well the conformal cooling system performs and on how to optimize
its configuration in order to obtain the minimum cooling time, even cooling, and a reasonable
mold-making cost. In addition, cooling design is often based on the designer's experience and
intuition. When the molding geometry becomes more complex, experience-based and
trial-and-error approaches become time-consuming and less feasible (Tang et al., 1997; Lin,
2002; Lam et al., 2004; Qiao, 2006).
4.2 Simulation-based optimization approaches
Over the past decade, we have seen a tremendous growth in the use of CAE in mold design
and injection molding process analysis. By using CAE software for numerical simulation of
the injection molding process, it is possible to predict the quality of the molded part and to
detect potential problems at an early design stage. Since a computer simulation is faster
and cheaper than building prototype molds or performing real tests on injection molding
machines, it reduces the manufacturing cost and the time-to-market. Design optimization
always requires a design-evaluate-redesign loop. Therefore, the ability to quickly and
easily assess different configurations of the mold and process parameters accelerates the
search over a variety of process conditions and mold configurations to determine the optimum
design. Moreover, selecting an appropriate optimization methodology also reduces the
simulation time and increases the fidelity of the optimization process.
Since 2000, the numerical methods for injection molding simulation have become relatively
mature thanks to the great contribution of academic works, commercial CAE companies, and
the continuous development of computer hardware. Nowadays, CAE software for injection
molding is an indispensable tool for plastics designers. The mold design process is
demonstrated in Fig. 8. Before tool making and production, the designer must ensure that the
designed mold and the production process can produce the molded parts with minimum
defects, maximum productivity, and the best quality. To satisfy these conditions, an iterative
process including modification of the designed part or a change of the mold design is
required. After modification, the verification process is carried out again. If the result does not
meet the verification criteria, the loop must be continued.

Fig. 8. Mold design process

4.3 Optimization methods and systematic procedure for optimization
4.3.1 Direct numerical optimization methods
The terminology “direct numerical optimization methods” means that it is unnecessary to
use an indirect metamodel (see Section 4.3.2). In this case, a gradient-based method using
finite differences for calculating the derivatives, or a non-gradient-based algorithm such as a
GA, simulated annealing, or heuristic search, is applied directly. The optimization loop is
terminated when convergence is reached (an optimum solution is found) or when the
termination criteria are met. Because the computing cost of CAE simulation is usually high,
one of the termination criteria is often a predefined maximum number of simulations. The
systematic procedure of direct simulation-based optimization in injection molding is depicted
in Fig. 9.

Fig. 9. Systematic procedure of direct simulation-based optimization in injection molding
[Flowchart: determine the objective(s), design variables and design space → choose an arbitrary initial design → run the simulation → evaluate the objective(s) and constraint(s) and modify the design variables according to the chosen optimization technique → check convergence / termination criteria → obtain the optimal solution → fine-tune or refine the search (if necessary) → finish]

The number of iterations depends on the initial point and the optimization technique, and
different runs may give different optimum design points. Usually, the number of iterations
needed to find the optimum design point in gradient-based optimization is large; the more
design variables there are, the more iterations are needed. Also, a local optimum is sometimes
obtained rather than the global optimum, and there is no guarantee that the optimal solution,
or a solution close to the optimum, is found. When using non-gradient-based optimization
techniques, for example a GA, if the number of generations or function evaluations is low,
they tend to reach the neighborhood of the optimum point rather than the optimum point
itself. Consequently, the computational cost of the direct simulation-based optimization
method is extremely high, because each simulation in injection molding may last hours if the
number of elements is great.
The advantage of direct simulation-based optimization is that verification at the optimum
point is unnecessary. This differs from the metamodel-based optimization method that will be
presented in Section 4.3.2.
The framework for integrating the CAE simulation and a computer-based optimizer using
direct numerical optimization techniques is proposed as shown in Fig. 10. The framework
comprises two components: the optimizer (controller) component and the CAE component.
The CAE component is responsible for the analysis or simulation. The optimizer controller is
responsible for reading the output of the CAE component, evaluating the objective and
constraint functions, and modifying the inputs (design variables) according to the algorithm
of the selected optimization technique. All the processes in the framework are performed
automatically by instruction commands written in a scripting or programming language.
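As a minimal illustration of such a loop, the Python sketch below wraps the CAE run in a
hypothetical black-box function run_cae_simulation (the name and the dummy response are
assumptions, not the chapter's implementation), and a generic SciPy optimizer with a cap on
the number of function evaluations stands in for the gradient-based or GA techniques
described above.

import numpy as np
from scipy.optimize import minimize

def run_cae_simulation(x):
    """Hypothetical wrapper: write the design variables x to the CAE input
    file, launch the cooling simulation, parse the results file and return
    the response of interest. A dummy value is returned here."""
    return float(np.sum((x - 1.0) ** 2))   # placeholder for the mold temperature deviation

x0 = np.zeros(3)                            # arbitrary initial design
result = minimize(run_cae_simulation, x0, method="Nelder-Mead",
                  options={"maxfev": 100})  # cap on the number of expensive simulations
print("best design:", result.x, "objective:", result.fun)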

Fig. 10. Framework of CAE simulation and computer-based optimizer integration using
direct numerical optimization techniques
(Block diagram: the optimizer (controller) component reads the results file via a scripting
language, evaluates the objective functions, and modifies the design variables according to
the optimization technique (GA or gradient-based); the CAE component reads the input data
stored in an input file, performs the injection molding modeling and simulation, and writes a
results file containing the mold/part temperature, mold temperature deviation, required
cooling time, warpage (deflection), residual stress and other outputs, from which the
necessary data are queried and extracted.)

4.3.2 Simulation-based optimization using metamodeling techniques
The metamodeling, or approximation-based, optimization technique is a method in which
the objective functions are approximated by explicit functions, usually low-order
polynomials, with acceptable accuracy. This technique has several benefits, such as easy
coupling to the simulation program, an overview of the entire design space, and
computational efficiency (Papalambros, 2002; Park, 2007; Wang & Shan, 2007; Park &
Dang, 2010). The systematic procedure of the metamodel-based optimization technique
applied to injection molding is depicted in Fig. 11.

Fig. 11. Systematic procedure of metamodel-based optimization in injection molding
(Flow chart: Start → determine objective(s), design variables and design space → choose the
metamodel type → DOE or space sampling → generate the models and run the CAE
injection molding simulations → fit the metamodel (RSM, RBF, or ANN) → if the model is
not adequate, perform sequential improvement or change the metamodel type and repeat →
perform optimization → evaluate the optimum point → if the accuracy is not satisfactory,
repeat; otherwise finish.)

The metamodel can be a response surface model (RSM), a radial basis function (RBF), a
Kriging model, or an artificial neural network (ANN). Common DOE or space-sampling
techniques include full factorial design, D-optimal design, central composite design,
orthogonal arrays, Latin hypercube, and optimal Latin hypercube sampling. After running a
predefined number of simulations according to the DOE strategy (except for adaptive
metamodeling techniques), the approximation process is carried out and the metamodel is
built. The optimization is then performed on the mathematical approximation, or
metamodel. Because the objective and constraint functions are explicit equations, the
computing cost of finding the optimum solution is negligible compared to the total
simulation cost. The theory of metamodel-based optimization is outside the scope of this
chapter.
Unlike the direct simulation-based optimization method, an evaluation step is needed to
verify the fidelity of the metamodel at the “optimum” point, because there is always an error
between the metamodel and the real response. In other words, since metamodels are
approximate models, the predicted and actual values at the optimum point differ. If the
error between the responses obtained by prediction and by CAE simulation is acceptable to
the designer, the optimization process finishes successfully. Otherwise, a sequential
improvement step should be carried out.
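The Python sketch below walks through these steps under stated assumptions: the function
run_cae_simulation is a hypothetical placeholder for the expensive cooling analysis, the
design space is reduced to two variables for brevity, the DOE uses SciPy's Latin hypercube
sampler, and the metamodel is a full quadratic response surface fitted by least squares; it is
an illustration only, not the chapter's implementation.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

low = np.array([-120.0, -70.0])    # illustrative lower bounds (two design variables)
high = np.array([-90.0, -40.0])    # illustrative upper bounds

def run_cae_simulation(x):
    """Hypothetical CAE wrapper (placeholder): returns the mold temperature
    deviation for the design x."""
    return float(np.sum((x - 0.5 * (low + high)) ** 2))

def quad_features(X):
    """Full quadratic basis (1, x_i, x_i*x_j) used as the response surface model."""
    X = np.atleast_2d(X)
    d = X.shape[1]
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

# 1) DOE: Latin hypercube sampling of the design space
X_doe = qmc.scale(qmc.LatinHypercube(d=2, seed=1).random(15), low, high)

# 2) Run the expensive simulations at the sampled points
y = np.array([run_cae_simulation(x) for x in X_doe])

# 3) Fit the quadratic metamodel by least squares
beta, *_ = np.linalg.lstsq(quad_features(X_doe), y, rcond=None)
surrogate = lambda x: (quad_features(x) @ beta)[0]

# 4) Optimize the cheap surrogate instead of the simulation itself
res = minimize(surrogate, x0=0.5 * (low + high), bounds=list(zip(low, high)))

# 5) Verify the predicted optimum with one confirmation simulation
print("predicted:", res.fun, "confirmed:", run_cae_simulation(res.x))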
A framework for integrating the CAE simulation and a computer-based optimizer using
metamodeling techniques is proposed as shown in Fig. 12.

Fig. 12. Framework of CAE simulation and computer-based optimizer integration based on
metamodeling techniques
(Block diagram: the integration controller selects the values of the design variables based on
DOE techniques and writes an input file; the CAE component performs the injection
molding modeling and simulation with the fixed design parameters and constraints and
produces the outputs (responses) such as mold temperature, cooling time, stress, shrinkage
and warpage; the output data are stored and used by the metamodeling processor to build
the metamodel.)

There are two components in this framework: the integration controller component and the
CAE component. The CAE component is responsible for reading the input data, performing
the analysis or simulation, and writing the outputs to a text file. The integration controller is
responsible for the DOE (determining the combinations of design parameters) and for
synchronizing the integration process. The controller must wait until a simulation finishes
and ensure that all output data are stored safely before calling the next simulation or
iteration. The loop terminates when all simulations specified by the DOE technique have
been completed. The metamodel is then built and verified, and the optimization is
subsequently carried out on the metamodel.
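As a rough illustration of this synchronization requirement, the Python sketch below
assumes a hypothetical command-line solver cae_solver and simple text-file formats (both
are assumptions); the blocking call guarantees that each simulation finishes and its outputs
are stored before the next DOE point is launched.

import csv
import subprocess
from pathlib import Path

def run_one_simulation(design_point, run_id):
    """Write the input file, launch the CAE job, and block until it finishes.
    The solver name 'cae_solver' and the file formats are assumptions."""
    in_file = Path(f"run_{run_id}_input.txt")
    out_file = Path(f"run_{run_id}_output.txt")
    in_file.write_text(" ".join(str(v) for v in design_point))
    # subprocess.run blocks until the external job terminates, which provides
    # the synchronization the integration controller needs.
    subprocess.run(["cae_solver", str(in_file), str(out_file)], check=True)
    return out_file.read_text().split()

design_points = [[10.0, 40.0], [12.0, 45.0], [14.0, 50.0]]   # from the DOE step
with open("doe_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for i, point in enumerate(design_points):
        responses = run_one_simulation(point, i)
        writer.writerow(point + responses)   # store safely before the next run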
4.4 Software implementation
Software that satisfies the functional requirements shown in the framework can be used to
implement the integration and optimization processes. This software must support
automation: all tasks are programmed and performed automatically, without interaction
from the engineering designer while the program is executed. Any CAE software that
supports programming and input/output commands can be used to implement the
proposed framework. In practice, there are not many injection molding simulation packages.
Moldflow is a popular CAE tool for injection molding simulation that offers an API for
automating most of the modeling, analysis, and simulation tasks.
The choice of software for implementing the integration framework depends on the
available tools and the preferences of the engineering designers. Any standard programming
language such as Visual Basic, Visual C, or MATLAB can be used to implement the
connection between the proposed integration controller and the CAE component, to control
the integration loop, to generate the metamodel, and to solve the optimization problem. The
iSight software is also a powerful tool that helps the designer integrate the optimizer and the
injection molding simulation software. There are therefore several options among the
previously introduced tools for building the implementation. In this work, the combination
of MATLAB and Moldflow, or of iSight and Moldflow, was used to build the integration
frameworks. The important point is that an API program must be coded using the Visual
Basic Scripting language; this API program calls most of the functions of Moldflow to
perform the modeling and simulation tasks.

Fig. 13. Applying computer-aided design and CAE simulation in cooling design and analysis
For conformal cooling channels, the implementation procedure starts by calculating the
position of the cooling channels using formula (15) and Fig. 7. Besides solving the

explicit equations to find a good initial cooling channel configuration, CAD modeling and
CAE simulation and analysis are important tools to support the design process and to
fine-tune and verify the result. The systematic procedure of applying computer-aided design
and CAE simulation to cooling channel design optimization is as follows (see Fig. 13). First,
based on the results obtained from the analytical step, approximate cooling channels are
modeled by projecting the cooling channel layout from a plane onto the offset surfaces of the
molded part. Subsequently, the coordinates of the cooling channels are generated and stored
in a text file. Next, the conformal cooling channels are imported into the CAE environment
and meshed automatically by an Application Programming Interface (API) via the Visual
Basic Scripting (VBS) language. After that, the cooling simulation is performed to obtain the
exact average mold temperature and the temperature distribution of the molded part.
Finally, the temperatures of all elements (or of the considered elements) are queried and
stored in a text file to provide data for the optimization process. The third through last steps
are repeated until the optimality conditions are satisfied. This process can be controlled
automatically by an optimizer programmed in MATLAB and VBS.
5. Case studies
5.1 Case study 1: Optimization of conventional straight cooling channels
Based on the two proposed algorithms and the two frameworks presented in Section 4, there
are two ways to implement the optimization method for designing optimum straight cooling
channels. As previously mentioned, there is no theoretical method to prove that a specific
optimization technique is better than the others in all circumstances; the existence of many
optimization methods is evidence of this. The implementation of the optimization method
for designing an optimum cooling system is illustrated with a typical design example, shown
in Fig. 14. The molded part is a box made of PP with dimensions of 400×250×150 mm and a
wall thickness of 2.5 mm.

Fig. 14. A plastic box used as a typical example for cooling design optimization
The cooling channel configuration is shown in Fig. 14. The positions of the cooling lines are
determined by the coordinates of points P1 to P4, owing to the symmetry of the cooling
channels (see Fig. 15). The mold material is P20 steel. All material properties of the PP plastic
and the P20 steel are taken from the material database of the Moldflow software. Because of
the symmetry, there are 11 design variables, as shown in Table 1.

No.  Variable  Lower range  Upper range  Unit
1    x1        -120         -90          mm
2    y1        -70          -40          mm
3    z1        -50          -20          mm
4    z2        40           70           mm
5    x3        -190         -160         mm
6    y3        110          140          mm
7    z3        30           60           mm
8    y4        35           65           mm
9    z4        120          150          mm
10   d         10           14           mm
11   Tw        15           22           °C
Table 1. Design variables and their ranges
Five optimization techniques (genetic algorithm (GA), a gradient-based optimization
technique, response surface model (RSM), radial basis function (RBF), and neural network
(NN)) from the two main groups of optimization methods were implemented. The
optimization problem is stated as follows:
Objective:
Minimize the mold temperature deviation
Subject to:
49.5 °C ≤ average mold temperature ≤ 50.5 °C
Side constraints as shown in Table 1.
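Purely as an illustration, the Python sketch below sets this problem up for an evolutionary
optimizer: cooling_simulation is a hypothetical placeholder for the Moldflow cooling
analysis, the temperature constraint is handled by a simple penalty, and SciPy's differential
evolution stands in for the GA used in the chapter.

import numpy as np
from scipy.optimize import differential_evolution

# Design-variable bounds from Table 1:
# x1, y1, z1, z2, x3, y3, z3, y4, z4 (mm), d (mm), Tw (degC)
bounds = [(-120, -90), (-70, -40), (-50, -20), (40, 70), (-190, -160),
          (110, 140), (30, 60), (35, 65), (120, 150), (10, 14), (15, 22)]

def cooling_simulation(x):
    """Hypothetical CAE wrapper: returns (mold temperature deviation,
    average mold temperature) for a candidate cooling-channel layout."""
    return float(np.std(x)), 50.0 + 0.01 * float(np.sum(x))   # placeholder values

def penalized_objective(x):
    deviation, avg_temp = cooling_simulation(x)
    # Constraint from the problem statement: 49.5 <= average mold temperature <= 50.5
    violation = max(0.0, 49.5 - avg_temp) + max(0.0, avg_temp - 50.5)
    return deviation + 1e3 * violation

result = differential_evolution(penalized_objective, bounds,
                                maxiter=30, popsize=10, seed=0)
print(result.x, result.fun)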









Fig. 15. Coordinates of the cooling lines
The optimum values of the design variables, constraints, and objective function for the five
optimization techniques are shown in Table 2. The temperature distribution for the optimum
case obtained with the GA is shown in Fig. 16; the results show that the temperature is
distributed evenly.

No.  Variable  Lower range  Upper range  Optimum GA  Gradient-based  Optimum RSM  Optimum RBF  Optimum NN then GA
1    x1        -120         -90          -119.0      -113.3          -120.0       -120.0       -116.5
2    y1        -70          -40          -68.1       -70.0           -70.0        -70.0        -69.1
3    z1        -50          -20          -38.0       -21.1           -39.4        -37.1        -28.8
4    z2        40           70           69.8        70.0            70.0         70.0         69.8
5    x3        -190         -160         -172.3      -160.0          -186.4       -175.6       -168.5
6    y3        110          140          123.5       110.0           140.0        118.1        113.4
7    z3        30           60           31.0        30.0            32.7         30.0         34.7
8    y4        35           65           40.3        35.0            45.6         50.6         37.3
9    z4        120          150          141.3       120.0           150.0        133.8        134.4
10   d         10           14           11.7        10.0            13.7         12.3         10.7
11   Tw        15           22           16.7        15.2            15.0         18.6         15.0
Table 2. Optimum results of different optimization methods
It can be seen that the final optimization results of the five optimization techniques are
slightly different from each other. In general, however, they tend to converge towards the
real optimum point, and the differences in the final results are not large. The slight
differences originate from the characteristics of each optimization method and technique as
well as from the termination conditions. The values of the objective function, constraint, and
other responses are listed in Table 3. In this example, the direct gradient-based simulation
optimization method appears to be trapped in a local minimum; consequently, it yields the
largest value of the objective function.

No.  Response (output)                 Optimum GA  Gradient-based  Optimum RSM  Optimum RBF  NN & GA
1    Average mold temperature (°C)     50.5        50.3            50.5         50.5         49.6
2    Mold temperature deviation (°C)   7.5         10.0            8.7          8.5          9.1
3    Required cooling time (s)         11.9        12.0            11.8         11.9         11.9
Table 3. The values of the responses obtained by the different optimization techniques

Fig. 16. Distribution of mold temperature with optimum cooling channels

5.2 Case study 2: Optimization of conformal cooling channels
To demonstrate the applicability and feasibility of conformal cooling channel optimization, a
typical case study is presented. The molded part is a plastic car fender with bounding-box
dimensions of 348×235×115 mm and a thickness of 2.5 mm, as shown in Fig. 17. The polymer
is Noryl GTX979, which can withstand temperatures of up to 180°C in the online painting
process. The material properties of the polymer, mold, and coolant are shown in Table 4.

Material                          Water (25°C)  Steel (P20)  Plastic
Density (kg/m³)                   996           7800         930
Specific heat (J/(kg·K))          4177          460          4660
Thermal conductivity (W/(m·K))    0.615         29           0.25
Kinematic viscosity (mm²/s)       0.801         -            -
Table 4. Material properties

Fig. 17. A plastic car fender with free-form shape
The molding parameters recommended by the material manufacturer are shown in Table 5.
The filling time was obtained by performing a filling simulation with the Moldflow software.
The cooling time was calculated analytically using formula (11). The mold opening time was
estimated as the ratio of the mold opening distance to the mold opening velocity. The cooling

Parameter                                    Value  Unit
Melt temperature T_M                         305    °C
Ejection temperature T_E                     247    °C
Average mold temperature T_W                 100    °C
Filling time t_f (obtained by simulation)    1.9    s
Cooling time t_c                             6.3    s
Mold opening time t_o                        3      s
Velocity of cooling water u                  1.0    m/s
Temperature of cooling water T_C             25     °C
Table 5. Molding parameters

channels are machined with a milling machine. Based on the required length of the milling
tool needed to machine the cooling groove, the cooling channel diameter was selected as
12 mm. The range of the pitch x was chosen from 4d to 5d because of the high ejection
temperature and the requirement to reduce the number of cooling paths. By applying the
solver tools, the results of the analytical method shown in Table 6 were obtained.

Parameter                         Value  Unit
Cooling channel diameter d        12     mm
Cooling channel pitch x           57.7   mm
Cooling channel depth y           45.2   mm
Velocity of cooling water u       1.0    m/s
Reynolds number Re                11952  -
Total flow rate of coolant        40.7   l/min
Heat transfer coefficient α       4667   W/(m²·K)
Table 6. The results of optimization obtained from analytical method
The results obtained by the analytical method (equation 15) were used to deploy the
conformal cooling channels as an initial design. Subsequently, the Moldflow software was
used to perform the cooling analysis. The simulation results for the first run showed that the
average mold cavity surface temperature was 98.6°C, which is close to the target mold
temperature (T_W = 100°C). To approach the target mold temperature, the pitch x of the
cooling channels was fixed and the depth y on both the core side and the cavity side was
adjusted. Linear interpolation was used as a strategy to reduce the number of simulation
iterations.
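A minimal Python sketch of this interpolation strategy follows, assuming two earlier cooling
analyses are available; the first-run temperature of 98.6°C is taken from the text, while the
remaining numbers are purely illustrative and are not the chapter's data.

def interpolate_depth(y1, temp1, y2, temp2, target_temp):
    """Linearly interpolate the channel depth y (mm) expected to give the
    target average mold temperature from two previous simulation results."""
    slope = (temp2 - temp1) / (y2 - y1)          # degC per mm of depth
    return y1 + (target_temp - temp1) / slope

# Two trial depths and the average mold temperatures they produced (illustrative)
print(interpolate_depth(44.0, 98.6, 48.0, 102.1, 100.0))   # -> about 45.6 mm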

Fig. 18. Average temperature distribution of the part
The final results were obtained rapidly after performing three more simulations. The
average mold temperature is 100.4°C. The maximum temperature at the middle layer of the
part is 221.2°C at the end of the cooling time, which allows the molded part to be ejected
safely without distortion. The temperature over the part is distributed quite uniformly even
though the free-form shape of the part is complex (see Fig. 18). The simulation result shows
that the time to freeze the part to the ejection temperature is 6.1 seconds, which agrees well
with the cooling time of 6.3 seconds calculated by formula (11). This means that the cooling design

results satisfy the optimality conditions. The optimum values of the distances from the
cooling channels to the part surface are 46.0 mm and 46.9 mm for the core side and cavity
side of the mold, respectively.
We compared the cooling effect of an un-optimized design and the optimized design and
found that the range between the maximum and minimum temperatures for the optimized
conformal cooling channels is always smaller than that of the un-optimized design (see
Fig. 19 for an example). In addition, the warpage obtained with the best straight cooling
channels was compared with that of the conformal channels. The simulation result shows
that the conformal cooling channels reduce warpage by 15.7% in this case study (see Fig. 20).
The effect of conformal cooling channels varies with the complexity of the molded part; in
general, they always offer more uniform cooling and lower warpage than straight cooling
channels. These are the advantages of conformal cooling channels.

Fig. 19. Comparison of temperature profile between un-optimized and optimized conformal
cooling channels


Fig. 20. Comparison of warpage between conventional straight cooling channel and
conformal cooling channel
(Values annotated in Fig. 19: (a) un-optimized design: minimum temperature 66.5°C,
maximum 138.2°C, average 106.9°C, standard deviation 14.47; (b) optimized design:
minimum temperature 70.5°C, maximum 132.8°C, average 107.0°C, standard deviation 13.81.)

6. Conclusion
In summary, the foundations of the heat transfer process occurring in a plastic injection mold
were systematically presented in this book chapter. The physical and mathematical modeling
of the cooling channels was introduced, giving the reader the basic governing equations
related to the cooling process and showing how to build an appropriate simulation model.
Subsequently, simulation-based optimization of the cooling channels was presented. The
state of the art of cooling channel design optimization was also reviewed. Then, a systematic
design optimization procedure and simulation-based optimization methods were proposed.
Two optimization approaches for cooling channel design were suggested: metamodel-based
optimization and direct simulation-based optimization. The characteristics, advantages,
disadvantages, and scope of application of each method were analyzed. Two case studies,
on conventional straight-drilled and on conformal cooling channels, were presented to show
the feasibility of the proposed optimization methods.
Cooling design optimization of injection molds for complex, free-form molded parts requires
complicated analysis steps, an optimization strategy, and appropriate computer-aided tools.
This book chapter presented a systematic method for optimizing the cooling channels in
order to reach the target mold temperature and to reduce the cooling time and the
non-uniformity of the temperature distribution of the molded part. To increase
computational effectiveness, the analytical method and the simulation-based method were
used successively.
Regarding the fidelity of the optimization result, the support of CAE tools, an API
programming language, and the combination of optimization techniques are important to
increase the precision of the analysis results and to reduce the simulation cost. The proposed
methods have been tested in various practical cases, of which the plastic car fender and the
plastic box are typical examples. The results obtained from the case studies show that the
proposed cooling channel optimization methods can be applied successfully, requiring less
time and effort from designers while improving part quality and the productivity of plastic
production.
7. Acknowledgment
This work was supported by the Research Fund of the University of Ulsan, Korea (2011).
8. References
Au, K. & Yu, K. (2007). A scaffolding architecture for conformal cooling design in rapid
plastic injection moulding. The International Journal of Advanced Manufacturing
Technology 34(5), pp. 496-515.
C-MOLD (1997). User's manual. New York, AC Technology.
Chen, X.; Lam, Y. C. & Li, D. Q. (2000). Analysis of thermal residual stress in plastic injection
molding. Journal of Materials Processing Technology 101(1-3), pp. 275-280.
Dawson, A.; Rides, M.; Allen, C. R. G. & Urquhart, J. M. (2008). Polymer-mould interface
heat transfer coefficient measurements for polymer processing. Polymer Testing
27(5), pp. 555-565.

Delaunay, D.; Bot, P. L.; Fulchiron, R.; Luye, J. F. & Regnier, G. (2000). Nature of contact
between polymer and mold in injection molding. Part I: Influence of a non-perfect
thermal contact. Polymer Engineering & Science 40(7), pp. 1682-1691.
Dimla, D. E.; Camilotto, M. & Miani, F. (2005). Design and optimisation of conformal cooling
channels in injection moulding tools. Journal of Materials Processing Technology 164-
165, pp. 1294-1300.
Ferreira, J. C. & Mateus, A. (2003). Studies of rapid soft tooling with conformal cooling
channels for plastic injection moulding. Journal of Materials Processing Technology
142(2), pp. 508-516.
Gloinn, T. O.; Hayes, C.; Hanniffy, P. & Vaugh, K. (2007). FEA simulation of conformal
cooling within injection moulds. International Journal of Manufacturing Research 2007
2(2), pp. 162-170.
Hassan, H.; Regnier, N.; Le Bot, C. & Defaye, G. (2010). 3D study of cooling system effect on
the heat transfer during polymer injection molding. International Journal of Thermal
Sciences 49(1), pp. 161-169.
Hioe, Y.; Chang, K.-C.; Zuyev, K.; Bhagavatula, N. & Castro, J. M. (2008). A simplified
approach to predict part temperature and minimum “safe” cycle time.
Polymer Engineering & Science 48(9), pp. 1737-1746.
Holman, J. P. (2002). Heat transfer, McGraw-Hill Book Company.
Kazmer, D. O. (2007). Injection mold design engineering. Munich, Carl Hanser Verlag.
Kennedy, P. K. (2008). Practical and scientific Aspects of injection molding simulation.
Materials Technology, Eindhoven University of Technology. Doctoral.
Lam, Y. C.; Zhai, L. Y.; Tai, K. & Fok, S. C. (2004). An evolutionary approach for cooling
system optimization in plastic injection moulding. International Journal of Production
Research 42(10), pp. 2047 - 2061.
Le Goff, R.; Poutot, G.; Delaunay, D.; Fulchiron, R. & Koscher, E. (2005). Study and modeling
of heat transfer during the solidification of semi-crystalline polymers. International
Journal of Heat and Mass Transfer 48(25-26), pp. 5417-5430.
Li, X.-P.; Zhao, G.-Q.; Guan, Y.-J. & Ma, M.-X. (2009). Optimal design of heating channels for
rapid heating cycle injection mold based on response surface and genetic
algorithm. Materials & Design 30(10), pp. 4317-4323.
Lin, J. C. (2002). Optimum cooling system design of a free-form injection mold using an
abductive network. Journal of Materials Processing Technology 120(1-3), pp. 226-236.
Mayer, S. (2009). Optimised mould temperature control procedure using DMLS. Whitepaper,
EOS GmbH Electro Optical Systems, Robert-Stirling-Ring 1, D-82152 Krailling/Munich,
www.eos.info.
Menges, G.; Michaeli, W. & Mohren, P. (2001). How to make injection molds. Munich, Hanser
Publishers.
Papalambros, P. Y. (2002). The optimization paradigm in engineering design: promises and
challenges. Computer-Aided Design 34, pp. 939-951.
Park, G.-J. (2007). Analytic methods for design practice. London, Springer.
Park, H. & Pham, N. (2009). Design of conformal cooling channels for an automotive part.
International Journal of Automotive Technology 10(1), pp. 87-93.

Park, H. S. & Dang, X.-P. (2010). Structural optimization based on CAD-CAE integration and
metamodeling techniques. Computer-Aided Design 42 (10), pp. 889-902.
Park, H. S. & Dang, X. P. (2010). Optimization of conformal cooling channels with array of
baffles for plastic injection mold. International Journal of Precision Engineering and
Manufacturing 11(6), pp. 1-12.
Park, S. J. & Kwon, T. H. (1998). Optimal cooling system design for the injection molding
process. Polymer Engineering & Science 38(9), pp. 1450-1462.
Qiao, H. (2005). Transient mold cooling analysis using BEM with the time-dependent
fundamental solution. International Communications in Heat and Mass Transfer 32(3-
4), pp. 315-322.
Qiao, H. (2006). A systematic computer-aided approach to cooling system optimal design in
plastic injection molding. International Journal of Mechanical Sciences 48(4), pp. 430-439.
Rännar, L.-E. (2008). On Optimization of Injection Molding Cooling. Department of
Engineering Design and Materials. Trondheim, Norwegian University of Science
and Technology. Ph.D.
Rännar, L. E.; Glad, A. & Gustafson, C. G. (2007). Efficient cooling with tool inserts
manufactured by electron beam melting. Rapid Prototyping Journal 13(3), pp. 128-135.
Rao, N. S. & Schumacher, G. (2004). Design formulas for plastics engineers. Munich, Hanser
Verlag.
Rao, N. S.; Schumacher, G.; Schott, N. R. & O'brien, K. T. (2002). Optimization of Cooling
Systems in Injection Molds by an Easily Applicable Analytical Model. Journal of
Reinforced Plastics and Composites 21(5), pp. 451-459.
Sachs, E.; Wylonis, E.; Allen, S.; Cima, M. & Guo, H. (2000). Production of injection molding
tooling with conformal cooling channels using the three dimensional printing
process. Polymer Engineering & Science 40(5), pp. 1232-1247.
Safullah, A. B. M.; Masood, S. H. & Sbarski, I. (2009). Cycle time optimization and part
quality improvement using novel cooling channels in plastic injection moulding,
Society of Plastics Engineers.
Shoemaker, J. (2006). Moldflow design guide: a resource for plastic engineers. Munich,
Hanser Verlag.
Smith, A. G.; Wrobel, L. C.; McCalla, B. A.; Allan, P. S. & Hornsby, P. R. (2008). A
computational model for the cooling phase of injection moulding. Journal of
Materials Processing Technology 195(1-3), pp. 305-313.
Sridhar, L. & Narh, K. A. (2000). Finite size gap effects on the modeling of thermal contact
conductance at polymer-mold wall interface in injection molding. Journal of Applied
Polymer Science 75(14), pp. 1776-1782.
Sun, Y.; Lee, K. & Nee, A. (2002). The application of U-shape milled grooves for cooling of
injection moulds. Proceedings of the Institution of Mechanical Engineers, Part B: Journal
of Engineering Manufacture 216(12), pp. 1561-1573.
Sun, Y. F.; Lee, K. S. & Nee, A. Y. C. (2004). Design and FEM analysis of the milled groove
insert method for cooling of plastic injection moulds. The International Journal of
Advanced Manufacturing Technology 24(9), pp. 715-726.
Tang, L. Q.; Chassapis, C. & Manoochehri, S. (1997). Optimal cooling system design for multi-
cavity injection molding. Finite Elements in Analysis and Design 26(3), pp. 229-251.
Wang, G. G. & Shan, S. (2007). Review of Metamodeling Techniques in Support of
Engineering Design Optimization. Journal of Mechanical Design 129(4), pp. 370-380.

Wang, T.-H. & Young, W.-B. (2005). Study on residual stresses of thin-walled injection
molding. European Polymer Journal 41(10), pp. 2511-2517.
Xu, X.; Sachs, E. & Allen, S. (2001). The design of conformal cooling channels in injection
molding tooling. Polymer Engineering & Science 41(7), pp. 1265-1279.
Yu, C. J.; Sunderland, J. E. & Poli, C. (1990). Thermal contact resistance in injection molding.
Polymer Engineering & Science 30(24), pp. 1599-1606.
Zhou, H. & Li, D. (2005). Mold cooling simulation of the pressing process in TV panel
production. Simulation Modelling Practice and Theory 13(3), pp. 273-285.
Zhou, H.; Zhang, Y.; Wen, J. & Li, D. (2009). An acceleration method for the BEM-based
cooling simulation of injection molding. Engineering Analysis with Boundary
Elements 33(8-9), pp. 1022-1030.
3
Biologically Inspired Techniques
for Autonomous Shop Floor Control
Hong-Seok Park, Ngoc-Hien Tran and Jin-Woo Park
University of Ulsan
South Korea
1. Introduction
Currently, conventional manufacturing systems such as Flexible Manufacturing Systems
(FMSs) are unable to adapt to the complexity and dynamics of the manufacturing
environment. These systems execute automatic operations according to pre-instructed
programs and must be stopped for re-programming and re-planning whenever the
manufacturing environment changes, which reduces the flexibility of the systems and
increases downtime. Self-adaptation to disturbances is therefore a crucial issue in the
development of intelligent manufacturing systems; it keeps the manufacturing system
running instead of stopping it completely. Many methods for managing changes and
disturbances within manufacturing systems have been proposed in the literature, such as
rescheduling (Vieira et al., 2003; Wang et al., 2008) and reactive and collaborative approaches
(Monostoni et al., 1998; Leitao & Restivo, 2006). These methods can be classified by two
criteria: reconfiguration and autonomy (Saadat et al., 2008). Reconfiguration rearranges and
restructures the manufacturing resources, which requires a rescheduling method (Vieira et
al., 2003) and a reconfigurable manufacturing system (Park & H.W. Choi, 2008). Dynamic
rescheduling is performed when a disturbance with a long recovery time occurs, such as a
machine breakdown or a malfunction of a robot or transporter; a new schedule is generated
when the current schedule is affected by the disturbance (Vieira et al., 2003; Wang et al.,
2008). Autonomy allows the system to recover autonomously without modifying the
schedule. Reactive and collaborative methods follow this criterion (Monostoni et al., 1998):
the reactive method is the autonomous control of an entity to overcome a disturbance by
itself, while the collaborative method relies on the cooperation of an entity with other
entities in order to adapt to the disturbance. These methods are suitable for disturbances that
do not require rescheduling. Implementing reactive/collaborative methods requires a
distributed control architecture (Park & Lee, 2000); the control architecture thus shifts from
the centralized control of non-intelligent entities in the hierarchical structures of FMSs
towards the decentralized control of intelligent entities in distributed structures.
The new trend in manufacturing system development is to apply autonomous behaviors
inspired by biology to manufacturing systems. Existing research can be classified into two
groups: evolutionary-algorithm-based systems and manufacturing control systems. In the
first group, evolutionary algorithms inspired by biology, such as genetic

algorithms, ant colony optimization, and particle swarm intelligence, are applied to
Computer Aided Process Planning (CAPP) applications (Shan et al., 2009). In the second
group, many novel paradigms known as intelligent manufacturing systems have been
proposed in the literature; the Biological, Holonic, and Cognitive manufacturing systems are
the most remarkable concepts.
In the Holonic Manufacturing System (HMS), the ADACOR holonic manufacturing control
architecture was proposed (Leitao, 2008). In this architecture, the manufacturing control
architecture is divided into holons (Christo & Cardeira, 2007) such as the product, task,
operational, and supervisor holons (Leitao & Restivo, 2006). Operational holons represent
the physical resources available on the shop floor. These holons adapt to unexpected
disturbances such as machine breakdown, tool wear, and so on by themselves or by
interacting with other operational holons through a supervisor holon. This architecture still
contains the weakness of traditional centralized and sequential manufacturing systems,
because the use of a supervisor holon reduces the flexibility of the system in responding to
disturbances. This weakness is overcome by a decentralized control architecture in which
agent technology is applied to implement the logical part of the operational holons, so that
these holons can interact directly with each other to overcome disturbances (D.H. Kim et al.,
2009a).
In the Biological Manufacturing System (BMS), machine tools, transporters, robots, and so
on should be seen as biological organisms, which are capable of adapting themselves to
environmental changes (Ueda, 2007). In order to realize BMS, agent technology was
proposed for carrying out the intelligent behaviors of the system such as the self-
organization, evolution and learning (Ueda et al., 2006). The reinforcement learning method
was applied for generating the appropriate rules that determine the intelligent behaviors of
machines.
In the Cognitive Manufacturing System, each machine and its process are equipped with
cognitive capabilities so that the factory environment can react flexibly and autonomously to
changes, in a way similar to human behavior (Zaeh et al., 2009; Nobre et al., 2008). A
cognitive architecture for manufacturing systems introduced to reach this goal is the
Beliefs-Desires-Intentions (BDI) architecture (Zhao & Son, 2008). It is based on a human
decision-making model from cognitive science and comprises knowledge models, methods
for perception and control, methods for planning, and a cognitive perception-action loop
(Zaeh et al., 2009; Zhao & Son, 2008).
Most current research has focused on rescheduling methods for adapting to disturbances
within the manufacturing system, while only a few studies have concentrated on
reactive/collaborative methods applying agent or cognitive technologies. Moreover, agent
and cognitive technologies have so far been applied separately to cope with disturbances;
integrating these technologies brings greater efficiency to applications. BDI agents and other
cognitive agent architectures, in which agents and cognition are integrated, have been
developed. However, these architectures should be adapted to specific applications in the
manufacturing control field, particularly to the adaptability of manufacturing systems to
unexpected disturbances.
This chapter proposes an Autonomous Shop Floor Control system (ASFrC) to adapt to
internal disturbances happening on the shop floor. In the ASFrC, the resources on the shop

floor, such as machine tools and robots, are considered autonomous entities. Each entity
overcomes a disturbance by itself or negotiates with the others. The combination of agent
and cognitive technologies for building the autonomous control entities is proposed,
whereby the shop floor overcomes disturbances through agent cooperation without aid from
upper levels such as the Enterprise Resource Planning (ERP) and Manufacturing Execution
System (MES). To increase the autonomous operation scope of the agents, cognitive agents
are proposed; consequently, the resources on the shop floor are controlled by corresponding
cognitive agents. The ASFrC is designed with the following characteristics for adapting to
disturbances:
- Allowing the control system to take action when a disturbance happens and to continue
operating instead of stopping the manufacturing system completely.
- Equipping the entities in the manufacturing system with decision-making and
self-controlling abilities.
The aim of this research is to show how the ASFrC adapts to internal disturbances (such as
tool wear, machine breakdown, and malfunction of a robot or transporter) within a short
recovery time, using a non-negotiation or negotiation recovery plan. The functionality of the
proposed system was proven on the ASFrC testbed, in which an ant colony inspired solution
for negotiation among entities using pheromone values enables the system to overcome the
disturbance in an optimal way.
2. Core technologies
2.1 Cognitive agent
The cognitive agent is a computer program which uses the beliefs-desires-intentions (BDI)
architecture to arm an agent with cognitive capabilities (Zhao & Son, 2008). Beliefs are the
information of the current states of an agent’s environment. Desires are all the possible
states of tasks that the agent could carry out. Intentions are the states of the tasks that the
agent has decided to work towards. As a result, the agent performs cognitive activities that
emulate human cognitive behavior. These activities form a loop of three steps: perception,
reasoning, and execution.
The cognitive agent inherits all the characteristics of the traditional agent, including
cooperation, reactivity, and pro-activeness (Toenshoff et al., 2002). Cooperation among
agents serves the global goal of the system. Reactivity is the ability of agents to respond to
changes in the environment, based on the relation between perception and action.
Pro-activeness is the ability of agents to exhibit goal-directed behavior. The feature that
distinguishes the cognitive agent from the traditional agent is intelligence, expressed
through an improved pro-activeness characteristic. Intelligence is the ability of the agent to
use its knowledge (intentions) and reasoning mechanisms to make a suitable decision with
respect to environmental changes.
The architecture of a cognitive agent is shown in Fig. 1. It consists of five modules:
perception, decision making, knowledge, control, and communication. The perception
module is responsible for data acquisition from the environment. The decision-making
module is in charge of making decisions autonomously. The control module processes the
plan into tasks and executes them in the environment. The interactions between the
cognitive agents are carried out via the communication module. The knowledge base
module contains the intentions, plans, and behavior mechanisms of the agent. A minimal
structural sketch is given after Fig. 1.

Fig. 1. Architecture of a cognitive agent
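Purely as an illustration of this five-module decomposition (not the chapter's C#/.NET
implementation), the following Python sketch shows one possible structure; all field and
method names are assumptions.

from dataclasses import dataclass, field

@dataclass
class CognitiveAgent:
    """Minimal sketch of the five modules: perception, decision making,
    knowledge (beliefs/desires/intentions), control and communication."""
    beliefs: dict = field(default_factory=dict)      # current state of the environment
    desires: list = field(default_factory=list)      # (task, precondition) pairs
    intentions: list = field(default_factory=list)   # tasks the agent has committed to
    inbox: list = field(default_factory=list)        # messages from other agents

    def perceive(self, sensor_data):
        # Perception module: filter raw data into beliefs
        self.beliefs.update(sensor_data)

    def decide(self):
        # Decision-making module: commit to the first desire whose
        # precondition is satisfied by the current beliefs
        for task, precondition in self.desires:
            if all(self.beliefs.get(k) == v for k, v in precondition.items()):
                self.intentions.append(task)
                return task
        return None

    def execute(self, task, actuator):
        # Control module: turn the chosen plan into a command for the machine
        actuator(task)

    def communicate(self, other, message):
        # Communication module: simple message passing between agents
        other.inbox.append(message)

# Example of the perception-reasoning-execution loop
agent = CognitiveAgent(desires=[("change cutting parameters", {"tool_worn": True})])
agent.perceive({"tool_worn": True})
task = agent.decide()
agent.execute(task, actuator=print)   # prints: change cutting parameters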
2.2 Ant colony technique
In the natural environment, collective intelligence emerges from simple interactions between
individuals. A concept found in insect colonies, namely swarm intelligence, exhibits this
collective intelligence: swarm intelligence arises from simple entities that interact locally
with each other and with their environment (Garg et al., 2009). Ant colonies show this
collective intelligence by finding the shortest route between a food source and their nest
through simple interactions in which ants use chemical substances called pheromones, as
shown in Fig. 2. In order to adapt to the dynamic evolution of the environment, a swarm of
ants needs the ability to self-organize. Self-organization is carried out by re-organizing the
swarm's structure through a modification of the relationships among entities, without
external intervention. Transferring this principle to a manufacturing system considered as a
community of autonomous and cooperative entities, the system adapts to changes by locally
matching machine capabilities with product requirements. Each machine has a pheromone
value for overcoming a specific disturbance type, and the machine with the highest
pheromone value is chosen for disturbance handling (Peeters et al., 2001; Leitao, 2008).


Fig. 2. The shortest route chosen by ants
2.3 ICT Infrastructure
Information and Communication Technology (ICT) infrastructure contributes significantly
to the success of implementing the ASFrC. The MES provides an interface between an ERP
system and the shop-floor controllers by executing functionalities such as scheduling, order
release, quality control, and data acquisition (B.K. Choi & B.H. Kim, 2002). Radio Frequency
Identification (RFID) technology and related sensors have great potential to change the way
of control, production automation, and special data collection (Günther et al., 2008). They
also contribute to cutting down labor costs, reducing breakdown time, and improving
production effectiveness. A Ubiquitous Sensor Network (USN) is a tool for collecting
production data under real-time constraints. According to (Serrano & Fischer, 2007; M. Kim
et al., 2007), the main components of a USN are the sensor network, the USN access network,
the network infrastructure, the USN middleware, and the USN application platform.
In the machining system controlled by the cognitive system, RFID technology plays the role
of tracking core components through complicated processes in real time, because this
technology makes it possible to read and write data to an RFID tag on the moving parts. The
USN plays the role of monitoring the machines' operating status and actual production and
of improving product quality (D.H. Kim et al., 2009b). The vision of “feeling” machine
components is achieved by attaching a multi-sensor system to these components (Denkena,
2008). Intelligent components are the result of applying sensor technologies and ICT
progress, which ensure the precise operation and flexibility of the manufacturing system.
3. The manufacturing system with biologically inspired techniques
3.1 An autonomous shop floor control system
The cognitive agent based autonomous machining shop for adapting to disturbances is
shown in Fig. 3. Resources on the shop floor, such as machines and the transporter, are
controlled by corresponding agents. The workpiece agent manages the workpiece through
the information stored in its RFID tag. It cooperates with the transporter agent to transfer the
workpiece to the

right machine. Under normal conditions, the MES controls the shop floor. Otherwise, an
agent overcomes the disturbance by itself or cooperates with other agents through wireless
communication. In case the agents cannot solve the disturbance, a message is sent to the MES
for rescheduling. If it takes a long time to fix the problem, the MES manages the whole
system through communication with the ERP system. These concepts are applied to solve
internal disturbances with a short recovery time.

Fig. 3. Concept of an autonomous machining shop based on agents
Fig. 3 also shows the machining system for manufacturing the transmission case of an
automotive company in Korea. In this machining system, the mass production method is
used, with an output requirement of 300,000 parts per year. This production method
requires a short cycle time of about one minute per part. Normally, the transmission case
could be machined on a few multi-functional machining centers; however, this would take a
long machining time. Because of the short cycle time, the operations for machining the
transmission case are distributed by the MES to 17 machines on the shop floor, where each
machine carries out at most one or two operations. To increase the flexibility of the
machining system, machining centers are used.

A total of 685 disturbances occurred in the machining system over three years. From the
analysis of these disturbances, they can be classified into three groups: the rescheduling,
non-negotiation, and negotiation groups. In terms of the measures to be taken, the
rescheduling group means that the assigned machining task should be rescheduled because
of a long recovery time, e.g. more than one hour, before the whole system would have to be
stopped. This time limit was estimated from the effect of a disturbance on the planned
schedule of the considered machining shop: when it is very hard to keep the planned
schedule within the allowed tolerance because of the disturbance, rescheduling should be
done by the MES. In our research, we do not consider the rescheduling problem; we
concentrate on how to remove disturbances that belong to the non-negotiation or negotiation
group. The non-negotiation group consists of disturbances whose recovery time is less than
30 minutes and whose recovery methods are known from previous experience. The time
limit used to separate the non-negotiation and negotiation groups is based on the statistics of
disturbances collected while machining transmission cases: disturbances requiring less than
30 minutes of recovery are mostly fixed by an operator using his own knowledge, so these
disturbances were classified into the non-negotiation group. The remaining disturbances are
grouped into the negotiation type; they can be solved, using the knowledge collected when
operating the conventional machining shop, through the agent negotiation process within
the machining shop. The disturbance analysis shows that the 685 disturbances (100%)
collected in the machining shop are distributed as follows: non-negotiation 11.4%,
negotiation 40.9%, and rescheduling 47.7%. The mechanisms for adapting to disturbances of
the non-negotiation and negotiation types are presented in Sections 3.2 and 3.3.
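The following Python fragment sketches this grouping rule under stated assumptions: the
thresholds (30 minutes and roughly one hour) are taken from the text, while the function and
field names are illustrative only.

def classify_disturbance(recovery_minutes, known_recovery_method):
    """Approximate grouping rule from the text: a short, well-understood
    disturbance is handled without negotiation; a disturbance recoverable
    within roughly an hour is handled by agent negotiation; anything longer
    is passed to the MES for rescheduling."""
    if recovery_minutes < 30 and known_recovery_method:
        return "non-negotiation"
    if recovery_minutes <= 60:
        return "negotiation"
    return "rescheduling"

print(classify_disturbance(10, True))    # e.g. tool wear -> 'non-negotiation'
print(classify_disturbance(45, False))   # e.g. machine breakdown -> 'negotiation'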
3.2 Cognitive agent based disturbance handling
Fig. 4 shows the mechanism of the cognitive agent for overcoming a disturbance happening
at a machine tool. At the beginning, both the controllers and the cognitive agent receive the
task from the MES (denoted by 1). The cognitive processor identifies the goals and
transforms them into desires. The perception module collects and filters data to obtain the
information corresponding to the responsibilities of the agent. Then, the feature extraction
unit categorizes the data into high and low frequencies. To diagnose the states of the
machine from these data types, pattern recognition algorithms such as fuzzy logic or neural
networks are used. The cognitive agent then reasons over the recognized features, desires,
and intentions to make a decision. If the data obtained from the output of the perception
module (denoted by 2) match the desired goals, a message is sent to the MES to report the
normal state of the machine (denoted by 3), and the shop floor
continues running. Otherwise, the cognitive agent reasons about the disturbance cases. If the
disturbance takes a long time to recover from, or cannot be recovered, the agent sends a
message to the MES to request rescheduling (denoted by 3). Otherwise, the decision-making
module generates a new plan based on the data, desires, and intentions, using a neural
network or a rule base (denoted by 4). This plan is immediately carried out by the disturbed
machine if the disturbance is easy to recover from and its countermeasure is already known
(denoted by 5). For example, tool wear is recovered by changing the cutting parameters
without affecting the quality of the product; in this case, the plan is processed into tasks, and
the task commands are sent to the controllers of the machine. In case the disturbance is
difficult to recover from, for example a machine breakdown, the assigned task must be
executed by

another machine. The cognitive agent then starts a negotiation with the other agents. The
pheromone-based negotiation mechanism is presented in Section 3.3. The job of the failed
machine is taken over by another machine to keep the manufacturing system operating
(denoted by 6). The agent selected through the negotiation sends a message to the workpiece
agent and the transporter agent (denoted by 7) to inform them that it will perform the task of
the failed machine. The shop floor returns to the previous plan after the failed machine is
fixed. In case the negotiation between agents does not yield a solution, the request for
rescheduling is sent to the MES (denoted by 8). A compact sketch of this decision flow is
given after Fig. 4.

Fig. 4. Mechanism of cognitive agents for adapting to disturbances
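As a compact restatement of the decision flow in Fig. 4, the Python fragment below returns
the action the agent would take; the field names and the one-hour threshold used for the
rescheduling branch are assumptions for illustration.

def decide_action(state):
    """Sketch of the decision flow of Fig. 4; 'state' is a dictionary of
    features produced by the perception module (field names are illustrative)."""
    if state["normal"]:
        return "report normal state to MES"                                 # denoted by 3
    if not state["recoverable"] or state["recovery_minutes"] > 60:
        return "request rescheduling from MES"                              # denoted by 3
    if state["known_local_fix"]:
        return "generate a plan and execute it on the disturbed machine"    # 4 and 5
    return "negotiate with the other machine agents (pheromone-based)"      # 6 and 7

print(decide_action({"normal": False, "recoverable": True,
                     "recovery_minutes": 20, "known_local_fix": True}))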
3.3 Ant-like pheromone based agent negotiation mechanism
When a disturbance belonging to the negotiation group happens to a machine while it is
carrying out the operation dispatched by the MES, an alternative machine is needed

to carry out that operation in order to keep the given schedule within the tolerance range.
Therefore, we consider only the operation disturbed at the moment the disturbance occurs,
not all operations for machining the transmission case. Because machining centers are used,
several machines in the machining system can carry out this operation, and the most
appropriate machine must be chosen among the alternatives.
To select the most appropriate machine, machine agent #1, which manages the failed
machine, sends the task information to the remaining machine agents. The task information
consists of the machining method, the cutting conditions, and the tool type. The machine
agents compare this information with their machine's abilities through their databases,
which store the potential of each machine for carrying out a task, such as the machine
specifications and its capability to machine the workpiece according to its functional
requirements. Each machine agent is considered as an ant, and the pheromone is used as the
communication mediator in agent negotiation. The function of the pheromone is to indicate,
approximately, the ability of a machine to carry out the task. In the negotiation, the
pheromone value is used as the criterion for choosing the optimal machine among the
alternatives. If a machine agent meets the requirements of the task, it generates a pheromone
value; otherwise, its pheromone value equals zero.
3.3.1 Nomenclature
Q       cutting volume (mm³)
T_t     tool life (min)
T_s     tool setup time (min)
MRR     metal removal rate of the process (mm³/min)
v_c     cutting speed (mm/min)
f       feed rate (mm/rev)
a_p     depth of cut (mm)
k       hourly operation cost of the machine tool ($/hour)
R_a     surface roughness of the machined part (µm)
α_IT    coefficient representing the accuracy and reliability of the machine tool, which
        affect the dimensional tolerance of the machined part
β       coefficient representing the hardness and thermal stability of the cutting tool and
        workpiece, which affect the form tolerance and surface integrity of the machined part
3.3.2 Pheromone value
Based on the ant colony algorithm (Xiang & Lee, 2008), the formulation for calculating the
pheromone value was designed in consideration of the processing time, machining cost, and
machining quality. It is shown as follows:

p_{MA_i} = q_t \left( \frac{1}{M_{PT} + M_c + \frac{1}{M_q}} \right)    (1)
where q_t denotes whether machine MA_i can execute the task requested by the failed
machine: if task t can be carried out on machine MA_i, q_t = 1; otherwise q_t = 0. M_PT,
M_c, and M_q represent the processing time, machining cost, and machining quality of task t
on machine MA_i, respectively. The highest pheromone value corresponds to the lowest
processing time, the lowest machining cost, and the highest machining quality. After
calculating M_PT, M_c, and M_q using equations (2), (4) and (5), respectively, these values
are treated as non-dimensional in Eq. (1) to compute the pheromone value, which serves as a
rule of thumb for assessing the machining ability of a machine in terms of processing time,
machining cost, and quality.
The same task t may have different processing times on different machines because of
different cutting parameters, which are determined by the cutting conditions, the machine
capability, and the tool type. The processing time of task t on machine MA_i is calculated
using Eq. (2). The metal removal rate (MRR) of the process depends on the cutting
parameters and the operation type; its value for a turning operation, for example, is given by
Eq. (3).

M_{PT} = \frac{Q}{MRR}\left(1 + \frac{T_s}{T_t}\right)    (2)

MRR = v_c \cdot f \cdot a_p    (3)
The machining cost factor is calculated in consideration of the hourly operation cost of the
machine tool and the machining time as shown in Eq. (4).

M_C = \frac{k \cdot M_{PT}}{60}    (4)
Regarding machining quality, the functional requirements of the workpiece, such as
dimension, tolerance, surface roughness, and microstructural change, must be fulfilled. The
machining quality factor reflects the relationship between the machine specifications, the
cutting tool, and the material properties (Toenshoff et al., 2000). It was evaluated empirically,
taking into account the allowed cutting-condition limits, the machining ability of the
machine in terms of accuracy and reliability, and the hardness and thermal stability of the
cutting tool and workpiece. The formula for quantifying the machining quality is given as
follows:

M_q = \frac{1}{R_a \cdot \alpha_{IT} \cdot \beta}    (5)
The surface roughness of the machined part is calculated using the theoretical formula (Eq.
(6)) (Cus & Zuperl, 2006).

R_a = p \cdot v_c^{x_1} \cdot f^{x_2} \cdot a_p^{x_3}    (6)
where x_1, x_2, x_3, and p are constants for the particular tool-workpiece combination, given
in machining handbooks. The values of v_c, f, and a_p must lie within the allowed
cutting-condition limits of the machine tool.
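To make the use of Eqs. (1)-(5) concrete, the short Python sketch below evaluates the
pheromone value of one machine for one task. The numerical inputs are purely illustrative,
and the non-dimensionalization of the three factors mentioned above is omitted for
simplicity.

def pheromone_value(q_t, Q, MRR, T_s, T_t, k, R_a, alpha_IT, beta):
    """Combine the processing-time, cost and quality factors of Eqs. (2), (4)
    and (5) into the pheromone value of Eq. (1) for a single machine agent."""
    if q_t == 0:                                # machine cannot execute the task
        return 0.0
    M_PT = (Q / MRR) * (1.0 + T_s / T_t)        # Eq. (2): processing time (min)
    M_C = k * M_PT / 60.0                       # Eq. (4): machining cost ($)
    M_q = 1.0 / (R_a * alpha_IT * beta)         # Eq. (5): machining quality
    return q_t / (M_PT + M_C + 1.0 / M_q)       # Eq. (1)

# Illustrative numbers only: MRR from Eq. (3) with v_c = 150, f = 0.2, a_p = 2.0
mrr = 150.0 * 0.2 * 2.0
print(pheromone_value(q_t=1, Q=5.0e4, MRR=mrr, T_s=2.0, T_t=30.0,
                      k=40.0, R_a=1.6, alpha_IT=1.0, beta=0.5))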
In terms of accuracy and reliability, machines can be classified into precision machines and
high-precision machines. The dimensional tolerance of the machined part is

in the range IT6÷IT7 for a precision machine and IT3÷IT5 for a high-precision machine,
respectively. The α_IT coefficient was determined as follows:

Machine                                 Precision   High precision
International Tolerance (IT) quality    IT6÷IT7     IT3÷IT5
α_IT                                    1           0.5
Table 1. The value of α_IT.
The objective of any machining operation is to maximize the MRR while fulfilling all the
required quality conditions. In terms of MRR, machining methods can be classified into
conventional machining and high speed machining. The MRR of high speed machining is
5÷10 times higher than that of conventional machining. However, a higher MRR results in
greater thermal damage to the workpiece and cutting tool, which affects the machining
quality of the machined part. The differences in dimensional accuracy of the machined part
are caused by the thermal expansion of the tool and workpiece; in particular, under the same
machining conditions, the thermal expansion of the tool tip and the workpiece can reach up
to 10 and 15 μm, respectively (Zhou et al., 2004). Experimental results reported in the
literature show that the use of cooling lubricants increases workpiece quality and prevents
form errors due to thermal effects (Toenshoff et al., 2000). Assuming that thermal effects
contribute more than 50% of the overall error of the machined parts, and that the MRR of
conventional machining calculated with the optimal cutting parameters (v_c, f, and a_p) is
known, the MRR of high speed machining lies in the range (5÷10)·MRR. Based on the
machining method (conventional or high speed machining) and the cooling method (coolant
or dry machining), the value of the β coefficient is given in Table 2 and Table 3.

Method β
Conventional machining and using coolant 0.5
Conventional machining and dry machining 0.6
Table 2. The value of β in the case of the conventional machining.

Method                               MRR        β
High speed and using coolant         –          0.5
High speed and dry machining         5·MRR      0.6
                                     6·MRR      0.7
                                     7·MRR      0.8
                                     8·MRR      0.9
                                     9·MRR      1.0
                                     10·MRR     1.1

Table 3. The value of β in the case of the high speed machining.
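The quantification of the machining-quality factor can be pictured with the short sketch below, which combines Eq. (6) with the α_IT values of Table 1 and the β values of Tables 2–3, and then applies Eq. (5). The roughness constants p and x1–x3 are placeholders that would normally be taken from machining handbooks; all numeric values are illustrative.

```csharp
// Sketch of the machining-quality factor, Eqs. (5)-(6) with Tables 1-3.
// The exponents x1-x3 and constant p are illustrative placeholders only.
using System;

class QualityFactor
{
    // Eq. (6): theoretical surface roughness Ra = p * vc^x1 * f^x2 * ap^x3
    static double Roughness(double p, double vc, double f, double ap,
                            double x1, double x2, double x3)
        => p * Math.Pow(vc, x1) * Math.Pow(f, x2) * Math.Pow(ap, x3);

    // Table 1: alpha_IT depends on the accuracy class of the machine.
    static double AlphaIt(bool highPrecision) => highPrecision ? 0.5 : 1.0;

    // Tables 2-3: beta depends on machining method, cooling and the MRR multiple.
    static double Beta(bool highSpeed, bool coolant, int mrrMultiple = 5)
    {
        if (!highSpeed) return coolant ? 0.5 : 0.6;   // Table 2
        if (coolant) return 0.5;                      // Table 3, with coolant
        return 0.6 + 0.1 * (mrrMultiple - 5);         // Table 3, dry: 5..10 x MRR -> 0.6..1.1
    }

    // Eq. (5): quality factor Mq = 1 / (Ra * alpha_IT * beta)
    static double Mq(double ra, double alphaIt, double beta) => 1.0 / (ra * alphaIt * beta);

    static void Main()
    {
        double ra = Roughness(p: 1.2, vc: 150, f: 0.2, ap: 1.5,
                              x1: -0.3, x2: 0.8, x3: 0.1);   // assumed constants
        double mq = Mq(ra, AlphaIt(highPrecision: true),
                       Beta(highSpeed: true, coolant: false, mrrMultiple: 7));
        Console.WriteLine($"Ra = {ra:F3}, Mq = {mq:F3}");
    }
}
```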


3.3.3 Pheromone based agent negotiation
According to the generated pheromone values of the task t at the different machines, the
machine agent #1 uses the algorithm given as follows for making a decision.
Case 1: All pheromone values are zero.
send (message) /*requesting the MES for rescheduling*/
Case 2: Only one machine agent (i) has a pheromone value that is not zero.
send (message) /* the machine agent (i) is selected*/
Case 3: There are two or more pheromone values that are not zero.
If the machine agent (j) has the highest pheromone value
send (message) /*the machine agent (j) is selected*/
The algorithm for the remaining machine agents in the negotiation process is given as
follows:
analyse (message) /*matching the content of task information with their ability*/
generate (pheromone) /*generating the pheromone value of the assigned task*/
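A compact sketch of this decision logic, as seen by machine agent #1, is given below; the dictionary of pheromone values and the message strings are simplifications for illustration.

```csharp
// Sketch of the pheromone-based decision rule of Section 3.3.3.
// pheromone[i] = pheromone value generated by machine agent i for task t.
using System.Collections.Generic;
using System.Linq;

class NegotiationDecision
{
    static string Decide(IReadOnlyDictionary<int, double> pheromone)
    {
        var nonZero = pheromone.Where(kv => kv.Value > 0).ToList();

        if (nonZero.Count == 0)          // Case 1: all pheromone values are zero
            return "send(message): request the MES for rescheduling";

        if (nonZero.Count == 1)          // Case 2: a single non-zero pheromone value
            return $"send(message): machine agent {nonZero[0].Key} is selected";

        var best = nonZero.OrderByDescending(kv => kv.Value).First();
        return $"send(message): machine agent {best.Key} is selected";  // Case 3: highest value wins
    }

    static void Main()
    {
        var pheromone = new Dictionary<int, double> { [2] = 0.8, [3] = 0.0, [4] = 1.3 };
        System.Console.WriteLine(Decide(pheromone));   // machine agent 4 has the highest value
    }
}
```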
4. Implementation
The cognitive agents were developed using the .NET platform and C#. The system
architecture of the ant colony inspired machining shop is shown in Fig. 5. It points out the
three kernel issues to implement the cognitive agents, which are the interaction protocol,
agent behaviors, and database (DB) as well as the information flow among components in
the system for carrying out the functionalities. The agent interacts with the MES and the
other agents via extensible markup language (XML) messages. The OLE for Process Control (OPC) protocol is used for communication between the agents and the programmable logic controllers (PLCs), which connect to the physical devices on the machining shop floor, such as the RFID reader, the disturbance input, and the alarm device. The databases, including the processing information, the agent addresses for communication in the network, the pheromone values of the tasks related to the machine agents, and the disturbance DB, were built using SQL Server 2005. The agent uses the “search” method to
diagnose and classify the disturbance. According to the disturbance type, the agent reasons
to make a decision using the “adjust” or “collaboration” methods. In collaboration, the
agents generate the pheromone value of the assigned task using the “calculate” method.
Then, the “negotiate” process is carried out among agents to find the agent with the highest
pheromone value for carrying out the task.
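The behaviours named above (“search”, “adjust”, “collaboration”/“negotiate”, “calculate”) can be pictured as the skeleton of a machine agent class; the type names and method bodies below are assumptions for illustration only, since the actual system exchanges XML messages and talks to the PLCs through OPC.

```csharp
// Skeleton of a machine agent with the behaviours named in the text.
// Types and bodies are illustrative assumptions, not the original implementation.
enum DisturbanceType { NonNegotiation, Negotiation }

record Disturbance(string Code);
record MachiningTask(int Id, string Operation);

class MachineAgent
{
    // "search": diagnose and classify the disturbance against the disturbance DB
    public DisturbanceType Search(Disturbance d) =>
        d.Code == "TOOL_WEAR" ? DisturbanceType.NonNegotiation : DisturbanceType.Negotiation;

    // "adjust": derive new cutting parameters (a neural network in the described system)
    public void Adjust(Disturbance d) { /* query the NN, write parameters to the PLC via OPC */ }

    // "calculate": generate the pheromone value of the assigned task
    public double Calculate(MachiningTask t) => 1.0; // placeholder for f(time, cost, quality)

    // "collaboration"/"negotiate": broadcast a help message and run the negotiation
    public void Collaborate(MachiningTask t) { /* send XML message, negotiate over the network */ }

    public void OnDisturbance(Disturbance d, MachiningTask current)
    {
        if (Search(d) == DisturbanceType.NonNegotiation) Adjust(d);
        else Collaborate(current);
    }
}

class AgentDemo
{
    static void Main() =>
        new MachineAgent().OnDisturbance(new Disturbance("TOOL_BROKEN"), new MachiningTask(1, "turning"));
}
```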
4.1 Reaction of the system in the case of non-negotiation
Fig. 6 illustrates the non-negotiation process of the ASFrC. At the beginning, the MES system
dispatches the jobs to the corresponding machines based on the machine agent ID. The
normal status of a machine is shown by the green light. The disturbance occurs at machine #1, which is indicated by switching the disturbance generator “ON”; the red light turns “ON” and an alarm is shown on the display screen. The machine agent #1 receives the disturbance signal through the PLC #1 and diagnoses the disturbance type based on its disturbance database. If the disturbance belongs to the non-negotiation type, for example tool wear, the agent adjusts the cutting parameters, which are determined by using the neural network with inputs such as the existing cutting parameters and conditions and the tool information.

Fig. 5. System architecture of the ant colony inspired machining shop

In case the new parameters are generated, the machine runs the operation
continuously and the green light is “ON”. Otherwise, the disturbance is considered as the
negotiation type, and the agent activates the negotiation with other agents.
The screen shot of the developed system in the case of tool wear is shown in Fig. 7. The
machine agent #1 gets the disturbance signal from the PLC #1 through the KEPServerEx™ software (denoted by 1). It analyses the disturbance type based on its disturbance database
(denoted by 2). If the disturbance belongs to the non-negotiation type such as the tool wear
(denoted by 3), the agent adjusts the cutting parameters determined by using the neural
network. After the new parameters are generated (denoted by 4), the machine agent sends them to the controller of the machine.
Fig. 6. Non-negotiation process of the ASFrC

Fig. 7. The screen shot of the system in the case of tool wear
4.2 Reaction of the system in the case of negotiation
Assume that the disturbance happens on machine #1 and the agent diagnoses that it belongs to the negotiation group, for example a broken tool. The negotiation of the machine agents is then activated immediately, as shown in Fig. 8. The machine agent #1 sends a message for
help to the remaining machine agents. This message consists of the machining information
and addresses of the receiving machine agents. The machine agents negotiate to find out
another route. This negotiation is based on the evaluation of the pheromone values of
machine agents, the precedence relationship between the operations, and current status of
the machines. Each machine has a pheromone value for a specific operation and the machine
with the shortest processing time, lowest machining cost and highest machining quality for
a specific operation has the highest pheromone. After negotiating, the machine agent #2 is
chosen for machining the task #1 of the machine #1. The machine agent #2 cooperates with
the transporter and workpiece agents to carry out the accepted job. As a result, the green light at machine #2 is “ON”.
The screen shot of the developed system in the case of tool broken is shown in Fig. 9. The
disturbance belongs to the negotiation type (denoted by 3). The network of server/clients is
established for agent negotiation (denoted by 4). Then, the negotiation of machine agents is

activated using the ant colony based mechanism presented in Section 3.3 (denoted by 5).
After negotiating, the machine agent with the highest pheromone value is chosen for
carrying out the task #1 of the machine #1.

Fig. 8. Negotiation process of the ASFrC

Fig. 9. The screen shot of the system in the case of tool broken

4.3 Experimental results
The functionality of the developed system was proven on the ASFrC testbed shown in Fig.
10. The disturbance generators (turn on/off switches) are used to generate disturbances. The
PLCs, acting as the controllers of the machine tools, get the processing information from the MES and execute the machining jobs. The processing information of the system is displayed on the monitoring screen. The workpiece information is collected by the RFID system. The cognitive agents representing the machines, the workpiece, and the transporter are installed on personal computers (PCs). Through the collaboration of these PCs, the machining process of a workpiece is executed completely. The experimental results show that the developed system successfully overcomes disturbances belonging to both the non-negotiation and negotiation types, thereby increasing manufacturing productivity.


Fig. 10. Experimental setup
5. Conclusion
The Autonomous Shop Floor Control system (ASFrC) with biologically inspired techniques
is a feasible solution for adapting autonomously to disturbances. It meets the requirements
of flexibility, adaptability, and reliability. This research also demonstrated the effectiveness of applying biologically inspired technologies, such as cognitive agents and the ant colony technique, to the manufacturing field. These technologies are essential for future manufacturing systems.

6. Acknowledgements
This research was supported by the Ministry of Knowledge Economy, Korea, under the
Industrial Source Technology Development Programs supervised by the Korea Evaluation
Institute of Industrial Technology.
7. References
Choi, B.K. & Kim, B.H. 2002. MES (manufacturing execution system) architecture for FMS
compatible to ERP (enterprise planning system), International Journal of Computer
Integrated Manufacturing, Vol. 15, pp.274-284, ISSN: 1362-3052.
Christo, C. & Cardeira, C. 2007. Trends in intelligent manufacturing systems, Proceedings of
the IEEE International Symposium on Industrial Electronics, pp.3209-3214.
Cus, F. & Zuperl, U. 2006. Approach to Optimization of Cutting Conditions by Using
Artificial Neural Networks, Journal of Materials Processing Technology, Vol. 173,
pp.281-290, ISSN: 0924-0136.
Denkena, B., Mohring, H.C. & Litwinski, K.M. 2008. Design of dynamic multi sensor
systems, Production Engineering, Vol. 2, pp.327-331, ISSN: 0944-6524.
Garg, A., Gill, P., Rathi, P., Amardeep & Garg, K.K. 2009. An insight into swarm intelligence,
International Journal of Recent Trends in Engineering, Vol. 2, pp.42-44, ISSN: 1797-9617.
Günther, O.P., Kletti, W. & Kubach, U. 2008. RFID in Manufacturing, Springer, ISBN:
3540764534
Kim, M., Lee, Y.J. & Ryou, J. 2007. How to share heterogeneous sensor networks in
ubiquitous environment, Proceeding of the International Conference on Wireless
Communications, Networking and Mobile Computing, pp.2799-2802.
Kim, D.H., Song, J.Y. & Cha, S.K. 2009a. Development and evaluation of intelligent machine
tools based on knowledge evolution in M2M environment, Journal of Mechanical
Science and Technology, Vol. 23, pp.2807-2813, ISSN: 1976-3824.
Kim, D.H., Song, J.Y., Lee, S.H. & Cha, S.K. 2009b. Development and evaluation of Zigbee
node module for USN, International Journal of Precision Engineering and
Manufacturing, Vol. 10, pp.53-57, ISSN: 2005-4602.
Leitao, P. & Restivo, F. 2006. ADACOR: A holonic architecture for agile and adaptive
manufacturing control, Computers in Industry, Vol. 57, pp.121-130, ISSN: 0166-3615.
Leitao, P. 2008. A bio-inspired solution for manufacturing control systems, In: A. Azevedo
(Eds.), IFIP International Federation for Information Processing, Innovation in
Manufacturing Networks, pp.303–314.
Monostori, L., Szelke, E. & Kadar, B. 1998. Management of changes and disturbances in
manufacturing systems, Annual Reviews in Control, Vol. 22, pp.85-97, ISSN: 1367-5788.
Nobre, F.S., Tobias, A.M. & Walker, D.S. 2008. The pursuit of cognition in manufacturing
organizations, Journal of Manufacturing Systems, Vol. 27, pp.145-157, ISSN: 0278-6125.
Park, H.S. & Choi, H.W. 2008. Development of a modular structure–based changeable
manufacturing system with high adaptability, International Journal of Precision
Engineering and Manufacturing, Vol. 9, pp.7-12, ISSN: 2005-4602.
Park, H.S. & Lee, W.G. 2000. Agent-based shop control system under holonic manufacturing
concept, Proceeding of the 4th Korea-Russia International Symposium, Vol. 3, pp.116-121.

Peeters, P., Brussel, H.V., Valckenaers, P., Wyns, J., Bongaerts, L., Kollingbaum, M. &
Heikkila, T. 2001. Pheromone based emergent shop floor control system for flexible
flow shops, Artificial Intelligence in Engineering, Vol. 15, pp.343-352.
Saadat, M., Tan, M.C.L. & Owliya, M. 2008. Changes and disturbances in manufacturing
systems: A comparison of emerging concepts, World Autom Congress Proceedings,
pp.556-560.
Serrano, V. & Fischer, T. 2007. Collaborative innovation in ubiquitous systems, J Intell
Manuf, Vol. 18, pp.599-615, ISSN: 1572-8145.
Shan, H., Zhou, S. & Sun, Z. 2009. Research on assembly sequence planning based on genetic
simulated annealing and colony optimization algorithm, Assembly Automation, Vol.
29, pp.249-256, ISSN: 0144-5154.
Toenshoff, H.K., Arendt, C. & Ben Amor, R. 2000. Cutting of Hardened Steel, Annals of the
CIRP, Vol. 49, No. 2, pp.547-566.
Toenshoff, H.K., Woelk, P.O., Herzog, O. & Timm, I.J. 2002. Agent-based in-house process
planning and production control for enterprises in supply chains, In: Sullivan, W.G.
et al. (Eds.) Proceedings of the 12th International Conference on Flexible Automation and
Intelligent Manufacturing, pp.329-338.
Ueda, K. 2007. Emergent synthesis approaches to biological manufacturing systems, In: P.F.
Cunha, P.G. Maropoulos (Eds.), Digital Enterprise Technology, pp.25-34, ISBN: 978-0-
387-49863-8.
Ueda, K., Kito, T. & Fujii, N. 2006. Modeling biological manufacturing system with
bounded-rational agents, Annals of the CIRP, Vol. 55, pp.469-472, ISSN: 0007-8506.
Vieira, G.E., Hermann, J.W. & Lin, E. 2003. Rescheduling manufacturing systems: A
framework of strategies, policies, and methods, Journal of Scheduling, Vol. 6, pp.39-62.
Wang, Y.F., Zhang, Y.F., Fuh, J.Y.H., Zhou, Z.D., Lou, P. & Xue, L.G. 2008. An integrated
approach to reactive scheduling subject to machine breakdown, Proceeding of the
IEEE International Conference on Automation and Logistics, pp.542-547.
Xiang, W. & Lee, H.P. 2008. Ant colony intelligence in multi-agent dynamic manufacturing
scheduling, Engineering Applications of Artificial Intelligence, Vol. 21, pp.73-85.
Zaeh, M.F., Beetz, M., Shea, K. et al. 2009. The cognitive factory, In: H.A. EIMaraghy
(Eds.), Changeable and reconfigurable manufacturing systems, pp.355-371, ISBN: 978-
1-84882-066-1.
Zhao, X. & Son, Y. 2008. BDI-based human decision-making model in automated
manufacturing systems, International Journal of Modeling and Simulation, Vol. 28, pp.
347-356, ISSN: 0228-6203.
Zhou, J.M., Anderson, M. & Ståhl, J.E. 2004. Identification of cutting errors in precision
machining hard turning process, Journal of Materials Processing Technology, Vol. 153–
154, pp.746–750, ISSN: 0924-0136.
4
The Micro Injection Moulding Process for
Polymeric Components Manufacturing
R. Surace, G. Trotta, V. Bellantone and I. Fassi
ITIA-CNR, Institute of Industrial Technology and Automation,
National Research Council,
Italy
1. Introduction
In recent years, there has been an increasing demand for small and even micro-scale parts, and this trend towards miniaturization makes micro system technologies increasingly important.
Microfabrication process capabilities should expand to encompass a wider range of materials
and geometric forms, by defining processes and related process chains that can satisfy the
specific functional and technical requirements of new emerging multi-material products, and
ensure the compatibility of materials and processing technologies throughout these
manufacturing chains. Example technologies to be investigated either individually or in
combination are technologies for direct- or rapid manufacturing, energy assisted technologies,
microreplication technologies, qualification and inspection methods, functional
characterisation methods and integration of "easy and fast" on-line control systems.
The processes should demonstrate significantly high production rates, accuracy and
enhanced performance or quality, creating capabilities for mass manufacture of
microcomponents and miniaturised parts incorporating micro- or nanofeatures in different
materials. Processes should also provide high flexibility and seamless integration into new micro- and nanomanufacturing scenarios. Micro- and nano-manufacturing technologies can
provide the basis of the next industrial revolution that could dramatically modify the way in
which businesses are setup, run and marketed.
Micro injection moulding can be defined as one of the key technologies for micro
manufacturing because of its mass production capability and relatively low production cost.
It is the process of transferring the micron or even submicron features of metallic moulds to
a polymeric product. During the process, the material, in form of granules, is transferred
from a hopper into a plasticizing unit so that it becomes molten and soft (Fig. 1a). The material
is then forced, under pressure, inside a mould cavity where it is subjected to holding pressure
for a specific time to compensate for material shrinkage (Fig. 1b). After a sufficient time, the
material freezes into the mould shape, gets ejected and the cycle is repeated.
This technology was first derived from traditional injection moulding in the late eighties, but no appropriate machine technology was available and only modified commercial units of traditional injection moulding machines could be used. Only in the mid-nineties were special new micro injection machines developed specifically

addressing micro moulded parts, and thus research efforts are still needed. Currently,
the injection moulding process offers several advantages in terms of mass
manufacturability, variety of materials and accurate replication of micro-scaled features, and
it is being used commercially for producing some types of devices. A number of limitations,
however, need to be overcome before the wide-scale fabrication of micro components can be
realized by micro injection moulding. In particular, the nature of end-shape processes puts
limitations on the allowed geometrical designs to ensure smooth demouldability. Moreover,
the study and optimization of the process parameters, especially for high aspect ratios
features, are essential for producing parts with acceptable quality. The variables that affect quality can be classified into four categories: mould and component design,
performance of moulding machine, material, and processing conditions [1].

Fig. 1. Example of a micro injection moulding machine (a) and one mould half (b)

This chapter intends to review the state of the art of micro injection moulding for micro
components, to highlight both the potential developments and research gaps of this process.
Tool design principles, plastic materials and process parameters commonly reported in
literature are critically reviewed towards the identification of the most effective processing
conditions, given a specific application. Finally, the injection moulding process of a micro
part (a miniaturized dog bone shaped specimen for tensile tests) is presented and discussed
as a case study.
2. Definition of micro moulded components
Several definitions of micro-component can be found in the literature, relying either on the characteristics of the overall manufactured part or on those of the process. A product manufactured by the micro-injection moulding process can be defined as reported below [2]:
1. the mass of the part is a few milligrams;
2. the part exhibits dimensions with tolerances in the micrometric range;
3. some features are in the order of micrometers.
Nowadays, micro components are widely used and they can be classified also with respect
to their application as reported in Table 1. Some examples are reported in Fig. 2.


Fig. 2. a) Microelectromechanical systems chip (source Wikipedia), b) Neurochip developed by Caltech (source Wikipedia) and c) micro bars test part (courtesy of University of Nottingham)

APPLICATION FIELDS EXAMPLES
Micromechanical parts
• Locking lever for micro mechanical
industry or micro switch;
• Latch for the watch industry;
• Catch wheel for micro switch;
• Operating pin;
• Gear plate for motive power engineering.
Micro gear wheel
• Dented wheel for watch industry;
• Rotor with gear wheel for watch
industry;
• Gear wheel for micro gear;
• Spur wheel in the field of electrical
technology;
• Spiral gear in the field of electrical
technology/metrology;
• Spline in the field of electrical
technology/metrology.
Medical industry
• Micro filter for acoustics, hearing aid;
• Implantable clip;
• Bearing shell/bearing cap;
• Sensor housing implantable;
• Aseptic expendable precision blade.
Optical and Electronic industries
• Coax plug/switch MID for mobile phone
• SIM card connector for mobile phone;
• Pin connector for mobile phone;
• Single mode and multi mode ferrules.
Table 1. Micro components applications
An open research issue in micro injection moulding is related to fabrication of parts with a
higher and higher aspect ratio (such as the micro bars in Fig. 2c). The aspect ratio of a shape is defined as the ratio of its longer dimension to its shorter one. It may be applied to two characteristic dimensions of a three-dimensional shape, such as the ratio of the longest and shortest axes. The aspect ratio achievable in replicating micro features is one of the most important characteristics of micro fabrication processes and constitutes a constraint in applying injection moulding. High Aspect Ratio (HAR) components can be found in many applications and therefore have to be investigated to break through previous barriers in miniaturization. Concerning achievable aspect ratios, there is a limitation which is a
function of the geometry of the micro-features, their position on the sample, the polymer
type and the process parameters [3]. The literature suggests that the critical minimum
dimensions which can be replicated successfully by injection moulding are mainly
determined by the aspect ratio. Polymeric materials with minimum wall thickness of 10 µm,
structural details in the range of 0.2 µm, and surface roughness of about Rz < 0.05 µm have
been manufactured [4].

Beyond geometry and HAR, physical phenomena that differ between the micro and macro worlds also have to be taken into account, for example the “hesitation effect”. This effect (Fig. 3) can occur during the filling of polymers, and it is common when an injection moulded part contains different thicknesses [5]. It may also take place when HAR microstructures (usually with aspect ratios larger than 2) are placed on a relatively thick substrate,
which is the case for example of microfluidic devices [6]. The polymeric melt tends to flow
more easily into cavities with relatively low resistance areas of greater cross section while
the flow stagnates at the entrance of micro-structures; the result is that the melt freezes in
this area because the filling time of the substrate is usually greater than the freezing time of
the micro feature. It was recommended in the literature that injection moulded parts with
HAR microstructures should have a thickness in which a quick filling of the substrate can
allow for filling of the micro-cavities before solidification starts [7]. In addition the literature
shows that, in unidirectional flow, the depth of filling in micro channels is sensitive to the
channel width [8].

Fig. 3. Hesitation effect of the melt flow in the proximity of micro channels
3. Design of components mouldable by micro injection moulding
Unlike conventional injection moulding, where manufacturability issues are considered in
product design phase, very little has been done so far for micro injection moulding. The
research community is still assessing the process capabilities. The open questions in micro
injection moulding are: how small can we go with the product? What is the maximum achievable aspect ratio? There is still no consolidated approach towards design for manufacturability.
Part dimensions, position and shape of the parting line, existence of undercuts, mould-
cavity features in addition to tolerances and surface finishing are commonly considered in
part design for conventional injection moulding. A number of studies have suggested
techniques to evaluate the complexity of injection moulded shapes with respect to
replication and demoulding [9,10]; but the overall small dimensions of micro moulded parts
do not always allow the use of the above mentioned strategies. In the following, the design
factors affecting the overall quality of a micro-injected part are critically discussed.
3.1 Mould cavity design
An important aspect to take into consideration in mould cavity design is related to the large
surface to volume ratio of many micro components leading to fast cooling or even freezing

of the injected melts into tools. Despite the fact that polymers have a low thermal
conductivity and usually show a ‘self-insulating’ effect, the injected material rapidly freezes on the tool wall and the microcavities may not be filled completely. As a consequence of the thin walls and large surfaces of micro components compared with their volume, the temperature of the material adapts to that of the mould within milliseconds.
The evacuation of the air from the mould cavity is another important issue for the
evaluation of the quality of produced micro component in order to prevent compression-
induced defects in the material. If the cavities contain micro features that are so small that
they cannot be vented in the standard way through the parting plane or special bore holes, it
is necessary to develop a system dedicated to the evacuation of the air from the cavity. Some
applications of creating a vacuum in the mould are reported in the literature [11,12,13].
In micro injection moulding it is quite difficult to design the cooling system because of the dimensions of the mould, where the cavity and the ejection mechanism are located within a few centimeters; this means that a temperature variation across the moulded part should be expected, depending on the geometry [14]. In any case, the literature shows that cooling of the mould is not always required, especially when it is desired to keep the mould temperature above the glass transition temperature (Tg), the temperature below which an amorphous material behaves as a glassy solid. Thermoplastic polymers may have a further characteristic temperature: a low temperature below which they become hard and brittle and tend to shatter easily. In addition, at temperatures greater than Tg, polymers have such flexibility and ability to undergo plastic deformation without fracturing, a characteristic that is particularly exploited in plastics technology.
Demoulding is another important aspect to take care of in micro mould design. A factor that affects demoulding is the orientation of the polymeric chains being injected, because this influences the direction in which shrinkage is most observed [3]. A useful geometrical method to obtain successful demoulding consists in the use of draft angles. A positive draft
angle, greater than ¼°, has been successfully used for demoulding in plastic micro injection
moulding [15].
The use of inserts is another typical application of the injection moulding process and it
becomes very important in micro injection moulding when, for example, micro cavities for
microfluidic applications are realized and then fitted in the main mould body. The main
goal of using mould with changeable inserts resides in the ability to test different micro-part
geometries (removable cavities) without discarding the basic structure of the mould,
specifically designed for micro-components injection [16]. The use of moulds with inserts
reduces the overall cost of process setup, where the finalized mould design is produced by a
number of iterative steps in which parts are injected and the mould design is changed [6].
The concept of replaceable cavities can be applied in design of mould for different
applications and the efficiency of the product development stage is greatly improved. The
inserts allow easy testing of the design prototypes especially in those products where clear
design guidelines are not available. Another advantage of using inserts is related to the
material with which they can be manufactured. In fact, the material can be different from the
one used for the mould, usually made of steel, and it can depend on the manufacturing
technology available and on costs.

Another special feature usually used in injection moulding, which is still under evaluation for micro injection moulding, is the system to measure the mould cavity pressure. In the literature, different methods have been proposed to measure the cavity pressure, for example a piezoelectric force transducer located behind the injection pin [2] or a miniaturized quartz sensor applied at the end of the sprue channel to measure the pressure directly in the micro mould cavity [17].
3.2 Micro component design
One of the main goals related to the design of a micro mouldable component is the
reduction of the shrinkage affecting shape stability in the form of induced warpage. The
warpage is due to the non-uniformity of the shrinkage induced by the complex thermal
variation inside the mould [14]. Warpage prediction is important for parts with relatively
large area compared to their thickness.
Different techniques have been suggested to decrease the effect of shrinkage:
• to increase the value of holding pressure, which, on the other hand, will also increase
stresses inside the part [18];
• to have a long cooling time so that the part can thermally equilibrate inside the mould
cavity and become approximately uniform [14];
• to increase the cycle time, as a trade-off of a long cooling time.
A second aspect that has to be considered is the geometrical configuration. In order to explain the dependence of the degree of filling on the distance from the gate, through which the polymer enters the cavity, the parameter “time to pressure” was introduced [19]. The measurement of this parameter, compared with the injection speed for sections of different thickness, demonstrates that the shear stresses, and accordingly the pressure drop required to fill the feature, are in general much higher than those required to fill the substrate.
Concerning aspect ratios, it was suggested that there is a limitation on the achievable aspect ratio [3]: the maximum achievable HAR is a function of the geometry of the micro-features, their position on the sample, the polymer type and the process parameters. As
suggested in the literature [20], standard testing shapes can be helpful in comparing filling of
structures with different wall thicknesses but the same aspect ratio. This will help in
investigating the relation between wall thickness and flow path length and their limits. They
can also be used for a wide range of polymers, since material properties affect flow behavior.
4. Moulding machine
The micro injection moulding technology was first implemented by modifying units of traditional injection moulding machines [21]. Later, special new micro injection machines were developed specifically addressing micro-moulded parts. In the conventional reciprocating screw injection moulding process, polymer materials are melted and injected into mould cavities through a screw-barrel system, and there are limitations regarding the reduction of the screw dimensions due to constructive problems. Moreover, cycle times are usually longer than necessary when conventional machines are used for micro injection moulding. At the moment, commercial micro moulding systems are produced by Ferromatik Milacron, Arburg and

Sumitomo Demag as microinjection units for conventional machines and Wittmann-
Battenfeld, Babyplast and Desma as dedicated micro injection moulding machines.
Ferromatik Milacron developed two types of microinjection units: a two stage injection unit
with an extruder and injection plunger and a fully electric injection unit with 14 mm screw.
Arburg launched its new micro-injection module, which operates with an 8 mm injection
screw that guarantees a high degree of dosing precision and it is combined with a second
screw, which is responsible for melting the material. Sumitomo Demag developed a
customized unit for shot weights of 5 g to 0.1 g. In addition Chang et al. [22] developed a
novel concept of micro-injection moulding system designed as a separated module, which is
a hot runner plunger-type injection moulding module and could be applied to small size
(30–100 t) reciprocating screw hydraulic or fully electric injection moulding machines.
Instead, the dedicated micro moulding machines use a separate screw or piston in the
plasticizing unit and a plunger injection system. The recently launched Wittmann-Battenfeld MicroPower is a modular, fully electric production cell in which plasticizing is realized by means of 14 mm extruder screws, piston injection by means of 5 mm pistons, and the maximum injection speed is 750 mm/s. The injection unit allows processing of all injectable materials with shot volumes of up to 3 cm³ and feeding of all common standard granulate sizes. The injection process guarantees processing of thermally homogeneous
melt, which ensures an outstanding quality for micro parts. Babyplast from Cronoplast is a
fully hydraulic machine and it is ideal for producing small and microscopic parts and
suitable for processing all injectable thermoplastic materials. The DesmaTec FormicaPlast
has a two-phase piston injection unit: pre-plasticization is realized with a 6 mm piston while a 3 mm piston is used for the high-precision injection [23]. Moreover, a fast electrical drive is used, ensuring high precision of control for the injection speed and the plunger position. The maximum injection pressure and injection rate of the machine are 300 MPa and 3.5 cm³/s, respectively. Finally, a prototype of a micro injection moulding machine was
built and tested at IKV-Institute of Plastics Processing at RWTH Aachen University [24]. For
this micro injection moulding machine, a concept using a two-plunger unit was followed:
during the plasticizing phase, the upper plasticizing plunger pushes resin through a die
heated at melting temperature as the injection plunger is cored back at the same time.
Injection follows when the desired shot volume is reached. A ball check valve between
injection plunger and metering plunger prevents the melt from flowing back into the
metering cylinder. Thermoset micro parts with a shot weight in the area of 0.05 g to 3.0 g can
be manufactured with this setup. These applications, though difficult for thermosetting
polymer grades, are advantageous in bio-medical applications.
A recently pursued objective is the realization of two-component injection moulding, which
allows for the production of multi-material and, hence, multi-functional micro
components modifying also the injection machine. The main technical challenges are the
process parameters which have to be suitable for both materials and the design of the
necessary moulding tools and machine which at least have to be equipped with two
feeder systems. In particular, the micro injection moulding can be used for the generation
and direct assembly of hybrid micro systems. With this process, a single process step leads to a compound part consisting of two thermoplastics or a thermoplastic and an insert part
(metal, silicon, glass, ceramic). Michaeli et al. [25] studied the generation of hybrid-micro

systems for medical applications. The part consists of a carbon-fibre reinforced PEEK puncture needle incorporating three lumens; in order to attach additional equipment, a plastic connector needs to be overmoulded on the needle. The investigation demonstrates that the resulting bond strength between needle and connector meets the required standard, even if the standard deviation between experiments is high.
Further perspectives are the manufacturing of micro joints by using polymers with
different shrinkage values and the production of microstructured preforms for a
subsequent electroplating process.
5. Analysis on the polymeric materials and their selection
Several polymeric materials have been used for producing micro moulded parts, thus
affecting the experimental results. The high shear rates occurring in the micro processes
encourage the use of materials that exhibit high shear thinning rheology, allowing cavity
filling at the lowest possible injection pressure [2]. The interaction between the type of polymer used and the quality of the moulded part makes it a challenging task to define the most suitable material for each application without testing it under different conditions. The most
common polymers used in micro injection moulding are reported in Table 2 [26,27].

POLYMER   FULL NAME                 APPLICATIONS
POM       Polyoxymethylene          Micro gears and micro filters
LCP       Liquid Crystal Polymer    Connectors, ferrules and microelectronic devices
PC        Polycarbonate             Optical applications such as lenses and sensor discs
PEEK      Polyetheretherketone      Micro bearings and pistons
PMMA      Polymethylmethacrylate    Optical fiber connectors
PA        Polyamide                 Micro gear wheels
PSU       Polysulfone               Housings for microfluidic devices
PE        Polyethylene              Components for micro actuators
PLA       Polylactic acid           Biodegradable implants

Table 2. Materials and applications for micro injection moulding
The properties of the chosen plastic, such as its flowability, heat transfer ability and cooling
shrinkage, affect moulding efficiency. Recent investigations report a series of measurements of melt viscosity within small-dimension geometries using high-fluidity amorphous ABS and PS resins [28,29], high- and low-density PE, as well as high-crystallinity POM resin [30].
From the measured pressure drop obtained from pressure transducers and melt volumetric
flow rate, it is possible to calculate the viscosity values. The investigation of ABS, PS and
POM resin found that as micro-channel size decreases, the percentage reduction in viscosity
value increases, when compared with data obtained from traditional capillary rheometer.
The ratio of slip velocity relative to mean velocity was found to increase as the size of the

micro-channels decreases for ABS and PS. It seems that wall slip plays a dominant role
when the melt flows through micro-channels, resulting in a greater apparent viscosity
reduction when the size of micro-channel decreases. In addition, the wall-slip effect becomes
more significant as melt temperature increases. For POM resin, as for PS resin within the micro-channels, both the percentage reduction in the viscosity value and the ratio of slip velocity to mean velocity increase with decreasing micro-channel size, but the effect appears to be less significant.
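Such viscosity values are typically reduced from the measured pressure drop and volumetric flow rate. A minimal sketch based on the standard slit-rheometry relations (wall shear stress ΔP·h/(2L), apparent shear rate 6Q/(w·h²)) is given below; the channel dimensions and readings are chosen only for illustration, and this is not the exact reduction procedure of refs. [28-30].

```csharp
// Apparent melt viscosity from pressure-drop / flow-rate data in a rectangular
// micro-channel, using the standard slit-rheometry relations (illustrative sketch):
//   wall shear stress     tau      = dP * h / (2 * L)
//   apparent shear rate   gammaDot = 6 * Q / (w * h^2)
//   apparent viscosity    eta      = tau / gammaDot
using System;

class SlitViscosity
{
    static double ApparentViscosity(double dP, double Q, double w, double h, double L)
    {
        double tau = dP * h / (2.0 * L);          // [Pa]
        double gammaDot = 6.0 * Q / (w * h * h);  // [1/s]
        return tau / gammaDot;                    // [Pa*s]
    }

    static void Main()
    {
        // assumed channel: 200 um deep, 1 mm wide, 10 mm long; assumed readings
        double eta = ApparentViscosity(dP: 5e6, Q: 1e-8, w: 1e-3, h: 200e-6, L: 10e-3);
        Console.WriteLine($"apparent viscosity = {eta:F1} Pa*s");
    }
}
```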
The viscoelastic nature of the polymeric melt becomes more significant at the micro scale
because of the high shear rates involved in, for example, narrow gates. It has been
mentioned in the literature that increasing the shear rate decreases the melt viscosity to
values that are different from those that may be specified in data sheets [31].
In order to obtain the required accuracy and prevent premature material freezing when
producing high-aspect-ratio micro features, materials with low melt viscosity are desirable.
Among the best candidates, thermotropic liquid-crystalline polymers (LCPs) are well
known for their low viscosity and their pronounced shear-thinning behaviour. Berton and
Lucchetta [32] also proposed the addition of LCP to improve the properties of Polyamide 66 (PA66). The results show that LCP strongly affects the rheology of the blend, lowering the shear viscosity and increasing the extensional viscosity. Most of the LCP effect in decreasing the PA66 viscosity is achieved at a content of 10% by weight.
Another important aspect that has to be considered is the skin–core crystalline morphology
behaviour of injection-moulded semi-crystalline polymers. Once a plastic fills a mould, the
plastic should have enough heat transfer so parts do not warp because of differential cooling
in the mould. A relatively uniform mould temperature also helps optimum part
characteristics to develop as crystalline resins crystallize or amorphous ones anneal. Mould
cavities are sized to account for shrinkage as a thermoplastic solidifies from a shot, so
finished part dimensions fall within tolerances. The skin–core crystalline morphology of
semi-crystalline polymers is well documented in the scientific literature. Crystalline
morphologies of a high-density polyethylene (HDPE) micro-moulded part and a classical
part are compared with different techniques [33,34]. Results show that the crystalline
morphologies vary between the two parts. While a ‘skin–core’ morphology is present for the
macropart, the micro-part exhibits a specific ‘core-free’ morphology, i.e. no spherulite is
present at the center of the thickness. In fact, the high flow strength and cooling rates
promote the homogeneity of the morphology through the thickness, with a flow-induced
crystallization. As a result, highly oriented structures are created within the micro-part,
conferring anisotropy to the final product. This could be a challenge to overcome, as this
anisotropy affects both polymer shrinkage and the overall final part behaviour. The results
of Lu and Zhang [35] show that all types of manufactured micro columns (φ60, 90, 110, and
130 μm) present a “skin-core” structure composed of skin layer, shear zone with column
crystal, and spherulites core. PP spherulite size diminishes gradually with the decrease of
diameter of the manufactured micro columns. Different structures of micro columns have
different hardness and modulus and the hardness and modulus of the same column
increase gradually from core zone to skin layer.

In the field of sustainability, and with the ever-increasing price of oil, the use of recycled polymers has to be promoted and is becoming an economical alternative for injection moulding. In particular, polyolefins represent the largest plastics constituent in the
municipal waste stream (high-density polyethylene-HDPE bottles). Recycling of these
containers yields a stream of recycled plastic that is highly homogeneous and consistent [36]
and the resultant recyclate has essentially the same rheological properties as the virgin resin.
Therefore, a possibility could be the recycling of HDPE into products manufactured by
injection moulding. Nevertheless, HDPE has a very high melt viscosity and usually recycled
polymers are blended with virgin polymers to obtain the best trade-off between cost and
low melt viscosity. In the literature [37], a new approach to the optimization of blend composition in the injection moulding of recycled polymers has been proposed for the macro world, but in the near future it should be extended also to meso and micro injection moulding.
Recently, the use of plastic materials with added reinforcing fillers has become a potential alternative approach due to their high strength and the ease of batch fabrication. The use of filler materials can improve the mechanical performance of the resins, but the small feature dimensions present in micro mould cavities preclude the use of conventional fillers,
such as glass or carbon fibres. Nano fillers such as exfoliated clay platelets, polyhedral
oligosilesquioxanes (POSS) and carbon nano tubes show potential for use in the micro
moulding environment [2]. The addition of montmorillonite nano clays to polymer
systems has emerged as a viable method to improve mechanical, barrier and flame-
retarding properties [38]. The maximum benefits of clays, however, are only realized if
care is taken to disperse the platelets evenly throughout the material (exfoliation).
Exfoliation is best achieved through pre-polymerization dispersion of the clay in the
monomer, but can also be achieved by shear-driven melt processing (usually extrusion).
Dispersion of the nano tubes, within a polymer matrix, is possible using conventional
polymer processing technology. The polymer with added nanomaterials effectively
increased the hardness achieved [39]. In addition, a nanoceramic material, such as ZnO,
improved wear resistance by 70% when nanoparticles were uniformly dispersed in the
polymer and a suitable surfactant solvent was chosen. However, wear resistance
decreased significantly if the nanoparticles were not processed well and a proper
surfactant solvent was not chosen. Other results [40] show that the polymer degradation
during compounding affects the plasticizing behaviour and provokes a reduction of the
Charpy impact strength when nanosized c-alumina particles were added to
polycarbonate. Although the Young’s modulus remained almost constant, the impact
strength as well as the glass transition temperature were reduced with increasing
nanofiller content, which can be attributed to polymer chain degradation effects.
The possibility of using biodegradable polymers is also a frontier in micro injection
moulding that has received attention from many scientists [41]. For the past two decades,
researchers in pharmacy, chemical engineering, and other disciplines have striven to design
biodegradable polymers with desired degradation mechanisms and mechanical properties.
These polymers can be used, for instance, as drug carriers: they have advantages over other
carrier systems in that they need not be surgically removed when drug delivery is
completed and that they can provide direct drug delivery to the systemic circulation. The

drug and polymer may be combined in a number of different ways depending upon the
application of interest. Biodegradable polymers for controlled drug delivery usually contain poly(lactic acid), poly(glycolic acid) or their copolymers.
Plastic selection is a complex task that involves many considerations not limited only to the
material properties, such as:
1. Temperature: looking at thermal stress during normal and extreme end-use conditions,
as well as during assembly, finishing and shipping.
2. Chemical resistance: evaluating the effect on the part of every solid, liquid or gas that
can contact it.
3. Standardization: factor in governmental and private standards for properties such as
heat resistance, flammability, and electrical and mechanical capabilities.
4. Assembly: ensure the proposed plastic works with all assembly steps, such as solvent
bonding, mechanical fasteners or ultrasonic welding.
5. Finishing: also ensuring the plastic can provide the desired gloss, smoothness and
other appearance values as it comes from the mould or that it can be finished
economically.
6. Other conditions: considering all other items relevant to fabrication, assembly and end
use. These include maximum loads, deflections and other mechanical stresses, relative
motion between parts, electrical stresses, color and tolerances.
7. Cost: using total finished-part cost to guide design. In addition to resin pricing, factor in
manufacturing, maintenance, assembly and disassembly to reduce labor, tooling,
finishing and other costs.
8. Availability: make sure the resin is available in the amount needed for production.
Summarizing previous considerations, the most innovative frontiers in the research about
materials are [42]:
• biocompatible materials;
• novel polymers especially nanocomposites;
• controlled architecture polymers, plus ceramic and metal powder formulations;
• recycled polymers.
On the other hand, polymers have some limitations related to their properties or
manufacturing processes. These include, for example, limited operation-temperature range,
high auto-fluorescence and a limited set of well-established surface modification techniques [6], limitations that have still to be overcome.
6. Process parameters influence on components quality and their
optimization
Determining the most effective processing conditions for micro injection moulding was the
subject of many studies, which used different experimental conditions and test parts. It has
been shown that the main process parameters affecting the part quality include:
• Mould temperature;
• Melt temperature;
• Injection speed;

• Injection pressure;
• Holding time;
• Holding pressure;
• Cooling time.
Quality parameters in the micro injection moulding are usually associated with the ability to
completely fill the micro size cavities of the mould during processing, even if this process
could require a number of quality criteria to be met simultaneously. Quality responses are
usually associated with the evaluation of the replication by complete filling of the mould
cavity. The most widespread responses reported in literature include filling quality of micro
sized channel [43], feature dimension [44,45], part mass [46], flow length [47], filling volume
fraction [48], weld-line formation [49], demoulding forces [50], mould cavity pressure [51,
52], and minimizing injection time, pressure and temperature distribution using a three-
dimensional simulation packages [53]. The different chosen responses of statistical studies
can lead to different main results. Huang et al. [54] applied the robust parameters design to
the fabrication of a micro gear and found that the significant parameters for diameter
dimensions are mould temperature, injection speed and holding pressure whereas for tooth
thickness are holding pressure, cooling time and mould temperature.
Not only the process-parameters but also part geometry affects the quality of filling for
micro parts. Especially for complex parts, some results showed that the holding pressure can be a significant process parameter for different shapes, as can the injection speed and mould temperature [46]. Song et al. [55] performed injection moulding experiments and numerical simulations on ultra-thin-wall plastic parts. Ultra-thin-wall plastic parts have great application potential in MEMS, even though the process becomes difficult and complicated as the part thickness is reduced. The results show that part thickness is a decisive parameter, because the filling capability of the melt declines rapidly with decreasing part thickness; metering size and injection rate are the principal factors in ultra-thin-wall injection moulding, and an appropriate metering size and an accelerated injection rate are necessary conditions for successful moulding.
Different authors report that increasing parameter values can usually improve the quality of the filled part; in particular, increasing the temperatures (barrel and mould) and the injection speed improves the filling of the polymer melt in micro-cavities, even if the time needed to heat up and cool down the mould is then longer [43,44,56,57]. Moreover, Zhao et al. [58] found that metering size and holding pressure time are the process parameters with the most significant effects on part quality, but the process is also significantly affected by the interaction of these two parameters, which has to be taken into account.
The interaction of the mould surface roughness with the process is also of paramount importance. Griffiths et al. [47] studied the factors affecting the flow behaviour in the interaction between the melt flow and the tool surface; PP, ABS and PC polymers were employed to perform moulding tests using cavities with the same geometry but different surface finishes. It was found that there is a relationship between the tool surface finish and the level of turbulence in the melt flow. The trials for all three materials in the cavity with the highest surface finish indicate the existence of two distinctive phases in the polymer flow, while the patterns are mixed and not so clear for the other two cavities.

As mentioned above, quality factors related to cavity pressure can provide useful
information directly connected with the dynamics of the process as well as with the filling of
the cavity by the polymer melt. Griffiths et al. [51] report an experimental study on the manufacture of microfluidic parts in three different polymers, studying four parameters (melt temperature, mould temperature, injection speed, and packing pressure). In order to predict the pressure state of the polymer inside the mould cavity, a condition monitoring system was set up to conduct various pressure measurements. Two parameters derived from the cavity pressure data collected by a pressure sensor were defined: the pressure increase rate during filling and the integral of pressure over time (i.e. the pressure work). Similar trends were found for all three materials: a higher injection speed decreases the pressure work, and a lower mould temperature decreases the pressure rate.
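Both indicators can be computed directly from a sampled cavity-pressure trace. The sketch below assumes a fixed sampling interval and a simple threshold to detect the start of filling; it illustrates the two definitions and is not the monitoring system of ref. [51].

```csharp
// Pressure-derived quality indicators from a sampled cavity-pressure trace:
// pressure increase rate during filling and pressure work (integral of pressure over time).
using System;
using System.Linq;

class CavityPressureIndicators
{
    // Pressure rise rate [bar/s]: slope between the start of the pressure rise and the peak.
    static double PressureRiseRate(double[] p, double dt)
    {
        int iPeak = Array.IndexOf(p, p.Max());
        int iStart = Array.FindIndex(p, v => v > 0.05 * p[iPeak]);  // 5%-of-peak threshold (assumed)
        return (p[iPeak] - p[iStart]) / ((iPeak - iStart) * dt);
    }

    // Pressure work [bar*s]: trapezoidal integral of pressure over time.
    static double PressureWork(double[] p, double dt)
    {
        double work = 0.0;
        for (int i = 1; i < p.Length; i++) work += 0.5 * (p[i - 1] + p[i]) * dt;
        return work;
    }

    static void Main()
    {
        double[] p = { 0, 2, 15, 80, 240, 310, 300, 260, 200, 120, 40, 5 };  // [bar], assumed trace
        double dt = 0.01;                                                    // 10 ms sampling (assumed)
        Console.WriteLine($"rise rate = {PressureRiseRate(p, dt):F0} bar/s, " +
                          $"pressure work = {PressureWork(p, dt):F2} bar*s");
    }
}
```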
Also the Institute of Plastics Processing at RWTH Aachen University [52] developed a
system that directly controls the quality-determining variable, the cavity pressure, and realizes a desired course of cavity pressure in the injection and holding pressure phases. The cavity
pressure course in the holding pressure phase is controlled online on the basis of pvT
behavior of the processed plastic material. The pvT optimization of the holding pressure
phase enables a balancing of disturbance variables on the process through an active
adaptation of the pressure course. In addition, the optimization is also capable of almost
entirely compensating the influence of the melt and mould temperature changes on the
moulded part weight. The direct control of the cavity pressure in combination with the pvT
optimization in the holding pressure phase ensures increased robustness against
disturbance variables caused by process fluctuations.
The final stage of process parameters investigation in micro injection moulding is the
optimization. Different tools can be applied for parameter optimization. Attia et al. [46] applied response surfaces and desirability functions to minimize process variation. As a result, they showed that increasing the melt temperature decreased the standard
deviation in part mass. Ozcelik and Erzurumlu [59] proposed an efficient optimization
methodology using artificial neural network and genetic algorithm to minimize the warpage
of thin shell plastic parts. The results indicate that packing pressure, mould temperature,
melt temperature, packing time, cooling time, runner type and gate location influence
warpage by 33.7, 21.6, 20.5, 16.1, 5.1, 1.5, and 1.3% respectively.
7. Simulation
The process design of micro injection moulding involves the determination of a number of
processing parameters like pressure (injection, holding, and melt), temperature (coolant,
nozzle, barrel, melt and mould), time (fill, holding, cooling and cycle), clamping force, injection
speed, injection stroke, etc. In such process, due to the irregular geometry in micro scale and
the complex thermo-mechanical history during the injection moulding cycle, it is generally
necessary to resort to numerical simulation methods to properly simulate the moulding
process and develop the capability of predicting the final configuration of the moulded part.
Nowadays, one of the main challenges related to the micro injection moulding technology is
the possibility to simulate the process. The main goals, that researchers all over the world
try to achieve, can be summed up in the following steps [6]:

• Visualization of the flow and prediction of the last-filled sections of the mould. A
method to evaluate all these aspects is the short-shots method, in which the mould is
filled with different amounts of material in order to evaluate the distribution of the flow
during the injection phase. This method is useful to identify some defects that are
usually in the last filled parts like incomplete filling, weld lines and voids.
• Optimization of the design of the moulds before manufacturing in order to prevent
high cost of reconstruction or remaking. The simulation approach would be very useful
to try different geometrical designs, sprue and gating systems, flow-paths to determine
the optimum mould design.
• Simulation of the thermal conditions of the flow during filling and cooling which would
be useful in estimating the cycle time and determine the critical processing areas.
• To identify post-processing properties, such as residual stresses, shrinkages and
warpage. In fact, during the micro injection moulding process the material is subjected to increasing pressure and temperature due to significant shear deformation, followed by a rapid decay of temperature and pressure in the mould cavity. This leads to solidification, high residual stress and complex molecular orientation, which determine the moulded part quality.
• Supporting the experimentation and in particular the design of experiments in
determining the most influential processing parameters on the part quality.
Several factors affect the accuracy of process modelling [60,61]. For micro-injection
moulding, three-dimensional modelling becomes significant because, on the micro-scale, it
is not possible to approximate the melt as flowing between two parallel plates, as is usually
done in conventional injection moulding. Mesh elements should also be chosen carefully:
two-dimensional elements (such as shell elements) give over-predicted filling.
The Hele-Shaw approximation is also commonly used to model the injection moulding
process, providing simplified governing equations for non-isothermal, non-Newtonian
and inelastic flows in a thin cavity. It has also been applied to simulate micro-injection
moulding, but it does not allow some specificities of the micro-injection process to be
modelled, such as fountain flow, jetting, particle tracing, filler/matrix separation and
transverse pressure gradients. In addition, this approximation oversimplifies the flow near
corners, bifurcations and changes in the part thickness. When the Hele-Shaw model is
applied to micro injection moulding, some of its assumptions need to be revised compared
to conventional injection moulding; for example, the pressure at the flow front might not
be zero, since surface tension produces extra pressure, and the frozen layer of polymer
melt near the mould wall may slide due to the high shear stress resulting from the high
shear rate.
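For reference, the Hele-Shaw (thin-cavity) model mentioned above reduces the momentum and continuity equations to a single Poisson-type equation for the cavity pressure. A standard statement of this approximation, given here only as background (with p the pressure, b the cavity half-gap, η the shear viscosity and T, γ̇ the temperature and shear rate), is, in LaTeX notation:

    \nabla \cdot \left( S \, \nabla p \right) = 0, \qquad S(x, y) = \int_0^{b} \frac{z^{2}}{\eta\left(T, \dot{\gamma}\right)} \, dz

The simplifying assumptions discussed above (zero pressure at the flow front, no slip of the frozen layer, negligible surface tension) enter this model through its boundary conditions, which is why they need to be revised at the micro-scale.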
Some effects, that are neglected in conventional injection moulding, become significant in
the micro-scale due to the increased surface-to-volume ratio, such as surface roughness,
surface tension, heating of the melt by viscous friction and cooling of the melt front due to
increased heat loss. In addition, models should account for the differences in dynamics of
heat and mass transfer in the micro-scale. The heat transfer coefficient between the polymer
and the mould, for example, was shown to be significant on the micro-scale [62].

By using precise material data and considering the melt compression in the barrel, the actual
volume rate and the temperature of the melt at the entrance of the cavity can be correctly
calculated. The heat transfer coefficient increases with decreasing cavity thickness or injection
speed. It is believed that the pressure level in the cavity is mostly responsible for the thermal
contact between the polymer and the mould wall. A pressure-dependent model for the heat
transfer coefficient would be more suitable to describe the thermal contact behavior in micro
injection moulding, especially in the case of micro-cavities with a high aspect ratio. To take this
phenomenon into consideration in the numerical simulation, three different aspects have to
be considered: surface roughness of the mould, material properties of the polymer in the
molten and solidifying state, as well as the pressure distribution along the mould wall [17].
Special processing conditions, such as Variotherm processes or air evacuation, should also
be considered in the modelling.
In a moulding simulation, tracking the advancing flow front is a key issue. The volume of
fluid (VOF) method and the level set method (LSM) have been widely adopted for a variety
of applications including boiling, casting, different moulding processes and broken column
flows since they can be easily incorporated with a fixed grid system [63,64]. Each method
has its own strengths. The LSM has better performance at curvature representation while
the VOF method is stronger in cavity filling prediction. For simulation of slip and surface
tension, the surface curvature is more important [65].
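As a reminder of how these front-capturing schemes work (standard textbook formulations, not specific to the cited works), the level set method transports a signed distance function φ whose zero level set is the melt front, while VOF transports the local fluid volume fraction F; for an incompressible velocity field u both obey the same advection equation, in LaTeX notation:

    \frac{\partial \phi}{\partial t} + \mathbf{u} \cdot \nabla \phi = 0, \qquad \frac{\partial F}{\partial t} + \mathbf{u} \cdot \nabla F = 0

The front curvature needed for surface-tension terms is then computed from the level set function as κ = ∇·(∇φ/|∇φ|), which is consistent with the observation above that the LSM is preferable when slip and surface tension must be modelled.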
In literature, two different approaches can be found regarding the choice of simulation
packages: the first is to develop in house finite element codes specifically for simulating
micro injection moulding [60], while the second approach is to enhance the commercially
available software packages for conventional injection moulding, in order to accurately
simulate micro injection moulding [66].
Some packages over-predict the filling of the cavity; other packages instead give acceptable
qualitative simulation results, but fail to give reliable quantitative values [67,68,69,70].
Moreover, recent CAE tools provide convenient interfaces to user codes that facilitate
implementing user material models and boundary conditions [71]. However, a better
understanding of the heat transfer phenomena at the micro-scale is necessary for predicting
the phase change and morphology evolution while the melt fills the cavity.
8. Case study
In this section, the injection moulding process of a micro part (a miniaturized dog bone
shaped specimen for tensile tests) is presented and discussed as a case study carried out by
the authors.
Micro electro discharge machining technology (using a Sarix SX200 available at the CNR-ITIA
premises) [72] was used to prepare the mould for the micro injection production of the
specimen under investigation. The geometry and dimensions of the specimen are illustrated
in Fig. 4. This part is representative of micro moulding because it has features in the order
of micrometers and a part weight of a few milligrams, falling in the category of micro
moulded products.





Fig. 4. Project design and dimensions (mm) of the specimen.
The experimentation was divided into three steps: a screening phase to identify the
working technological window, an experimental plan including only the most influential
parameters resulting from the screening, and finally the optimization [73]. All the tests
were carried out in a climatic chamber set at 20°C and RH 50% with the FormicaPlast 1k
machine by DesmaTec. The polymers chosen for this study are polyoxymethylene (POM,
BASF Ultraform N2320 003) and liquid crystal polymer (LCP, Ticona Vectra E130i); these
two grades were selected for their properties and suitability for micro moulding.
Before moulding, POM was preconditioned at 110 °C for 3 hours and LCP at 150 °C for 4
hours.
The process parameters systematically investigated were: injection speed (Vinj), melt
temperature (Tm), mould temperature (Tmo), holding time (th), and holding pressure (Ph).
All control parameters, together with their interactions, were treated as factors potentially
affecting the part mass, which was chosen as the quality response together with the
corresponding standard deviation. Part mass gives information about the filling
quality of the specimen while the standard deviation of part mass gives information about
the variability of the process.
To assess the effects of the selected parameters on the micro injection moulding, the design
of experiment (DoE) approach was applied. In particular, a two-level five-factor randomized
half fractional factorial design of resolution V (2^(5-1)) was chosen and the experiments were
conducted in a randomized sequence. The chosen plan provided sufficient information
about single-factor and two-factor interaction effects. This allowed a relatively small
number of experiments to be undertaken without compromising the accuracy of the results.
Table 3 presents the levels of the five factors for the tested component; a minimal
construction of such a half-fraction design is sketched after the table.

Factor   Description                POM Low (-1)   POM High (+1)   LCP Low (-1)   LCP High (+1)
Vinj     Injection speed (mm/s)     100            150             100            150
Tm       Melt temperature (°C)      190            230             335            345
Tmo      Mould temperature (°C)     60             100             80             120
Th       Holding time (s)           1              3               1              3
Ph       Holding pressure (bar)     500            1500            500            1500
Table 3. Experimental factors and levels
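As an illustration of how such a resolution V half fraction can be constructed, the following Python sketch builds the 16 coded treatments with the usual generator E = ABCD (defining relation I = ABCDE) and maps them to the POM levels of Table 3. This is only a minimal reconstruction under that assumed generator; the actual plan was run in randomized order and included a centre point.

    from itertools import product

    # 2^(5-1) half fraction with assumed generator E = ABCD (resolution V).
    # Factor levels for POM taken from Table 3.
    factors = ["Vinj", "Tm", "Tmo", "Th", "Ph"]
    levels_pom = {
        "Vinj": (100, 150),   # injection speed, mm/s
        "Tm":   (190, 230),   # melt temperature, °C
        "Tmo":  (60, 100),    # mould temperature, °C
        "Th":   (1, 3),       # holding time, s
        "Ph":   (500, 1500),  # holding pressure, bar
    }

    runs = []
    for a, b, c, d in product((-1, 1), repeat=4):
        e = a * b * c * d                          # generated column: E = ABCD
        coded = dict(zip(factors, (a, b, c, d, e)))
        physical = {f: levels_pom[f][1] if coded[f] > 0 else levels_pom[f][0]
                    for f in factors}
        runs.append(physical)

    for i, run in enumerate(runs, 1):              # 16 treatments
        print(i, run)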
For each run, the first 10 injection cycles were discarded in order to stabilize the process;
then 10 parts were collected and their masses were measured. Each treatment of the
designed experiments was repeated three times in a completely randomized order. With the
aim of minimizing interference from external variability sources, the same mould was used
during all experiments without dismounting, and the same batches of polymers were
utilized. The quality and the variability of the product were evaluated by measuring the
masses of ten samples of each treatment and the corresponding standard deviation. The
mass of the moulded parts was measured just after ejection from the mould cavity. The
stabilization and maximization of part mass in general indicates stabilized processing
conditions [74]. A sensitive weighing scale (Gibertini E154) with an accuracy of 0.1 mg was
used to weigh the parts. Data analysis was conducted with statistical software
Minitab®. Figs. 5 and 6 show the average masses of the samples in run order for the three
replicates. Vertical lines represent the standard deviations of the corresponding repeats for
each of the 16 treatments plus the centre point.
It has been observed that, both for POM and LCP, the trends of the masses are quite similar
and the corresponding standard deviation values are similar too; furthermore, the larger the
standard deviation, the larger the difference between the average mass values, as expected.
It follows that the replicability and the repeatability of the process achieved are very high.
The results of the experimental design analysis showed that the holding pressure is the main
factor influencing the process. This result emphasizes the importance of a correct holding
phase in micro injection moulding to allow the complete filling of the mould before freezing
and, hence, the desired increase of the specimen mass.
Contrary to the mass response, the main parameter that influences variability is the melt
temperature for both polymers. An increase of the melt temperature improves the
polymer flow due to a reduction of the material viscosity and shear stress, hence these
conditions help to reduce the variability of the process and of the products.


Fig. 5. Average mass for each treatment of 3 replications (replicate 1 in red, replicate 2 in
blue, replicate 3 in green) - POM parts



Fig. 6. Average mass for each treatment of 3 replications (replicate 1 in red, replicate 2 in
blue, replicate 3 in green) for LCP parts

The final experiments were carried out with the aim of optimizing the process parameters
according to both responses adopted in the implemented DoE, i.e., the part mass and the
corresponding standard deviation. Optimization was carried out using the desirability
function approach to identify the optimum parameter levels (a minimal sketch of this
approach is given after Table 4). The optimized process parameters are reported in Table 4
for both POM and LCP; the improvements in the mass and in the corresponding standard
deviation were confirmed. Considerable improvements are observable in particular for
POM; in fact, the average mass increased by about 4.5% for POM and by 2.7% for LCP,
while the reduction of the standard deviation is similar for both materials.

                    All runs                              Optimized runs
Material   Mass average (mg)   Std. deviation (mg)   Mass average (mg)   Std. deviation (mg)
POM        68.85               1.681                 71.95               0.097
LCP        83.29               1.609                 85.54               0.250
Table 4. Mass results for different process parameters and for the optimized process
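The desirability-function approach combines the two responses into a single index to be maximized (larger-the-better for the part mass, smaller-the-better for its standard deviation), which is then typically maximized over the fitted response surfaces. The following Python sketch only illustrates the combination step; the acceptability bounds below are hypothetical, since the ones actually used are not reported here, and the optimized POM values of Table 4 appear merely as example inputs.

    def d_larger_is_better(y, low, high, s=1.0):
        # individual desirability for a response to maximize (part mass)
        if y <= low:
            return 0.0
        if y >= high:
            return 1.0
        return ((y - low) / (high - low)) ** s

    def d_smaller_is_better(y, low, high, s=1.0):
        # individual desirability for a response to minimize (mass std. dev.)
        if y <= low:
            return 1.0
        if y >= high:
            return 0.0
        return ((high - y) / (high - low)) ** s

    def overall_desirability(mass, std, mass_bounds, std_bounds):
        d1 = d_larger_is_better(mass, *mass_bounds)
        d2 = d_smaller_is_better(std, *std_bounds)
        return (d1 * d2) ** 0.5   # geometric mean of the individual desirabilities

    # Hypothetical bounds; POM optimized values of Table 4 used as example inputs
    print(overall_desirability(71.95, 0.097, mass_bounds=(68.0, 72.0), std_bounds=(0.05, 1.7)))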
For the POM material, tensile tests were then performed using a Shimadzu EZ-S tensile test
machine set up in micro-test configuration (200 N load cell). The speed of the translating
upper slide was set to 5 mm/s. Cross-section areas were measured for each specimen before
the test, obtaining values in the range 1.45±0.01 mm².
The strain at break was calculated as the ratio between the elongation and the initial length
of the specimen free from the grips (4.5 mm). This is the region of the sample with a
constant section and where the deformation occurs.

Fig. 7. Force vs displacement curves of three samples: type 1-ductile tensile behaviour with
local striction, type 2-brittle behaviour with small deformation at break and type 3-very
ductile with long striction and strain hardening.

Three main behaviours of the deformation of the material have been observed. In Fig. 7 the
force versus displacement curves of three samples (type 1, 2 and 3) with very different
trends are plotted, showing that the process parameters significantly affect the tensile
behaviour. The sample of type 2 breaks after only 2.10 mm of elongation (about 40%), with
a behaviour typical of almost-amorphous plastic materials, whereas the sample of type 3
can elongate up to about 300% and shows both long striction and strain hardening
phenomena. Finally, the sample of type 1 shows an intermediate behaviour with an
elongation of around 100%.
In Fig. 8, the SEM images of the three types of breaks are shown: type 1 almost brittle, type 2
ductile with striction and type 3 very ductile with long striction and strain hardening.


(a) (b) (c)
Fig. 8. SEM images of the break of a sample type 1 (a), type 2 (b), type 3(c)
9. Conclusions
The micro injection moulding process is becoming increasingly important for the
manufacturing of polymeric micro-components. This technology is expected to play a
fundamental role in the near future in sustaining the growing demand for miniaturized
components in biomedical, optical, and IT applications, thanks to the following advantages:
• low cost and short cycle times, suitable for mass production;
• the increasing capacity to achieve components with high aspect ratios and micro
dimensions with demanding fabrication tolerances;
• the ability to process polymers with a wide range of properties according to the
requested functionality.
Several issues still have to be addressed, as evidenced by this review: the standardization of
the process and the best approach to follow according to part geometry or chosen polymer.
Research in micro injection moulding is developing quickly and seems able to rapidly
overcome most of the current technological limits by developing new materials, process
control, simulation techniques, and quality testing methods.
10. Acknowledgement
This research has been supported by the project REMS (‘Rete lombarda di eccellenza per la
meccanica strumentale e laboratorio esteso’), funded by Lombardy Region under the

framework ‘Promozione accordi istituzionali’. The collaboration of Eng. A. Bongiorno and
Dr. C. Pagano for the tensile tests is also kindly acknowledged.
11. References
[1] Min B.H., 2003, A study on quality monitoring of injection-molded parts, J Mat Proc
Tech 136, pp. 1
[2] Whiteside B.R., Martyn M.T., Coates P.D., Allan P.S., Hornsby P.R., Greenway G., 2003,
Micromoulding: process characteristics and product properties, Plastic Rubber and
Composites, 32, 6, pp. 231-239
[3] Heckele M., Schomburg W., 2004, Review on Micro Molding of Thermoplastic Polymers,
J. Micromech Microengineering, 14, 3
[4] Piotter V., Mueller K., Plewa K., Ruprecht R., Haußelt J., 2002, Performance and
simulation of thermoplastic micro injection molding, Microsystem Technologies, 8,
6, pp.387-390
[5] Yao D., Kim B., 2002, Injection molding high aspect ratio microfeatures, J Inject Molding
Technol, 6, 1, pp. 11-17
[6] Attia U.M., Marson S., Alcock J.R., 2009, Micro-injection moulding of polymer
microfluidic devices, Microfluid Nanofluid, 7, pp. 1-28
[7] Rötting O., Röpke W., Becker H., Gärtner C., 2002, Polymer microfabrication
technologies, Microsystem Technologies, 8, 1, pp. 32-36
[8] Yu L., Koh C., Lee L., Koelling K., Madou M., 2002, Experimental investigation and
numerical simulation of injection molding with micro-features, Polym Eng Sci, 42,
5, pp. 871-888
[9] Hoffmann W., Bruns M., Büstgens B., Bychkov E., Eggert H., Keller W., Maas D., Rapp
R., Ruprecht R., Schomburg W.K., Süss W., 1995, Electro-chemical microanalytical
system for ionometric measurements, Proc. of the mTAS ’94 - MicroTotal Analysis
Systems Workshop, A. Van den Berg publisher, University of Twente, Enschede
NL, November 21-22, 1995, Kluwer Acad. Publ., pp. 215-218
[10] Dittrich H., Wallrabe U., Mohr J., Ruther P., Hanemann T., Jacobi O., Müller K., Piotter
V., Ruprecht R., Schaller T., Zißler W., 2000, RibCon-Steckverbinder für 16
Multimode-Fasern, FZKA-report 6423, Forschungszentrum Karlsruhe, D
[11] Ruprecht R., Bacher W., Haußelt J.H., Piotter V., 1995, Injection molding of LIGA and
LIGA similar microstructures using filled and unfilled thermoplastics, Proc. SPIE
2639, pp. 146
[12] Hagmann P., Ehrfeld W., 1989, Fabrication of Micro structures of Extreme Structural
Heights by Reaction Injection Molding, Int. Polymer Processing IV, 3, pp. 188-195
[13] Ruprecht R., Piotter V., Benzler T., Hausselt J., 1998, Spritzgießen von Mikrobauteilen
aus Kunststoffen, Metallen und Keramiken, FZKA- report 6080,
Forschungszentrum Karlsruhe, D, pp. 83-88
[14] Greener J., Wimberger-Friedl R., 2006, Precision injection molding: process, materials,
and applications, Hanser Gardner Publications, Cincinnati
[15] Surace R., Trotta G., Bongiorno A., Bellantone V., Pagano C., Fassi I., Micro injection
moulding process and product characterization, Proc. of the 5th International
Conference on Micro- and Nanosystems IDETC/MNS 2011, August 28-31, 2011,
Washington, DC, USA
[16] Marquez J.J., Rueda J., Chaves M.L. , 2009, Design and manufacturing of a modular
prototype mold to be employed in micro injection molding experiments, CPl 181, 3rd
Manufacturing Engineering Society International Conference, edited by V. J. Segui
and M. J. Reig, American Institute of Physics, pp. 353-360
[17] Nguyen-Chung T., Juttner G., Loser C., Pham T., Gehde M., 2010, Determination of the
heat transfer coefficient for short-shot studies and precise simulation of
microinjection molding, Polymer Engineering and Science ,165, pp. 173
[18] Giboz J., Copponnex T., Mélé P., 2007, Microinjection molding of thermoplastic
polymers: a review, J. Micromech. Microengineering, 17, 6
[19] Wimberger-Friedl R., Balemans W., Van Iersel B., 2003, Molding of microstructures and
high aspect ratio features, Proc. of the Annual Technical Conference - ANTEC 2003,
4-8 May 2003, Nashville, TN
[20] Kemmann O., Schaumburg C., Weber L., 1999, Micro moulding behaviour of
engineering plastics, Proc. of SPIE 20 - The International Society for Optical
Engineering, 30 Mar-1 Apr 1999, Bellingham, WA, US, pp. 464-471
[21] Piotter V., Bauer W., Benzler T. and Emde A, 2001, Injection molding of components for
microsystems, Microsyst Technol 7, pp. 99-102
[22] Chang P.C., Hwang S.J., Lee H.H., Huang D.Y, 2007, Development of an external-type
microinjection molding module for thermoplastic polymer, J Mater Process Tech
184, pp. 163-172
[23] Dormann B., 2009, 2K – Micro injection molding with formicaPlast “Industrial solution
for precise mass production of micro parts”, Proc. of 4M ICOMM 2009, DOI:
10.1243/ 17547164C0012009072, pp.347-349.
[24] Michaeli W., Kamps T., 2007, Design of a micro injection moulding machine for
thermosetting moulding materials, Proc. of 4M 2007
[25] Michaeli W., Opfermann D., Kamps T., 2007, Advances in micro assembly injection
moulding for use in medical systems, Int J Adv Manuf Technol, 33, pp. 206-211
[26] Battenfeld Micro Molding, Microsystem presentation, available on www.battenfeld-
imt.com
[27] König C., Ruffieux K., Wintermantel E., Blaser J., 1997,Autosterilization of
biodegradable implants by injection molding process, J Biomed Mater Res, 38, pp.
115–119
[28] Chien R.D., Jong W.R., Chen S.C., 2005, Study on rheological behavior of polymer melt
flowing through micro-channels considering the wall-slip effect, J Micromech
Microeng, 15, pp. 1389
[29] Chen S.C., Tsai R.I., Chien R.D., Lin T.K., 2005, Preliminary study of polymer melt
rheological behavior flowing through micro-channels, Int Commun Heat Mass
Transfer, 32, pp. 501-510
[30] Chen C.S., Chen S.C., Liaw W.L., Chien R.D., 2008, Rheological behavior of POM
polymer melt flowing through micro-channels, European Polymer Journal, 44, pp.
1891–1898
[31] Tolinski M., 2005, Macro challenges in micromolding, Plast Eng, 61,9, pp. 14-16

[32] Berton M., Lucchetta G., 2010, Optimization of the rheological properties of a PA66-LCP
blend for micro injection moulding, Proc. of the 7th International Conference on
Multi Material Micro Manufacture, 4M 2010
[33] Giboz J., Copponnex T., Mele P., 2009, Microinjection molding of thermoplastic
polymers: morphological comparison with conventional injection molding, J
Micromech Microeng, 19, 025023 (12pp)
[34] Giboz J., Spoelstra A.B., Meijer H.E.H., Copponnex T., Mélé P., 2010, Observation of
specific polymer morphologies in a microinjection moulded part, Proc. of the 7th
International Conference on Multi-Material Micro Manufacture - 4M 2010
[35] Zhen Lu, Zhang K.F., 2009, Morphology and mechanical properties of polypropylene
micro-arrays by micro-injection molding, Int J Adv Manuf Technol, 40, pp. 490–496
[36] Knight W.A., Sodhi M., 2000, Design for bulk recycling: analysis of materials separation,
Annals of the CIRP, 49, 1, pp. 83-86
[37] Lucchetta G., Bariani P.F., Knight W.A., 2006, A new approach to the optimization of
blends composition in injection moulding of recycled polymers, Annals of the
CIRP, Manufacturing technology, 55, 1, pp. 465-468
[38] Schmidt D., Shah D., Giannelis E.P., 2002, New advances in polymer/ layered silicate
nanocomposites, Current Opinion in Solid State and Materials Science, 3, pp. 205–
212
[39] Huang C.K., 2006, Filling and wear behaviors of micro-molded parts made with
nanomaterials, European Polymer Journal, 42, pp. 2174–2184
[40] Hanemann T., Haußelt J., Ritzhaupt-Kleiss E., 2009, Compounding, micro injection
moulding and characterization of polycarbonate-nanosized alumina-composites for
application in microoptics, Microsyst Technol, 15, pp. 421–427
[41] Peppas N.A., 2004, Devices based on intelligent biopolymers for oral protein delivery,
International Journal of Pharmaceutics, 277, pp. 11-17
[42] Coates P.D., Martin M.T., Gough T.D., Spares. R. , Whiteside B.R., 2010, Process
structuring of polymers and polymer nanocomposites in micromoulding, Proc. of
the 7th International Conference on Multi Material Micro Manufacture, 4M 2010
[43] Monkonnen K., Hietala J., Paakkn P., Paakkn E., Kaikuta T., Pakkn T., 2002, Replication
of sub-micron features using amorphous thermoplastics, Polym Eng Sci, 42, pp.
1600
[44] Sha B., Dimov S., Griffiths C., Packianather M.S., 2007, Investigation of micro-injection
moulding: Factors affecting the replication quality, Journal of Materials Processing
Technology, 183, pp. 284-296
[45] Sha B., Dimov S., Griffiths C., Packianather M.S., 2007, Microinjection moulding: factors
affecting the achievable aspect ratios, Int J Adv Manuf Technol, 33, pp. 147–156
[46] Attia U., Alcock M., Jeffrey R., 2010, Optimising process conditions for multiple quality
criteria in micro-injection moulding, Int J Adv Manuf Tech 50, pp. 533
[47] Griffiths C.A., Dimov S.S., Brousseau E.B, Hoyle R.T., 2007, The effects of tool surface
quality in micro-injection moulding, Journal of Materials Processing Technology,
189, pp. 418-427

[48] Lee B.K., Hwang C.J., Kim D.S., Kwon T.H., 2008, Replication quality of flow-through
microfilters in microfluidic lab-on-a-chip for blood typing by microinjection
molding, J Manuf Sci, E-T ASME 130:0210101–0210108
[49] Tosello G., Gava A., Hansen H.N., Lucchetta G., 2007, Influence of process parameters
on the weld lines of a micro injection molded component, ANTEC: Proc. Annual
Technical Conf., Cincinnati,OH, 6–11 May 2007, pp. 2002–2006
[50] Griffiths C.A., Dimov S., Brousseau E.B., Chouquet C., Gavillet J., Bigot S., 2008, Micro-
injection moulding: surface treatment effects on part demoulding, Proc. of 4M 2008,
Cardiff, UK, 9–11 September 2008
[51] Griffiths C.A., Dimov S.S., Scholz S., Hirshy H., Tosello G., Hansen H.N., Williams E.,
Cavity pressure behaviour in micro injection moulding, Proc. of the 7th
International Conference on Multi-Material Micro Manufacture - 4M 2010
[52] Michaeli W., Schreiber A., 2009, Online control of the injection molding process based
on process variables, Advances in Polymer Technology, 28, 2, pp. 65–76
[53] Shen Y.K., Yeh S.L., Chen S.H., 2002, Three-dimensional non-Newtonian computations
of micro-injection molding with the finite element method, Int Commun Heat
Mass, 29, pp. 643–652
[54] Huang M.S., Li J.C., Huang Y.M., Hsieh L.C., 2009, Robust parameter design of micro-
injection molded gears using a LIGA-like fabricated mold insert, J. Mater Process
Technol, 209, pp. 5690-5701
[55] Song M.C., Liu Z., Wang M.J., Yu T.M., Zhao D.Y., 2007, Research on effects of injection
process parameters on the molding process for ultra-thin wall plastic parts, Journal
of Materials Processing Technology, 187–188, pp. 668–671
[56] Gornik C., 2004, Injection moulding of parts with microstructured surfaces for medical
applications, Macromol Symp, 217, pp. 365
[57] Nagahanumaiah, Ravi B., 2009, Effects of injection molding parameters on shrinkage
and weight of plastic part produced by DMLS mold, Rapid Prototyping J, 15, pp.
179
[58] Zhao J., Mayes R.H., Chen G., Xie H., and Poh Sing Chan, Effects of Process Parameters
on the Micro Molding Process, Polymer Engineering and Science, September 2003,
Vol. 43, No. 9
[59] Erzurumlu T., Ozcelik B., 2006, Minimization of warpage and sink index in injection-
molded thermoplastic parts using Taguchi optimization method, Mater Design, 27,
pp. 853
[60] Ilinca F., Hétu J.F., Derdouri A., 2004, Numerical simulation of the filling stage in the
micro-injection molding process, Proc. of the Annual Technical Conference
ANTEC 2004, 16-20 May 2004, Chicago, IL
[61] Hill S., Kämper K., Dasbach U., Döpper J., Ehrfeld W., Kaupert M., 1995, An
investigation of computer modelling for micro-injection moulding, Proc. of
Microsym '95, September 1995
[62] Yu L., Lee L., Koelling K., 2004, Flow and heat transfer simulation of injection molding
with microstructures, Polym Eng Sci, 44, 10, pp. 1866-1876
[63] Lin H.Y., Young W.B., 2009, Analysis of the filling capability to the microstructures in
micro-injection molding, Applied Mathematical Modeling, 33, pp. 3746–3755

[64] Sussman M., Smereka P., Osher S., 1994, A level set approach for computing solution to
incompressible two-phase flow, J. Comput. Phys., 114, pp. 146-159
[65] Sussman M., Almgren A. S., Bell J.B., Colella P., Howell L.H., Welcome M.L., 1999, An
adaptive level set approach for incompressible two-phase flows, J Comput Phys,
148, pp. 81-124
[66] Kirkland C., 2003, A first in micromold flow analysis, Injection Molding Mag, May 2003
[67] Stange T., 2002, Development and production of microfluidic chips made of polymers,
Am Biotechnol Lab, 20, 8, pp. 8-10
[68] Chen S., Chang J., Chang Y., Chau S., 2005, Micro injection molding of micro fluidic
platform, Proc. of the Annual Technical Conference -ANTEC 2005, 1-5 May 2005
[69] Piotter V., Finnah G., Oerlygsson G., Ruprecht R., Haußelt J., 2005, Special variants and
simulation of micro injection moulding, Injection Moulding 2005: Collected Papers
of the 5th International Conference, Copenhagen, Denmark, March 1-2, 2005
[70] Kemmann O., Weber L., Jeggy C., Magotte O., 2000, Simulation of the micro injection
molding process, Proc. of the Annual Technical Conference - ANTEC 2000, pp. 576-
580
[71] Choi S.J., Kim S.K., 2011, Multi-scale filling simulation of micro-injection molding
process, Journal of Mechanical Science and Technology, 25, 1, pp. 117-124
[72] Modica F., Marrocco V., Trotta G., Fassi I., Micro electro discharge milling of freeform
micro-features with high aspect ratio, Proc. of the 5th International Conference on
Micro- and Nanosystems IDETC/MNS 2011, August 28-31, 2011, Washington, DC,
USA
[73] Trotta G., Surace R., Modica F., Spina R., Fassi I., 2010, Micro injection moulding of
polymeric component, Proc. of the International Conference AMPT 2010 -
Advances in Materials and Processing Technology, Paris, France, Oct. 24-27 2010,
pp. 378
Recent Advances in Multi-Dimensional
Packing Problems
Teodor Gabriel Crainic(1,2), Guido Perboli(1,3) and Roberto Tadei(3)
(1) CIRRELT, Canada
(2) UQAM, Canada
(3) DAUIN, Politecnico di Torino, Italy
1. Introduction
Packing problems have been much studied in the past decades due, in particular, to
their wide range of applications in many settings of theoretical and practical interest,
including packing/loading, scheduling, and routing. We focus on Multi-Dimensional Packing
problems, which present specific methodological challenges while also being of particular
interest to transportation and modern supply chains, due to the need to consolidate and
optimize flows of freight and vehicles. The rich literature presents a plethora of problem
variants, models, and solution methods. Yet, a general overview and synthesis of the
field is missing, as we lack a general methodological framework able to efficiently address
different problem variants, i.e., obtain good-quality solutions with limited computational
efforts. Addressing these issues is the main goal of this chapter.
All Multi-Dimensional Packing problems display an identical structure, defining two sets of
elements in one or more (usually two or three) geometric dimensions: 1) a set of large items,
often called containers, bins, or knapsacks and 2) a set of small items, usually referred to simply
as items. The goal is to select all or some of the items, group them into one or more subsets, and
assign each of the resulting subsets to one of the bins, such that the geometric conditions hold,
i.e., the group of items of each subset fits completely within the corresponding bin with no
overlapping. Problem variants differ by the particular definition of their packing constraints
(presence of guillotine cuts, balancing and stability of the packing, possible overlapping of
certain items, forbidden rotations of the items, etc.) and objective function, going by the
well-known names of Knapsack, Bin Packing, Strip Packing, Variable Sized Bin Packing,
Container Packing, to name just a few (see Wäscher et al., 2007 for a tentative taxonomy of
multi-dimensional packing problems). We focus in this chapter on orthogonal packings, i.e.,
items and bins are rectangular in two-dimensions (2D) and boxes in three-dimensions (3D),
and items must be placed into bins with their sides parallel to the sides of the respective bin.
The aim of this chapter is twofold. We first survey the different approaches used to represent
the packing of items into bins. The issue is common to all problem variants, the packing
representation playing a central role in the efficiency of the solution methods. Second, we
discuss the different solution approaches proposed for the main Multi-Dimensional Packing
problems, with respect to their performance, limits, and implementation issues within realistic
applications.
The chapter is organized as follows. We address packing issues in Section 2, introducing
the main mathematical models and the rules to define where to place an item into a
bin (usually already holding some items), and discussing efficiency, effectiveness, and
implementation aspects. Section 3 is dedicated to Multi-Dimensional Packing problems. We
describe bounds, exact methods, and meta-heuristics, discussing their effectiveness compared
to their computational effort. A more general discussion of the flexibility, simplicity of
implementation, and public availability of the codes of these methods is the topic of Section 4.
2. Multi-Dimensional Packing: models and packing rules
One of the main issues in addressing Multi-Dimensional Packing problems is defining the
position where to place items inside the bin (Crainic et al., 2008; Lodi et al., 2002; Perboli, 2002).
Indeed, the performance of exact and heuristic solution methods targeting these problems is
very sensitive to the item-positioning rule in terms of computational efficiency and solution
quality (Crainic et al., 2008). While the issue is not relevant for mono-dimensional packing
problems, it is harder to address in the 3D case than in the 2D one.
The packing rules are an issue common to all the problems addressed in this chapter.
Moreover, different methods share the same packing rule whilst, on the other hand, the same
rule is applied to different Multi-Dimensional Packing problem settings. We therefore survey
the main rules presented in the literature, focusing on those that can be used in both 2D and
3D cases. The first subsection addresses the mathematical models defining a packing, while
the second one the most efficient rules for placing items into an existing bin.
2.1 Models for Multi-Dimensional Packing
A first attempt to model packings is due to Gilmore & Gomory (1965). They proposed a
representation given by the enumeration of all the patterns, i.e., all subsets of items that
could be accommodated into a bin, given the problem constraints. The huge number of
patterns that can be defined from a given set of items makes this approach appropriate for
column-generation approaches only (Baldacci & Boschetti, 2007).
Beasley (1985b) considered a formulation for 2D packing based on the discretization of the
bin surface into p × q rectangles, the bottom-left corner of each item being then placed
on the bottom-left corner of a rectangle. A similar representation was introduced by
Hadjiconstantinou & Christofides (1995), except that instead of explicitly partitioning the bin
into rectangles, they limited the set of coordinates each item could assume to p and q values. In
both cases, the number of variables grows with the accuracy of the discretization. Therefore,
both representations are principally used to compute bounds through Lagrangian relaxation
and subgradient optimization.
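For reference, a position-based formulation in the spirit of these discretizations can be written as follows (a generic sketch, not the exact notation of the cited papers): let x_{ipq} = 1 if item i is placed with its bottom-left corner on grid point (p, q), let v_i be its value, and let a_{ipq}^{rs} = 1 if that placement covers grid cell (r, s). In LaTeX notation:

    \max \sum_{i} \sum_{(p,q)} v_i \, x_{ipq}
    \quad \text{s.t.} \quad
    \sum_{i} \sum_{(p,q)} a_{ipq}^{rs} \, x_{ipq} \le 1 \;\; \forall (r,s), \qquad
    \sum_{(p,q)} x_{ipq} \le 1 \;\; \forall i, \qquad
    x_{ipq} \in \{0,1\}

The number of x variables grows with the grid resolution, which is why, as noted above, such formulations are mainly used to compute bounds.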
Egeblad & Pisinger (2009) provided a model for the Three-Dimensional Knapsack Problem.
The model represents the packing by specifying the overlapping of items using binary
variables. The model is able to deal with rotations and additional constraints, e.g., fixing
the position of an item. A similar mathematical representation of item overlapping was
introduced by Baldi et al. (2011) to model the variant where balancing constraints must be
considered, i.e., the items are characterized by mass and center of mass, and the center of
mass of the overall packing must lie inside a given domain.
2.2 Packing rules for Multi-Dimensional Packing
All Multi-Dimensional Packing models present, at different degrees, two main challenges: the
large number of variables and the high level of solution degeneracy. In fact, these models
can be used only for small-sized instances (20 items in single-bin 3D problems). Rules were
therefore introduced to define where to place additional items into a bin already holding some.
All these rules must deal with two different issues: reducing the computational effort and the
complexity of the data structures needed to apply the rule, and allowing additional packing
constraints, such as fixed item positions and guillotine cuts, to be introduced.
An approach often used for 2D-packing building consists in combining procedures designed
for mono-dimensional problems, namely shelf (or layer) methods (Berkey & Wang, 1987;
Bortfeldt & Winter, 2009; Chung et al., 1982). The items are first sorted and packed into
“shelves" with sizes equal to the width of the bin. The problem then reduces to solving a
mono-dimensional packing instance. Indeed, a 2D packing can be obtained by placing the
shelves into the bin according to the solution of a mono-dimensional packing problem, where
the size of the items equals the depth of the shelves and the size of the mono-dimensional
bin equals the depth D of the two-dimensional one. The same approach can also be used to
build 3D packings: first build two-dimensional shelves using any 2D algorithm and then
arrange them into a three-dimensional bin by solving a mono-dimensional packing problem,
where the size of the items equals the height of the shelves and the size of the bin equals the
height H. When the 2D shelves are also built according to the shelf approach, the method
is known as wall-building (George & Robinson, 1980; Pisinger, 2002). The drawback of the
shelf approach is that it introduces guillotine cuts on the depth and height of the two and
three-dimensional bins, respectively, leading to their underutilization. Figure 1 illustrates 2D
and 3D packings obtained by means of the shelf approach.
Fig. 1. Shelf Packings in 2D and 3D
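A minimal Python sketch of the shelf principle just described (an illustration only, not the exact procedure of any of the cited papers): rectangles are grouped into shelves of the bin width, and the shelves are then packed along the bin depth with a simple one-dimensional first-fit rule.

    def shelf_pack_2d(items, W, D):
        # items: list of (w, d) rectangles; bins of width W and depth D.
        # Returns a list of bins, each bin being a list of shelves (lists of items).
        items = sorted(items, key=lambda wd: wd[1], reverse=True)   # deepest items first
        shelves, current, used_w = [], [], 0
        for (w, d) in items:                       # 1) fill shelves of width W
            if used_w + w > W:
                shelves.append(current)            # open a new shelf
                current, used_w = [], 0
            current.append((w, d))
            used_w += w
        if current:
            shelves.append(current)
        bins = []                                  # 2) 1D first-fit on shelf depths
        for shelf in shelves:
            depth = max(d for (_, d) in shelf)     # shelf depth = deepest item on it
            for b in bins:
                if b[0] >= depth:
                    b[0] -= depth
                    b[1].append(shelf)
                    break
            else:
                bins.append([D - depth, [shelf]])
        return [b[1] for b in bins]

    # Example: five rectangles packed into 10 x 10 bins
    print(len(shelf_pack_2d([(4, 6), (5, 5), (7, 3), (3, 3), (6, 2)], W=10, D=10)))

Each shelf implicitly imposes a guillotine cut at its depth, which is exactly the source of the underutilization mentioned above.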
A graph-theoretical approach for the characterization of Multi-Dimensional Packings was
proposed by Fekete & Schepers (1997; 2004a). The authors considered the relative positions
of the items in a feasible packing and defined a graph describing the item “overlapping"
according to the projection of the items on each orthogonal axis. In this way, the authors
were able to deal with classes of packings sharing a certain combinatorial structure, instead
of having to consider one packing at a time. The packing classes are represented by a series
of graphs, one for each axis. The graphs are proven to be interval graphs, i.e., a special and
well-studied class of graphs for which elegant and extremely efficient algorithms have been
developed. More formally, let G_d(V, E_d) be the graph associated with the d-th axis. Each vertex
of G_d(V, E_d) is associated with an item i in the bin, and a non-oriented edge (i, j) between two
items i and j exists if and only if their projections on axis d overlap (see Figure 2). The authors
proved necessary conditions on the interval graphs to define a feasible packing. Combined
with good heuristics for dismissing infeasible subsets of items, this characterization was used
to develop a two-level tree search. According to computational results, mainly limited to
2D problems, this strategy outperformed previous methods. The method cannot handle
additional constraints on the packing, however, such as fixing the position of one or more
items. No direct comparison with the Branch & Bound of Martello et al. (2007) was performed.
The link between guillotine cuts and interval graphs was analyzed by Perboli (2002). Recently,
Joncour et al. (2010) introduced an efficient algorithm to manage the interval-graph structure
by means of MPQ-trees, combinatorial structures introduced in Korte & Möhring (1989).
Fig. 2. Packings and Associated Interval Graphs
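The projection graphs described above can be built directly from a placed packing. The following Python sketch (an illustration of the representation, not the authors' code) returns, for each axis, the edge set E_d containing a pair (i, j) whenever the projections of items i and j on that axis overlap.

    def interval_graphs(items):
        # items: dict id -> ((x, y, z), (w, l, h)) of placed boxes.
        # Returns one edge set per axis d = 0 (x), 1 (y), 2 (z).
        edges = [set(), set(), set()]
        ids = sorted(items)
        for a, i in enumerate(ids):
            for j in ids[a + 1:]:
                (pi, si), (pj, sj) = items[i], items[j]
                for d in range(3):
                    # do the half-open intervals [p, p + s) overlap on axis d?
                    if pi[d] < pj[d] + sj[d] and pj[d] < pi[d] + si[d]:
                        edges[d].add((i, j))
        return edges

    # Two boxes stacked along z: their projections overlap on x and y but not on z
    packing = {1: ((0, 0, 0), (4, 4, 2)), 2: ((0, 0, 2), (4, 4, 2))}
    print(interval_graphs(packing))

Geometrically, in a feasible packing no pair of items can appear in all edge sets at once, since two non-overlapping boxes must be separated along at least one axis.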
A similar approach to the one by Fekete & Schepers has been used by Imahori et al. (2003) to
give a general representation for packing problems where the costs in the objective function
depend on the location of the items. Instead of working on interval graphs, however, the
authors directly deal with the sequencing of the items, i.e., which item is to be put before
another into the packing. The decoding algorithm is managed by a dynamic programming
method with pseudo-polynomial complexity and is used to derive Multi-Start and Iterated
Local Search heuristics. As the representation addresses 2D packings, the computational
experiments were limited to the Minimum Area Packing Problem, a variant of the 2D
Container Loading problem where the minimal boxed envelope of the final packing must
be considered (Murata et al., 1996).
Martello et al. (2000) defined Corner Points as the non-dominated locations where an item
can be placed within an existing packing. In two dimensions, Corner Points are defined
where the envelope of the items in the bin changes from vertical to horizontal (the large dots
in Figure 3b). Corner Points on the three-dimensional envelope can be found applying the
two-dimensional algorithm for each distinct value of the height of the bin defined by the
lower and upper terminal lines of each item (large dots in Figure 3a). A Corner Point set
can be computed in O(n^2). Martello et al. (2000) used this idea to design a Branch & Bound
algorithm to verify whether a given set of items can be packed into a bin or not. den Boef
et al. (2005) showed that the algorithm to compute the Corner Points presented in Martello
et al. (2000) may miss some feasible packings. Martello et al. (2007) addressed this issue by
providing a new version of the procedure to compute the Corner Points, as well as an updated
version of the related Branch & Bound algorithm.
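A simplified, grid-based Python illustration of the 2D Corner Point idea (the procedures in the cited papers work in O(n^2) without discretizing the bin): the packing envelope at each abscissa is the highest item top among the items extending to its right, and corner points are read off where this staircase steps down, i.e., where it changes from vertical to horizontal.

    def corner_points_2d(items, W):
        # items: placed rectangles (x, y, w, h) with integer coordinates; W: bin width.
        env = [0] * (W + 1)
        for (x, y, w, h) in items:
            for c in range(0, x + w):          # the item extends to the right of abscissa c
                env[c] = max(env[c], y + h)
        points, prev = [], float("inf")
        for x in range(W + 1):
            if env[x] < prev:                  # envelope turns from vertical to horizontal
                points.append((x, env[x]))
            prev = env[x]
        return points

    # A tall item at the origin and a shorter one to its right, in a bin of width 10
    print(corner_points_2d([(0, 0, 4, 5), (4, 0, 3, 3)], W=10))   # [(0, 5), (4, 3), (7, 0)]

(The sketch ignores the bin height; candidate positions leaving no vertical room would be discarded in a real implementation.)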
Fig. 3. Corner Points in 3D and 2D Packings
Fig. 4. Extreme Points in 3D and 2D Packings
Crainic et al. (2008) defined the Extreme Points, an extension of Corner Points providing a
better exploitation of the bin volume by identifying additional points where an item can be
added to an existing packing, as illustrated in Figure 4. In particular, while one cannot use
Corner Points to add an item within the space left inside an existing packing, e.g., the dark
gray regions in Figure 3b, Extreme Points provide this capability. Thus, for example, item 11
can be accommodated within the dark gray region on top of item 7 in Figure 4b, which is
not possible with the Corner Points of Figure 3b. The Extreme Point idea was used by the
authors to design new constructive heuristics based on the First Fit Decreasing and the Best
Fit Decreasing heuristics for the mono-dimensional problem. Computational results showed
that the proposed method outperformed all the other constructive heuristics for both 2D and
3D Bin Packing problems, and that it obtains, in negligible time, results comparable to those
of the best existing meta-heuristics.
3. Multi-Dimensional Packing problems
We now turn to the main packing classes, 2D and 3D Bin Packing, 2D and 3D Knapsack, and
3D Container Loading. For each problem, we give its definition, the classification according
to Wäscher et al. (2007), and a brief description of the state-of-the-art solution methods.
Finally, a comparison of the computational results obtained with the state-of-the-art methods
is presented. For this survey, we focus on meta-heuristic methods, as these methods are the
most efficient way to solve real-sized instances while preserving sufficient accuracy.
The results of all the methods are taken from the literature. For the computational times, due
to the variability of the different processors used in the computational tests, we adopted a
Unified Computational Time (UCT) obtained considering a Pentium4 3000 MHz workstation
as the reference machine and scaling the computational times according to the SPEC CPU2006
benchmarks published in SPEC (2006). Notice that, due to the limited amount of memory used
by all the methods, this parameter does not affect the overall results.
3.1 Multi-dimensional bin packing problems
Given a set of box items i ∈ I, with sizes w_i, l_i, and h_i, and an unlimited number of bins
of fixed sizes W, L, and H, the Three-Dimensional Orthogonal Bin Packing problem (3D-BP)
consists in orthogonally packing the items into the minimum number of bins. We assume
that the items cannot be rotated. According to the classification introduced by Wäscher et al.
(2007), the problem is also known as the Three-Dimensional Single Bin-Size Bin Packing Problem
(3D-SBSBPP). The Two-Dimensional Orthogonal Bin Packing problem (2D-BP) is the restriction
of 3D-BP in two dimensions.
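As a point of reference for the lower bounds (LB) appearing in the tables below, the weakest standard bound for bin packing is the continuous one: the total item volume divided by the bin volume, rounded up. The bounds reported in the literature, and used in the comparisons later in this section, are considerably stronger; the Python sketch below only fixes the idea.

    import math

    def continuous_lower_bound(items, W, L, H):
        # items: list of (w, l, h); any feasible packing needs at least this many bins
        total_volume = sum(w * l * h for (w, l, h) in items)
        return math.ceil(total_volume / (W * L * H))

    print(continuous_lower_bound([(60, 50, 40), (30, 30, 30), (70, 20, 90)], 100, 100, 100))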
TSPACK is the Tabu Search algorithm for the 2D-BP developed by Lodi et al. (1999). This
algorithm uses two simple constructive heuristics to pack items into bins. The Tabu Search
only controls the movement of items between bins. Two neighborhoods are considered to
try to relocate an item from the weakest bin (i.e., the bin that appears to be the easiest to
empty) to another. Since the constructive heuristics produce guillotine packings, so does the
overall algorithm. The algorithm is presently the best meta-heuristic for 2D-BP, but it requires
a computational effort of the order of 60 CPU seconds per instance to achieve these results.
The same authors presented a shelf-based heuristic for the 2D-BP, called Height first - Area
second (HA) (Lodi et al., 2004a). The algorithm chooses the best of two solutions. To
obtain the first, items are partitioned into clusters according to their height and a series of
layers are obtained from each cluster. The layers are then packed into bins by using the
Branch-and-Bound approach by Martello & Toth (1990) for the 1D-BP problem. The second
solution is obtained by ordering the items by non-increasing area of their base and new layers
are built. As previously, the layers are packed into bins by solving a 1D-BP problem. The
method is faster but less accurate than TSPACK.
The first exact method for the 3D-BP was a two-level Branch-and-Bound algorithm proposed
by Martello et al. (2000). The first level assigns items to bins. At each node of the first-level
tree, a second level Branch-and-Bound is used to verify whether the items assigned to each
bin can be packed into it. In the same paper, the authors introduced two constructive
heuristics. The first, called S-Pack, is based on a layer-building principle derived from the
shelf approach. The second, called MPV-BS, repeatedly fills one bin after the other by means
of the Branch-and-Bound algorithm for the single bin presented by the authors in the same
paper. The authors also gave the results of their method by limiting its computational effort
to 1000 CPU seconds.
Faroe et al. (2003) presented a Guided Local Search (GLS) algorithm for the 3D-BP. Starting
with an upper bound on the number of bins obtained by a greedy heuristic, the algorithm
iteratively decreases the number of bins, each time using GLS to search for a feasible packing.
The process terminates when a given time limit has been reached or the upper bound
matches a precomputed lower bound. Computational experiments were reported for 2 and
3-dimensional instances with up to 200 items. The results were satisfactory, but required a
computational effort of the order of 1000 CPU seconds to be reached.
Crainic et al. (2008) defined the Extreme Points (EPs) and combined them with the well-known
Best Fit Decreasing (BFD) heuristic for the 1D-BP, producing the EP-BPH heuristic.
Extending the EP-BPH to address the 3D-BP proved far from trivial, however, as the
ordering of items in higher dimensions may be affected by more than one attribute (e.g.,
volume, side area, width, length, and height of the items). Several sorting rules were tested
and the best ones were combined into C-EPBFD, a composite heuristic based on EP-BPH.
Extensive experimental results showed C-EPBFD requiring negligible computational efforts
and outperforming both constructive heuristics for the 3D-BP and more complex methods,
e.g., the truncated Branch-and-Bound by Martello et al. (2000).
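For reference, the classical one-dimensional Best Fit Decreasing rule that these EP-based heuristics generalize can be sketched in a few lines of Python; in the multi-dimensional extension, the residual-capacity test is replaced by feasibility checks at the Extreme Points of each bin.

    def best_fit_decreasing(sizes, capacity):
        # Classical 1D BFD: place each item, in decreasing size order,
        # into the feasible bin with the smallest residual capacity.
        bins = []                                   # residual capacities of open bins
        for s in sorted(sizes, reverse=True):
            best = min((i for i, r in enumerate(bins) if r >= s),
                       key=lambda i: bins[i], default=None)
            if best is None:
                bins.append(capacity - s)           # open a new bin
            else:
                bins[best] -= s
        return len(bins)

    print(best_fit_decreasing([7, 5, 4, 3, 2, 2, 1], capacity=10))   # -> 3 bins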
Crainic et al. (2009) proposed TS²PACK, a two-level Tabu Search meta-heuristic for the 3D-BP.
The first level is a Tabu Search method that changes the assignment of items to bins. For each
assignment, the items assigned to a bin are packed by means of the second-level Tabu Search,
which makes use of the Interval Graph representation of the packing by Fekete & Schepers
(2004a) to reduce the search space. The accuracy of the overall meta-heuristic is enhanced by
the k-chain-move procedure, which increases the size of the neighborhoods without increasing
the overall complexity of the algorithm. TS²PACK currently obtains the best solutions for
the 3D-BP. Nevertheless, the method has a rather slow convergence rate, requiring 300 CPU
seconds to find the best solution.
Finally, Perboli et al. (2011) introduced GASP - Greedy Adaptive Search Procedure, a
meta-heuristic able to efficiently address two and three-dimensional multiple bin packing
problems. GASP combines the simplicity of greedy algorithms with learning mechanisms
aimed to guide the overall method towards good solutions. Extensive computational results
showed that GASP is able to obtain state-of-the-art results for both 2D-BP and 3D-BP in
negligible computational times.
We compare the different methods on standard benchmark instances, the results being taken
from the literature.
We consider ten classes of instances from Berkey & Wang (1987) (Classes I-VI) and Martello
& Vigo (1998) (Classes VII-X) for 2D-BP. Each class is characterized by different distributions
of the item sizes and considers a number of items equal to 20, 40, 60, 80, and 100. For each
combination of class and instance size, 10 repetitions are considered.
We consider the instances of Martello et al. (2000) for the 3D-BP. The instances are organized
in six classes.
The bin size is W = H = D = 100 for Classes I to III, the items belonging to five types, ranging
from small to large-sized. These classes mix the item types in order to test different usage
scenarios. Bin and item dimensions in Classes IV to VI vary according to the following rules:
• Class IV: w_i, l_i, h_i ∼ U[1,10] and W = L = H = 10;
• Class V: w_i, l_i, h_i ∼ U[1,35] and W = L = H = 40;
• Class VI: w_i, l_i, h_i ∼ U[1,100] and W = L = H = 100.
The number of items is fixed to 50, 100, 150, and 200 items for each class, and 10 instances are
considered for each combination of class and cardinality of the item set.
The solution methods used for each problem variant and their experimental settings are:
• 2D-BP
– TSPACK: coded in C and run on a Silicon Graphics INDY R10000sc (195 MHz) with a
time limit of 60 CPU seconds for each instance (Lodi et al., 1999);
– GASP: coded in C++, runs were performed on a Pentium4 3 GHz workstation. The time
limit was set to 3 seconds (Perboli et al., 2011).
• 3D-BP
– GLS: coded in C and run on a Digital workstation with a 500 MHz CPU. A time limit
of 1000 CPU seconds was imposed for each instance (Faroe et al., 2003);
– MPV: this is the truncated Branch and Bound proposed in Martello et al. (2000). It was
coded in C and run on a Pentium4 with 3 GHz CPU with a time limit of 1000 CPU
seconds per instance;
– TS²PACK: coded in C++ and run on a Pentium4 with 2 GHz CPU with a time limit of
300 CPU seconds per instance;
– GASP: coded in C++, runs were performed on a Pentium4 3 GHz workstation. The time
limit was set to 5 seconds.
The results for 2D-BP are summarized in Table 1. The instance type is given in the first column,
while Columns 2, 3, and 4 present the results of GASP, TSPACK, and the best known solution
taken from the literature (the optimal value in most cases), respectively. Notice that the best
known solutions have been generally obtained by means of different exact methods and with
a computational effort of several thousands of seconds. Finally, Columns 5 and 6 give the
relative percentage gaps of GASP with respect to TSPACK and the best known solutions (a
negative value means a better performance of GASP). All the time limits reported in the table
are expressed in UCT.
GASP achieves better results than TSPACK, while reducing the computational effort by a
factor of about 4. As the code of TSPACK is publicly available, we also ran it for 30 UCT
seconds, but the results did not change. Moreover, GASP achieves results that are less than 1%
away from the overall optima.
Class GASP TSPACK UB* Gap TSPACK Gap UB*
3 s 12 s
I 100.1 101.5 99.7 -1.40% 0.40%
II 12.9 13 12.4 -0.81% 4.03%
III 70.6 72.3 68.6 -2.48% 2.92%
IV 13 12.6 12.4 3.23% 4.84%
V 90.1 91.3 89.1 -1.35% 1.12%
VI 11.8 11.5 11.2 2.68% 5.36%
VII 83.1 84 82.7 -1.09% 0.48%
VIII 83.6 84.4 83 -0.96% 0.72%
IX 213 213.1 213 -0.05% 0.00%
X 51.4 51.8 50.4 -0.79% 1.98%
Total 729.6 735.5 722.5 -0.82% 0.98%
Table 1. 2D-BP Comparative Results
The results of 3D-BP are summarized in Table 2. Columns 1-3 give the instance type, bin
dimension, and number of items, respectively; Column 4 presents the results of GASP, while
Columns 5-8 give the gaps of the solutions obtained by GASP relative to those of MPV, GLS,
TS²PACK, and the best lower bound available, respectively. The gaps were computed as
(mean_GASP − mean_o)/mean_o, where, for a given set of instances, mean_GASP and mean_o are the
mean values obtained by the GASP heuristic and the method compared to, respectively. A
negative value means that GASP yields a better mean value. The last row displays the total
number of bins used by GASP, computed as the sum of the values in the column, and the
average of the mean gaps. As for the 2D-BP, the time limits displayed are given in UCT.
The results indicate that GASP performs better than the truncated Branch & Bound and has
a gap of only 0.9% with respect to the best algorithm in the literature, with a negligible
computational time: 5 CPU seconds compared to 1000 for GLS and 300 for TS²PACK.
To further illustrate this efficiency, Table 3 displays the performance of GASP w.r.t. those of
GLS and TS²PACK in comparable times (i.e., 60 CPU seconds on a Digital 500 workstation
for GLS, equivalent to 18 UCT seconds, and 18 seconds for TS²PACK). These results are
impressive as GASP actually improves the solutions of both GLS and TS²PACK by up to 0.6%
on average.
Class Bins n GASP MPV GLS TS2PACK LB
5 s 1000 s 300 s 300 s
I 100 50 13.4 -1.47% 0.00% 0.00% 3.88%
100 26.9 -1.47% 0.75% 0.75% 5.08%
150 37 -3.14% 0.00% 0.00% 3.35%
200 51.6 -1.34% 0.78% 0.98% 3.82%
II 100 50 29.4 0.00% 0.00% 0.00% 1.38%
100 59 -0.17% 0.00% 0.17% 0.85%
150 86.8 -0.46% 0.00% 0.00% 0.46%
200 118.8 -0.59% -0.17% 0.00% 0.42%
III 100 50 8.4 -8.70% 1.20% 1.20% 10.53%
100 15.1 -13.71% 0.00% -0.66% 7.86%
150 20.6 -14.17% 1.98% 2.49% 9.57%
200 27.7 -12.89% 1.84% 1.09% 6.54%
IV 10 50 9.9 1.02% 1.02% 1.02% 5.32%
100 19.1 -1.55% 0.00% 0.00% 3.80%
150 29.5 -0.34% 0.34% 1.03% 3.51%
200 38 -0.52% 0.80% 0.80% 3.54%
V 40 50 7.5 -8.54% 1.35% 1.35% 10.29%
100 12.7 -16.99% 3.25% 3.25% 10.43%
150 16.6 -15.74% 5.06% 5.06% 15.28%
200 24.2 -13.88% 2.98% 2.98% 6.61%
VI 100 50 9.3 -7.92% 1.09% 1.09% 6.90%
100 19 -5.94% 0.53% 1.06% 3.26%
150 24.8 -9.16% 3.77% 3.77% 10.22%
200 31.1 -10.89% 4.01% 3.67% 10.28%
Total 736.4 -4.35% 0.85% 0.90% 3.89%
Table 2. 3D-BP Comparative Results
Class Bins n GASP GLS TS2PACK LB
5 s 18 s 18 s
I 100 50 13.4 0.00% 0.00% 3.88%
100 26.9 0.00% -0.37% 5.08%
150 37 -1.33% -1.86% 3.35%
200 51.6 -2.27% -2.64% 3.82%
II 100 50 29.4 0.00% 0.00% 1.38%
100 59 0.00% -0.34% 0.85%
150 86.8 -0.34% -0.57% 0.46%
200 118.8 -0.92% -0.34% 0.42%
III 100 50 8.4 1.20% 1.20% 10.53%
100 15.1 0.00% -1.95% 7.86%
150 20.6 -0.48% -1.44% 9.57%
200 27.7 -0.36% -1.07% 6.54%
IV 10 50 9.9 1.02% 0.00% 5.32%
100 19.1 -1.04% -2.05% 3.80%
150 29.5 0.00% 0.34% 3.51%
200 38 -1.30% -1.81% 3.54%
V 40 50 7.5 1.35% 1.35% 10.29%
100 12.7 3.25% 3.25% 10.43%
150 16.6 5.06% 3.75% 15.28%
200 24.2 -0.82% -2.42% 6.61%
VI 100 50 9.3 1.09% 1.09% 6.90%
100 19 0.53% -1.04% 3.26%
150 24.8 1.22% 0.81% 10.22%
200 31.1 1.63% 0.97% 10.28%
Total 736.4 -0.23% -0.57% 3.89%
Table 3. 3D-BP Comparison of State-of-the-Art Methods in Comparable Computational
Times
3.2 Multi-dimensional knapsack problems
Given a set of rectangular items i = 1, . . . , n with sizes w_i and l_i and a profit p_i, and a bin of
fixed dimensions W and L, the Two-Dimensional Orthogonal Knapsack problem (2D-KP) consists
in orthogonally packing a subset of the items into the bin to maximize the sum of the profits
of the loaded items. Most algorithms present in the literature assume the items cannot be
rotated.
The natural extension to the three-dimensional case is called Three-Dimensional Orthogonal
Knapsack problem (3D-KP). The variant studied in the literature considers explicitly the
rotation of the items.
The restriction of 2D-KP to the case where the item profits are equal to the rectangle areas
is also known in the literature as the Cutting Stock problem. According to the classification
of Wäscher et al. (2007), 2D-KP can be characterized as a Two-Dimensional Single Large
Object Placement Problem (2D-SLOPP) and 3D-KP as Three-Dimensional Single Large Object
Placement Problem (3D-SLOPP).
Integer Programming formulations for the 2D-KP were presented by Beasley (1985a),
Hadjiconstantinou & Christofides (1995), and Boschetti et al. (2002), among others. Fekete
& Schepers (1997; 2004b) addressed 2D and 3D Knapsack problems using an advanced
graph representation within a Branch-and-Bound algorithm, which assigned items to the
bin without specifying their position. Pisinger & Sigurd (2007) proposed a Branch-and-Cut
approach for the 2D-KP, in which a one-dimensional knapsack problem selects the most
profitable items whose overall area does not exceed the area of the bin. A two-dimensional
packing problem in decision form is then solved through constraint programming to check
the feasibility of loading the selected items. Caprara & Monaci (2004) also proposed a
Branch-and-Bound algorithm for the 2D-KP, where items are assigned to the bin without
specifying their positions, the feasibility check being performed afterwards through an enumeration scheme.
Many authors presented heuristic procedures for the 2D-KP. Lai & Chan (1997a;b) developed two meta-heuristics based on Simulated Annealing and evolutionary principles. The former proceeds in three steps: splitting the master surface into sub-areas that can be used for packing, placing the items according to a fitting heuristic, and finally applying a classical search procedure based on moving the items and a cooling scheme. The evolutionary strategy includes hill-climbing and mutation procedures. Both meta-heuristics were tested on randomly generated instances as well as on real-world problems, with the objective of minimizing the wasted material. Leung et al. (2001) proposed a Genetic Search approach and a Simulated Annealing meta-heuristic. The authors hybridized the Genetic Search with a simple on-line bottom-left heuristic that packed the items as far down and as far left as possible. An extensive study of different crossover operators is presented, but no detailed computational results are given. The meta-heuristics proposed by Lai & Chan (1997a) and Leung et al. (2001) cannot produce all the feasible cutting patterns. Liu & Teng (1999) used a different heuristic, denoted the improved BL-algorithm, to overcome this issue: the first item is placed in the bottom left-hand corner of the master surface, and all the other items are then inserted starting from the top right-hand corner of the surface and shifted alternately left and down until no further shifting is possible. Unfortunately, no computational results are given for problem instances drawn from the literature, making a direct comparison with other heuristics impossible.
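To make the shifting idea concrete, here is a rough, unit-step Python sketch of this placement scheme; it is not Liu & Teng's implementation, and the greedy left-then-down shifting with a naive overlap test is a simplification introduced only for illustration.

def overlaps(a, b):
    """Axis-aligned rectangles a, b as (x, y, w, h); True if they overlap."""
    (ax, ay, aw, ah), (bx, by, bw, bh) = a, b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_improved_bl(items, W, H):
    """Place (w, h) items on a W x H surface: each item starts at the top-right
    corner and is shifted alternately left and down while no overlap occurs."""
    placed = []                                  # list of (x, y, w, h)
    for w, h in items:
        x, y = W - w, H - h                      # start at the top-right corner
        if x < 0 or y < 0:
            continue                             # the item does not fit at all
        moved = True
        while moved:
            moved = False
            while x > 0 and not any(overlaps((x - 1, y, w, h), p) for p in placed):
                x, moved = x - 1, True           # shift left while possible
            while y > 0 and not any(overlaps((x, y - 1, w, h), p) for p in placed):
                y, moved = y - 1, True           # then shift down while possible
        if not any(overlaps((x, y, w, h), p) for p in placed):
            placed.append((x, y, w, h))          # skip items whose start is blocked
    return placed

print(place_improved_bl([(4, 3), (2, 2), (3, 1)], W=10, H=6))   # prints the placements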
Beasley (2004) proposed an innovative population-based meta-heuristic for a new nonlinear formulation of the problem. Boolean variables were used to indicate whether an item is cut from the master surface or not, with two other integer variables representing the coordinates of the center of the item cut. This formulation leads to a three-dimensional solution encoding used to create the individuals of the population, which was evolved through crossover and mutation. Infeasible solutions were penalized during the fitness-evaluation step. Computational results were presented for a number of standard problems from the literature, as well as for a number of large randomly generated problems.
These results were improved by Hadjiconstantinou & Iori (2007), who proposed a Genetic Search meta-heuristic hybridized with a greedy procedure, where each item is placed at the point maximizing the so-called touching perimeter, i.e., the fraction of the perimeter of the item being added that touches either the edges of the existing packing or the edges of the bin surface. Computational results showed that this method outperformed the algorithm by Beasley (2004) from both the computational and the solution-quality points of view.
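The following small Python sketch (an illustration, not the authors' code) shows one way the touching-perimeter score of a candidate placement can be evaluated; rectangles are assumed to be axis-aligned tuples (x, y, w, h) and the bin has size W x H.

def contact(a_lo, a_hi, b_lo, b_hi):
    """Length of the overlap of two 1-D segments."""
    return max(0, min(a_hi, b_hi) - max(a_lo, b_lo))

def touching_perimeter(x, y, w, h, placed, W, H):
    """Fraction of the candidate item's perimeter touching the bin borders
    or the sides of already placed rectangles."""
    touch = 0.0
    touch += w * ((y == 0) + (y + h == H))       # bottom / top borders of the bin
    touch += h * ((x == 0) + (x + w == W))       # left / right borders of the bin
    for px, py, pw, ph in placed:
        if y + h == py or py + ph == y:          # horizontal side-to-side contact
            touch += contact(x, x + w, px, px + pw)
        if x + w == px or px + pw == x:          # vertical side-to-side contact
            touch += contact(y, y + h, py, py + ph)
    return touch / (2 * (w + h))

# A 4x2 item in the bottom-left corner of an empty 10x10 bin touches
# 6 of its 12 perimeter units, i.e. 50%.
print(touching_perimeter(0, 0, 4, 2, placed=[], W=10, H=10))   # -> 0.5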
In the same year, Alvarez-Valdes et al. (2007) introduced another meta-heuristic that improved
the results of Beasley (2004). The algorithm is a Tabu Search meta-heuristic implementing
interesting moves able to compact the packing, thus reducing the wasted space. Moreover,
the algorithm is able to deal with additional constraints, e.g., the presence of different types
of items and lower bounds on the number of items to be loaded for each type.
The best results for 2D-KP are due to Bortfeldt & Winter (2009), who proposed a Genetic Algorithm for guillotine packings. The algorithm is also able to deal with item rotations and was tested on a wide series of instances with and without guillotine cuts. Their results improved on both Hadjiconstantinou & Iori (2007) and Alvarez-Valdes et al. (2007), but with a significant computational effort.

The first contribution to 3D Knapsack problems is to be found in Egeblad & Pisinger (2009), where the authors proposed an exact model and heuristics for 2D and 3D Knapsack problems. The model cannot be used to derive both lower and upper bounds, however, whilst the heuristic handles instances with up to 60 items for the 3D case.
Finally, Perboli (2011) extended to knapsack problems the GASP algorithm originally proposed for the Multi-Dimensional Bin Packing problem (Perboli et al., 2011). The algorithm shares the same structure and packing representation by means of EPs with the original method. Compared to the version for the Multi-Dimensional Bin Packing Problem, this variant of the algorithm incorporates a long-term memory mechanism, which adapts the search according to the number of times an item is loaded in a solution considered during the search. The method achieves state-of-the-art results for both 2D-KP and 3D-KP, with or without rotation, within negligible computational times.
We used two sets of instances to test 2D-KP. A set of 38 small-size instances, with fewer than 100 items and known optimal solutions (except gcut13), was built as follows:
• Twelve instances (ngcut01 to ngcut12) from Beasley (1985b), available from the
ORLIB-Library (Beasley, 1990);
• Four instances (hccut03, hccut08, hccut11, hccut12) from Hadjiconstantinou & Christofides (1995);
• Five instances (okp01 to okp05) from Fekete & Schepers (2004a);
• Three instances (cgcut01 to cgcut03) from Christofides & Whitlock (1977);
• Thirteen instances (gcut01 to gcut13) from Beasley (1985a), available from the
ORLIB-Library (Beasley, 1990);
• One instance (wang20) from Wang (1983).
A second set contains large-size instances with up to 4000 items from the sets ngcutfs01, ngcutfs02, and ngcutfs03, randomly generated by Beasley (2004) similarly to the procedure of Fekete & Schepers (2004a). All these instances are available from the ORLIB-Library (Beasley, 1990). The large-size test set uses a bin of size [100, 100] and is composed of instances of Type I, II, and III, according to the criteria used for the random generation of the items. For each of the three types, m = 7 item subtypes and Q = 3 items for each item subtype were considered, and 10 random instances were built for each combination of m and Q. The complete set is thus made up of 630 instances with up to 4000 items. Both sets refer to the 2D-KP problem with fixed orientation, i.e., the items cannot be rotated.
For the 3D-KP, we consider the instances presented at http://www.diku.dk/hjemmesider/ansatte/pisinger/codes.html, together with the corresponding results.
The instances were obtained as follows:
• number of items: n ∈ {20, 40, 60};
• item generation strategy: t ∈ {C, R}, where:
– C stands for clustered: the instance consists of only 20 items, which are duplicated appropriately;
– R stands for random: the instance consists of independently generated items;
• bin size: p ∈ {50, 90}, expressed as a percentage of the total volume of the items;
• item attributes:
– size: s_i = (w_i, d_i, h_i), which must belong to one of the following geometric classes (see Egeblad & Pisinger (2009)):
* Cubes (C): the items are cubic and their sizes are defined as w_i ∈ [1, 100], d_i = w_i, h_i = w_i;
* Diverse (D): the sizes of the items are randomly chosen in the ranges w_i ∈ [1, 50], d_i ∈ [1, 50], h_i ∈ [1, 50];
* Long (L): the sizes of the items are randomly chosen in the ranges w_i ∈ [1, 200/3], d_i ∈ [50, 100], h_i ∈ [1, 200/3];
* Uniform (U): the sizes of the items are randomly chosen in the ranges w_i ∈ [50, 100], d_i ∈ [50, 100], h_i ∈ [50, 100];
– profit: p_i = 200 + w_i · d_i · h_i.
The combination of all these values gives 60 instances for each set. In this case, the items can be rotated.
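A rough Python sketch of this generation scheme is given below; it is not the original generator, only the random strategy and the item attributes are illustrated (the bin-size parameter p and the clustered strategy are omitted), and the bound 200/3 is rounded down to 66 for the example.

import random

RANGES = {                                        # (w, d, h) ranges per geometric class
    "D": ((1, 50), (1, 50), (1, 50)),             # Diverse
    "L": ((1, 66), (50, 100), (1, 66)),           # Long (200/3 rounded down to 66)
    "U": ((50, 100), (50, 100), (50, 100)),       # Uniform
}

def sample_item(cls, rng=random):
    """Sample one item of geometric class C, D, L or U together with its profit."""
    if cls == "C":                                # Cubes: d_i = h_i = w_i
        w = rng.randint(1, 100)
        d = h = w
    else:
        (wl, wu), (dl, du), (hl, hu) = RANGES[cls]
        w, d, h = rng.randint(wl, wu), rng.randint(dl, du), rng.randint(hl, hu)
    return {"size": (w, d, h), "profit": 200 + w * d * h}

items = [sample_item("D") for _ in range(20)]     # e.g. 20 items of class Diverse
print(items[0])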
The different algorithms compared and the test environments are:
• 2D-KP:
– H_B: heuristic by Beasley (2004), coded in FORTRAN and run on a Silicon Graphics O2 workstation with a R10000 225 MHz processor;
– H_HI: heuristic by Hadjiconstantinou & Iori (2007), coded in FORTRAN and run on a Pentium IV 1700 MHz;
– H_AP: heuristic by Alvarez-Valdes et al. (2007), coded in C++ and run on a Pentium III at 800 MHz;
– H_BW: heuristic by Bortfeldt & Winter (2009), coded in C and run on an Intel PC 3 GHz, Dual Core;
– GASP, coded in C++ and run on a Pentium 4 3000 MHz workstation;
• 3D-KP:
– H_EP: heuristic by Egeblad & Pisinger (2009), coded in C++ and run on an AMD Athlon 64 3800+ processor;
– GASP, coded in C++ and run on a Pentium 4 3 GHz workstation.
We report only aggregated results for 2D-KP on the small-size instances of the first set. The best results of H_B present an average gap of 1.24% from the optimal solutions, while H_HI solves all the instances to the optimum, except gcut13 (for which no optimal solution is known in the literature) and gcut02. H_AP finds all the known optima (and the best known value for gcut13), while H_BW fails only on instance wang20. GASP solves to optimality all instances for which the optimum is available and determines the best known value for gcut13, with a mean computational effort of less than 5 seconds.
H_B H_HI H_AP H_BW GASP (best) GASP (average)
Type I 1.64 1.24 0.95 1.03 0.98 1.01
558.00 46.41 10.00 3600.00 4.15 4.82
Type II 1.70 1.32 1.06 1.09 1.06 1.07
668.00 48.18 12.16 3600.00 3.77 3.68
Type III 1.66 1.35 0.94 0.95 0.94 1.01
685.00 63.80 16.61 3600.00 5.39 6.25
Average 1.67 1.30 0.98 1.02 0.99 1.03
637.00 52.79 12.92 3600.00 4.44 4.92
Table 4. 2D-KP Comparative Results (for each instance type, the first row reports the percentage gap from the best known results and the second row the Unified Computational Time in seconds)
A more challenging comparison can be made considering the large-size instances. The results are summarized in Table 4, which displays the mean values of the results obtained by the different heuristics by instance type. Columns 1 to 4 report the results of H_B, H_HI, H_AP, and H_BW, respectively, while the remaining columns display the best and average solution values obtained by GASP over the 10 repetitions. We report the percentage gaps from the best known results and the Unified Computational Times. Notice that, for the computational times of H_BW, the authors provided only the time limit and not the computational times they needed to reach their best results.
According to these results, the best heuristic is still H_AP, with a mean optimality gap of 0.98%. H_B is no longer competitive, while H_BW has competitive results, but with a heavy computational effort. We notice that H_BW is still the best heuristic for the variant of the problem where only guillotine cuts are considered. GASP performs practically as well as H_AP with a much smaller computational effort, actually reducing computing times by up to two orders of magnitude compared to the other methods. GASP also shows high performance stability when the random seeds are varied.
The results of the comparison for 3D-KP are presented in Table 5, aggregated by instance type. We display the mean percentage gaps with respect to the upper bound obtained by solving the mono-dimensional knapsack problem with the items of the 3D-KP instance, with item weights equal to their respective volumes and a knapsack capacity equal to the volume of the 3D knapsack. This bound is known to be quite poor in quality, but it is the current best for the 3D knapsack where items can be rotated. As for 2D-KP, two results are provided for GASP: the best and the mean over 10 runs, respectively. Computational times were fixed to 120 seconds for H_EP and 10 seconds for GASP.
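As an illustration (not the code used in the experiments), the sketch below computes a closely related bound: the continuous (Dantzig) relaxation of that one-dimensional knapsack, which is slightly weaker than the integer knapsack bound described above but still a valid upper bound on the 3D-KP optimum.

def volume_relaxation_bound(items, W, L, H):
    """items: list of (w, l, h, profit). Returns an upper bound on the total profit
    obtainable when packing into a W x L x H container."""
    capacity = W * L * H
    # consider items by non-increasing profit per unit of volume
    order = sorted(items, key=lambda it: it[3] / (it[0] * it[1] * it[2]), reverse=True)
    bound, used = 0.0, 0
    for w, l, h, p in order:
        v = w * l * h
        if used + v <= capacity:
            bound, used = bound + p, used + v
        else:
            bound += p * (capacity - used) / v    # take the last item fractionally
            break
    return bound

print(volume_relaxation_bound([(10, 10, 10, 1200), (20, 20, 20, 8200)], 25, 25, 25))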
These results indicate that GASP performs better than H_EP, both in quality and computational efficiency, even when the mean GASP value over 10 random runs is considered. The small gap between best and average results also shows that GASP is stable with respect to random repetitions.
3.3 Container loading problem
Given a set of box items i = 1, ..., n with sizes w_i, l_i, and h_i and a container of fixed dimensions W, L, and H, the Three-Dimensional Container Loading problem (3D-CLP) consists in orthogonally packing a subset of the items into the container, maximizing the used fraction of
n H_EP GASP (best) GASP (average)
20 19.7 15.2 15.3
40 19.3 18.0 18.3
60 15.7 14.8 15.1
Total 18.2 16 16.2
Table 5. 3D-KP Comparative Results
its volume. Usually, items can be rotated, but some restrictions on the feasible rotations can be
present. According to Wäscher et al. (2007), the 3D-CLP can be classified as Three-Dimensional
Rectangular Single Large Object Placement problem (3D-SLOPP).
The problem arises in important practical cases, particularly in logistics and distribution, where containers, trucks, rail cars, etc. must be loaded with freight. It can be seen as a special case of the Three-Dimensional Knapsack Problem where the profits of the items are their volumes. Yet, due to the large number of items that can be loaded into a container and to the fact that the item profit is linked to the item size, the methods developed for the Multi-Dimensional Knapsack Problem fail in most cases of interest. Consequently, the 3D-CLP has been studied as a separate problem in the literature.
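The reduction itself is immediate, as the minimal sketch below illustrates, even if, as just noted, generic knapsack codes often struggle on the resulting instances.

def clp_to_kp(boxes):
    """boxes: list of (w, l, h) -> 3D-KP items (w, l, h, profit) with profit = volume."""
    return [(w, l, h, w * l * h) for (w, l, h) in boxes]

print(clp_to_kp([(2, 3, 4), (5, 5, 5)]))   # [(2, 3, 4, 24), (5, 5, 5, 125)]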
The first heuristic for the 3D-CLP was proposed by George & Robinson (1980). The authors developed a wall-building procedure, which was later improved by several authors, e.g., Bischoff & Marriot (1990) and, more recently, Moura & Oliveira (2005). A different approach, based on column generation, was proposed by Gehring & Bortfeldt (1997), which provided the starting point of a series of meta-heuristics by the same authors: Tabu Search (Bortfeldt & Gehring, 1998), a hybrid algorithm (Bortfeldt & Gehring, 2001), as well as their best algorithm, a parallel hybrid local search combining Simulated Annealing and Tabu Search (Mack et al., 2004). Parreño et al. (2008) presented a reactive GRASP, which uses a constructive-block heuristic similar, in its usage of the space, to the Residual-Space idea of Crainic et al. (2008). The authors improved their GRASP in Parreño et al. (2010). The method implements the same constructive heuristic presented in Parreño et al. (2008), but introduces five types of neighborhoods, combined in a VNS-based meta-heuristic, which yielded the best results in the literature. Pisinger (2002) also presented a wall-building approach yielding interesting results, but tested on a different set of instances than the other contributions, making a direct comparison difficult. Finally, because 3D-CLP is a special case of 3D-KP, any available code for 3D-KP with rotations can be used as well. Consequently, we also consider the GASP version developed for Multi-Dimensional Knapsack problems presented in Perboli (2011).
The experiments were performed using the standard benchmark instances for the 3D-CLP generated by Bischoff & Ratcliff (1995). The whole set of instances is made up of 14 classes, namely BR1 to BR14, of 100 instances each, but only sets BR1 to BR7 were tested on all the algorithms presented previously. The number of box types increases from 3 in BR1 to 20 in BR7, thus covering a wide range of situations. The number of items of each type decreases from an average of 50.2 items per type in BR1 to 5.60 in BR7. For each item type, the maximum number of items available is known. The total volume of the items is on average 99.5% of the capacity of the container.
We compare the results of RG_PAO, the Reactive GRASP by Parreño et al. (2008), truncated after 5000 and 50000 iterations (indicated as 5k and 50k, respectively), TS_BG, the Tabu Search by Bortfeldt & Gehring (1998), and H_MO, the GRASP approach by Moura & Oliveira (2005).
Instance set TS_BG H_MO RG_PAO 5k RG_PAO 50k VNS_PAO GASP
BR1 93.23 93.26 93.27 93.66 94.93 93.32
BR2 93.27 93.56 93.38 93.90 95.16 93.78
BR3 92.86 93.71 93.39 94.00 94.99 93.65
BR4 92.40 93.30 93.16 93.80 94.71 93.70
BR5 91.61 92.78 92.89 93.49 94.33 92.00
BR6 90.86 92.20 92.62 93.22 94.04 92.05
BR7 89.65 91.20 91.86 92.94 93.53 93.42
Mean volume 91.98 92.86 92.94 93.57 94.53 93.13
Mean time 38 205 8 77 28 10
Mean UCT 9 126 10 99 36 10
Table 6. 3D-CLP Comparative Results
Algorithm TS_BG was run on a Pentium II 400 MHz, using a mean computational effort of 316 seconds, H_MO on a Pentium IV at 2.4 GHz, with an average time of 69 seconds, while RG_PAO was run on a 1.5 GHz Pentium mobile with a mean effort of 7.83 seconds. GASP was coded in C++ and run on a Pentium 4 3000 MHz workstation. For each instance, 10 repetitions were performed changing the seed of the random generator, with a time limit of 10 seconds.
Comparative performance results are reported in Table 6 as average values of the solutions obtained on the 100 instances. Column 1 displays the instance set, while Columns 2 to 7 display the results of TS_BG, H_MO, RG_PAO (truncated at 5k and 50k iterations), VNS_PAO, and GASP, respectively. The last three rows display the mean used container volume (in %), the mean computational time as reported in the literature, and the mean Unified Computational Time (no conversion ratio is available for the computer used by TS_BG). The results indicate that, without any particular adaptation, GASP compares advantageously in solution quality with previous state-of-the-art algorithms for the 3D-CLP, while significantly reducing the computational effort. In particular, GASP is on average more effective than H_MO and RG_PAO, requiring some 10 times less computational effort than RG_PAO, which leads in solution quality by a very narrow margin.
4. General remarks
Many solution methods have been proposed for Multi-Dimensional Packing problems, the two methodological frameworks emerging as the most used being Tabu Search and Genetic Algorithms. The latter usually needs hybridization with specific optimization procedures that manage the peculiarities of each problem setting.

Most methods in the literature generally aim for better solution quality, without much care for how general or flexible the method is. To evaluate this aspect, we adopt the two additional performance criteria stated by Cordeau et al. (2002) for evaluating Vehicle Routing heuristics, namely simplicity and flexibility. The former relates to the ease of understanding and coding an algorithm, while the latter focuses on the possibility of easily introducing additional constraints.

From the simplicity point of view, TSPACK (Lodi et al., 2004b) and GASP (Perboli et al., 2011) stand out due to their simple structures and ease of implementation. Notice, though, that, similarly to many packing meta-heuristics, TSPACK mixes in the neighborhood structure the issues of packing feasibility, which follows from the packing representation, and of its optimality, which relates to the particular problem settings. This greatly reduces the generality of the method. This contrasts with the modular structure of GASP, which allowed the authors to
successfully address different packing problems (Bin Packing, Knapsack, Container Loading, etc.). On the other hand, methods such as TS²PACK (Crainic et al., 2009) present an interesting structure, but the rather complicated packing representation given by interval graphs makes them harder to understand and manage.
Evaluating flexibility, we see methods, e.g., TSPACK for Multi-Dimensional Bin Packing, and H_BW (Bortfeldt & Winter, 2009) and H_EP (Egeblad & Pisinger, 2009) for the Multi-Dimensional Knapsack Problem, which have been successfully tested on different variants of the same problem, showing good flexibility at this level. On the other hand, up to now, GASP is the only method that has been satisfactorily tested on different packing problem classes.
Turning to the packing representation, the most elegant approach is the interval graph representation by Fekete & Schepers (1997). Unfortunately, it is also the least flexible when one has to deal with additional constraints such as items in fixed positions and guillotine cuts (Perboli, 2002). Moreover, its performance strongly depends on the data structures used to update the representation. Corner Points by Martello et al. (2000) are probably the easiest to understand and implement. The Extreme Points by Crainic et al. (2008) offer a better exploitation of the bin volume and represent a good compromise between simplicity and accuracy.

The last point to focus on is the public availability of these methods. For most of them, the code is not public. To the best of our knowledge, the only methods that are publicly available are TSPACK, the Branch & Bound by Martello et al. (2000), and the heuristic for container loading by Pisinger (2002). All these codes can be downloaded from the web sites of the authors.
5. Conclusions
In this chapter we presented a detailed and up-to-date survey of solution methods
for Multi-Dimensional Packing problems. We first focused on the common issue of
packing problems, i.e., the representation of the packing. We then considered the main
Multi-Dimensional Packing problems and discussed the efficiency and accuracy of the
available solution methods.
We identified for each problem setting the methods that perform best. We also observed that
most methods are tailored for one problem setting only. The only method that emerges as a
general framework is GASP, which has been successfully applied to different variants of the
problems presented in this chapter.
6. Acknowledgments
While working on this project, T.G. Crainic was the NSERC Industrial Research Chair
on Logistics Management, ESG UQAM, and Adjunct Professor with the Department of
Computer Science and Operations Research, Université de Montréal, and the Department of
Economics and Business Administration, Molde University College, Norway. Partial funding
was provided by the Natural Sciences and Engineering Council of Canada (NSERC), through
its Industrial Research Chair and Discovery Grants programs.
This project has been partially supported by the Ministero dell’Istruzione, Università e
Ricerca (MIUR) (Italian Ministry of University and Research), under the Progetto di Ricerca
di Interesse Nazionale (PRIN - Research Project of National Interest), 2009: "Models and
algorithms for the Optimization in Logistics".
7. References
Alvarez-Valdes, R., Parreno, F. & Tamarit, J.-M. (2007). A tabu search algorithm for a
two-dimensional non-guillotine cutting problem, European Journal of Operational
Research 183: 1167–1182.
Baldacci, R. & Boschetti, M. A. (2007). A cutting plane approach for the two-dimensional
orthogonal non guillotine cutting stock problem, European Journal of Operational
Research 183: 1136–1149.
Baldi, M., Perboli, G. & Tadei, R. (2011). The three-dimensional knapsack problem with
balancing constraints, Publication CIRRELT-2011-51, Centre Interuniversitaire de
Recherche sur les Réseaux d’Entreprise, la Logistique et le Transport, Université de
Montréal, Montréal, Canada.
Beasley, J. E. (1990). Or-library: distributing test problems by electronic mail, Journal of the
Operational Research Society 41: 1069–1072.
Beasley, J. E. (1985a). Algorithms for two-dimensional unconstrained guillotine cutting,
Journal of the Operational Research Society 36: 297–306.
Beasley, J. E. (1985b). An exact two-dimensional non-guillotine cutting stock tree search
procedure, Operations Research 33: 49–64.
Beasley, J. E. (2004). A population heuristic for constrained two-dimensional non-guillotine
cutting, European Journal of Operational Research 156: 601–627.
Berkey, J. O. & Wang, P. Y. (1987). Two dimensional finite bin packing algorithms, Journal of
the Operational Research Society 38: 423–429.
Bischoff, E. E. & Marriot, M. D. (1990). A comparative evaluation of heuristics for container
loading, European Journal of Operational Research 44: 267–276.
Bischoff, E. E. & Ratcliff, M. S. W. (1995). Issues in the development of approaches to container
loading, Omega 23: 377–390.
Bortfeldt, A. & Gehring, H. (1998). A tabu search algorithm for weakly heterogeneous
container loading problems, OR Spectrum 20: 237–250.
Bortfeldt, A. & Gehring, H. (2001). A hybrid genetic algorithm for the container loading
problem, European Journal of Operational Research 131: 143–161.
Bortfeldt, A. & Winter, T. (2009). A genetic algorithm for the two-dimensional knapsack
problem with rectangular pieces, International Transactions in Operational Research
16: 685–713.
Boschetti, M. A., Hadjiconstantinou, E. & Mingozzi, A. (2002). New upper bounds for
the twodimensional orthogonal cutting stock problem, IMA Journal of Management
Mathematics 13: 95–119.
Caprara, A. & Monaci, M. (2004). On the 2-dimensional knapsack problem, Operations Research
Letters 32: 5–14.
Christofides, N. & Whitlock, C. (1977). An algorithm for two-dimensional cutting problems,
Operations Research 25: 30–44.
Chung, F. K. R., Garey, M. R. & Johnson, D. S. (1982). On packing two-dimensional bins, SIAM
- Journal of Algebraic and Discrete Methods 3(1): 66–76.
Cordeau, J.-F., Gendreau, M., Laporte, G., Potvin, J.-Y. & Semet, F. (2002). A guide to vehicle
routing heuristics, Journal of the Operational Research Society 53(5): 512–522.
Crainic, T., Perboli, G. & Tadei, R. (2008). Extreme point-based heuristics for three-dimensional
bin packing, INFORMS Journal on Computing 20: 368–384.
Crainic, T., Perboli, G. & Tadei, R. (2009). TS2PACK: A two-level tabu search for the
three-dimensional bin packing problem, European Journal of Operational Research
195: 744–760.
den Boef, E., Korst, J., Martello, S., Pisinger, D. & Vigo, D. (2005). Erratum to “The three-dimensional bin packing problem”: Robot-packable and orthogonal variants
of packing problems, Operations Research 53(4): 735–736.
Egeblad, J. & Pisinger, D. (2009). Heuristic approaches for the two- and three-dimensional
knapsack packing problem, Computers & Operations Research 36: 1026–1049.
Faroe, O., Pisinger, D. & Zachariasen, M. (2003). Guided local search for the three-dimensional
bin packing problem, INFORMS Journal on Computing 15(3): 267–283.
Fekete, S. P. & Schepers, J. (1997). A new exact algorithm for general orthogonal d-dimensional
knapsack problems, ESA ’97, Springer Lecture Notes in Computer Science 1284: 144–156.
Fekete, S. P. & Schepers, J. (2004a). A combinatorial characterization of higher-dimensional
orthogonal packing, Mathematics of Operations Research 29(2): 353–368.
Fekete, S. P. & Schepers, J. (2004b). A general framework for bounds for
higher-dimensional orthogonal packing problems, Mathematical Methods of Operations
Research 60(2): 311–329.
Gehring, H. & Bortfeldt, A. (1997). A genetic algorithm for solving the container loading
problem, International Transactions in Operational Research 4: 401–418.
George, J. A. & Robinson, D. F. (1980). A heuristic for packing boxes into a container, Computers
& Operations Research 7(3): 147–156.
Gilmore, P. C. & Gomory, R. E. (1965). Multistage cutting problems of two and more
dimensions, Operations Research 13: 94–119.
Hadjiconstantinou, E. & Christofides, N. (1995). An exact algorithm for general, orthogonal,
two-dimensional knapsack problems, European Journal of Operational Research
83(1): 39–56.
Hadjiconstantinou, E. & Iori, M. (2007). A hybrid genetic algorithm for the two-dimensional
single large object placement problem, European Journal of Operational Research
183: 1150–1166.
Imahori, S., Yagiura, M. & Ibaraki, T. (2003). Local search algorithms for the rectangle packing
problem with general spatial costs, Mathematical Programming, Series B 97: 543–569.
Joncour, C., Pêcher, A. & Valicov, P. (2010). MPQ-trees for orthogonal packing problem,
Electronic Notes on Discrete Mathematics 36: 423–429.
Korte, N. & Möhring, R. (1989). An incremental linear-time algorithm for recognizing interval
graphs, SIAM J. Comput. 18: 68–81.
Lai, K. K. & Chan, J. W. M. (1997a). Developing a simulated annealing algorithm for the
cutting stock problem, Computers & Industrial Engineering 32: 115–127.
Lai, K. K. & Chan, J. W. M. (1997b). An evolutionary algorithm for the rectangular cutting
stock problem, International Journal of Industrial Engineering 4: 130–139.
Leung, T. W., Yung, C. H. & Troutt, M. D. (2001). Applications of genetic search and simulated
annealing to the two-dimensional non-guillotine cutting stock problem, Computers &
Industrial Engineering 40: 201–214.
Liu, D. & Teng, H. (1999). An improved BL-algorithm for genetic algorithm of the orthogonal
packing of rectangles, European Journal of Operational Research 112: 413–420.
Lodi, A., Martello, S. & Monaci, M. (2002). Two-dimensional packing problems: a survey,
European Journal of Operational Research 141: 241–252.
Lodi, A., Martello, S. & Vigo, D. (1999). Heuristic and metaheuristic approaches for a class of
two–dimensional bin packing problems, INFORMS Journal on Computing 11: 345–357.
Lodi, A., Martello, S. & Vigo, D. (2004a). Models and bounds for two–dimensional level
packing problems, Journal of Combinatorial Optimization 8: 363–379.
Lodi, A., Martello, S. & Vigo, D. (2004b). Tspack: A unified tabu search code for
multi-dimensional bin packing problems, Annals of Operations Research 131: 203–213.
Mack, D., Bortfeldt, A. & Gehring, H. (2004). A parallel hybrid local search algorithm
for the container loading problem, International Transactions in Operational Research
11: 511–533.
Martello, S., Pisinger, D. & Vigo, D. (2000). The three-dimensional bin packing problem,
Operations Research 48(2): 256–267.
Martello, S., Pisinger, D., Vigo, D., den Boef, E. & Korst, J. (2007). Algorithm 864: General
and robot-packable variants of the three-dimensional bin packing problem, ACM
Transactions on Mathematical Software 33, Article No 7: 1–12.
Martello, S. & Toth, P. (1990). Knapsack Problems - Algorithms and computer implementations, John
Wiley & Sons, Chichester, UK.
Martello, S. & Vigo, D. (1998). Exact solution of the finite two dimensional bin packing
problem, Management Science 44(44): 388–399.
Moura, A. & Oliveira, J. F. (2005). A grasp approach to the container loading problem, IEEE
Intelligent Systems 20: 50–57.
Murata, H., Fujiyoshi, K., Nakatake, S. & Kajitani, Y. (1996). Vlsi module placement based on
rectangle-packing by the sequence-pair, IEEE Trans. Comput. Aided Des. 15: 1518–1524.
Parreño, F., Alvarez-Valdes, R. & Oliveira, J. F. (2008). A maximal-space algorithm for the
container loading problem, INFORMS Journal on Computing 20(3): 412–422.
Parreño, F., Alvarez-Valdes, R. & Oliveira, J. F. (2010). Neighborhood structures for the
container loading problem: a VNS implementation, Journal of Heuristics 16: 1–22.
Perboli, G. (2002). Bounds and heuristics for the Packing Problems, PhD thesis, Politecnico di
Torino, Turin, Italy.
Perboli, G. (2011). An efficient metaheuristic for multi-dimensional knapsack problems,
DAUIN Tech. Rep., Department of Control and Computer Engineering, Politecnico
di Torino, Turin, Italy.
Perboli, G., Crainic, T. G. & Tadei, R. (2011). An efficient metaheuristic for multi-dimensional
multi-container packing, Proceedings of seventh annual IEEE Conference on Automation
Science and Engineering (IEEE CASE 2011), pp. 1–6.
Pisinger, D. (2002). Heuristics for the container loading problem, European Journal of
Operational Research 141: 382–392.
Pisinger, D. & Sigurd, M. M. (2007). Using decomposition techniques and constraint
programming for solving the two-dimensional bin packing problem, INFORMS
Journal on Computing 19: 36–51.
SPEC (2006). SPEC CPU2006 benchmarks. http://www.spec.org/cpu2006/results/.
Wang, P. Y. (1983). Two algorithms for constrained two-dimensional cutting stock problems,
Operations Research 31: 573–586.
Wäscher, G., Haussner, H. & Schumann, H. (2007). An improved typology of cutting and
packing problems, European Journal of Operational Research 183: 1109–1130.
Part 2
Nanotechnologies

6
Nano Research Trends of Critical Scientific Fields Across Leading
Worldwide Geo-Economic Players and Their Spatial Interactions
Mario Coccia¹,², Ugo Finardi¹,³,* and Diego Margon¹
¹National Research Council of Italy, CERIS-CNR, Moncalieri-Torino, Italy
²Georgia Institute of Technology, School of Public Policy, Atlanta, USA
³Università di Torino, Dipartimento di Chimica I.F.M., Italy
1. Introduction
Nanotechnologies are one of the NBIC “converging technologies” (Nanotechnologies, Biotechnologies, Informatics and Cognitive Sciences) that are foreseen to change the world in the near future (Roco, 2008; Linstone, 2011). Nanoscience and, in particular, nanotechnologies are a new “technological system” (Freeman and Soete, 1987, p. 67). Nanoscience studies are flourishing in several countries, and scientists tend, more and more, to publish on critical research topics such as recently invented nanomaterials, new techniques suitable to study and characterize them, preparation techniques and substances used to produce nanomaterials and nanostructured objects, properties and technological uses of nanostructured materials, and so on (cf. Islam and Miyazaki, 2010; Islam and Miyazaki, 2009; Bainbridge and Roco, 2006; Coccia et al., 2011). The importance of nanotechnologies and nanoscience has begun to go beyond the confines of laboratories and research centres and is nowadays well present wherever industrial innovation takes place (Goddard III et al., 2007). In fact, nanotechnological innovations are critical in several industries such as microelectronics, chemistry, public health, environment, etc. (see Bainbridge and Roco, 2006; Pilkington et al., 2009; Tegart, 2009; Glenn, 2006; van Merkerk and van Lente, 2005).

The spreading of nanotechnology in basic sciences and in applied research has also given rise to great interest in its study by the economics of science and innovation (cf. Bozeman et al., 2007; Rogers, 2010; Coccia, 2011; Coccia, 2012). In fact, there is a vital interest in analyzing the technological trajectories of nanotechnology and the specificity of countries in nanoscience production and its application, in order to forecast research trends and future effects on industrial dynamics across countries (cf. Salerno et al., 2008; Bainbridge and Roco, 2006; de Miranda Santo et al., 2006; Avenel et al., 2007).

* Corresponding Author

The purpose of this paper is to analyze the current technological trajectories and interactions in nanoscience and nanotechnology studies across worldwide economic players. In particular, the main research questions addressed are:
• Which are the current driving research fields in which nanotechnologies have been developing?
• What is the behaviour of the leading geo-economic areas in the production of nanoscience and nanotechnology knowledge?
• What is the intensity of scientific collaborations across the leading geo-economic players?
The study analyzes the codified scientific production in this vital “technological system” to show how different geo-economic regions (such as North America and Europe) have acted and reacted towards nanotechnology studies, and how they have been behaving over time in scientific knowledge production and international collaboration in Nanoscience and Nanotechnologies (NSTs). This research provides findings that help to understand the current worldwide research trends in NSTs. This is important to support modern innovation policies aimed at improving the development of such converging technologies, able to support future patterns of economic growth.
Section 2 of this paper presents a theoretical framework on nanotechnologies and nanosciences; section 3 describes the research methodology; section 4 analyzes the results; and section 5 discusses the lessons learned.
2. Theoretical background
“Nanoscience is the result of interdisciplinary cooperation between physics, chemistry, biotechnology, material sciences and engineering towards studying assemblies of atoms and molecules” (Renn and Roco, 2006, p. 154)¹.
The “birth certificate” of NSTs, at least from the conceptual point of view, is considered to be the renowned speech given at the American Physical Society meeting held at the California Institute of Technology by Richard P. Feynman (1960), where the 1965 Nobel Prize Laureate uttered the famous sentence “There is plenty of room at the bottom”, talking about the opportunities for science and technology offered by the vast expansion of scientific and technological research towards the nanometric dimensional range and describing molecular machines built with atomic precision. The first use of the word “nano-technology” has instead to be assigned to Taniguchi (1974) of Tokyo Science University, who used it in an article on ion-sputtering machining.

Since then, the spreading and growth of NSTs has been marked by inventions and findings in terms of new nanostructured materials, investigation and characterization techniques, and new nano-objects. From the operational point of view, one of the most common opinions is that NSTs originated in 1981 with the creation of the Scanning Tunnelling Microscope (STM) in the IBM laboratories in Zurich by the 1986 Nobel Laureates in Physics Gerd K. Binnig and Heinrich Rohrer. From the point of view of nanostructured materials, 1985 marks the discovery of the Buckyball (Buckminsterfullerene) by Kroto and Smalley (the discovery would gain them the Nobel Prize in Chemistry in 1996; see Kroto et al., 1985); 1990 the discovery of silica mesoporous materials by Yanagisawa and co-workers at Waseda University in Tokyo (Yanagisawa et al., 1990); and 1991 the discovery of carbon nanotubes by Iijima (1991) at NEC Corp. From the point of view of new nanostructured objects, the work performed by Eigler and Schweizer (1990), who spelled out the IBM logo in individual atoms on a nickel surface, is remarkable. Several scientific journals having the stem “nano” in their title are published nowadays.

¹ Cf. also Roco, 2007, pp. 3.1-3.26.
NSTs represent mostly an approach to science, technology and innovation rather than a specific sector in itself. For instance, the website of the American National Nanotechnology Initiative² states³:
“Nanoscience involves research to discover new behaviours and properties of materials
with dimensions at the nanoscale which ranges roughly from 1 to 100 nanometres (nm).
Nanotechnology is the way discoveries made at the nanoscale is put to work.
Nanotechnology is more than throwing together a batch of nanoscale materials — it requires
the ability to manipulate and control those materials in a useful way.
Nanotechnology is the understanding and control of matter at dimensions between
approximately 1 and 100 nanometers, where unique phenomena enable novel applications.
Encompassing nanoscale science, engineering, and technology, nanotechnology involves
imaging, measuring, modelling, and manipulating matter at this length scale [...] Unusual
physical, chemical, and biological properties can emerge in materials at the nanoscale. These
properties may differ in important ways from the properties of bulk materials and single
atoms or molecules.”
On the one hand, the definition discriminates between science and technology, a distinction that is sometimes hard to draw. On the other hand, it describes precisely and briefly the fundamental characters of NSTs: they act in a well-defined dimensional range, and this is substantial and cannot be disregarded; their purpose is discovering new behaviours and properties distinctive of materials when nanostructured. From this point onwards, technologies have the purpose of transforming the new knowledge into innovation.

Since NSTs can be defined as an approach towards matter, and given their “transversal” character, when we discuss the transfer of nanoscience into technological innovation it is clear that we cannot talk about “application sectors” of NSTs. This is not because nanotechnologies cannot be applied to industrial innovation and to the production of goods but, on the contrary, because the list of sectors is virtually endless.
The technological application of NSTs has been first of all in niche industries, mostly knowledge-intensive and with high-added-value products, such as the production of catalysts for industrial processes (cf. Zecchina et al., 2007; Evangelisti et al., 2007) or biomaterials produced for bone substitution inside the human body (cf. Bertinetti et al., 2006; Celotti et al., 2006), and so on. In these cases, the distance existing between basic/purpose-free research and technological innovation is almost non-existent, or very narrow, and the high added value of the goods justifies the economic engagement of scientific research.

² See: http://www.nano.gov/Nanotechnology_BigThingsfromaTinyWorldspread.pdf (accessed July 2010); http://www.nano.gov/html/facts/whatIsNano.html (accessed July 2010).
³ Cf. also Siegel et al., 1999.
Other leading-edge industries where the use of nanotechnologies is established are those of biotechnologies and electronics: bioelectronics. In this last case the downscaling of circuitry – down to and below the limit of 45 nm (nanometers) – has mostly benefited from the extreme frontier of manipulation technologies in order to reach higher miniaturization.
NSTs are not only transversal to possible industrial applications, but also to scientific sectors: e.g. material sciences, chemical and physical sciences, and material engineering. Different traditional scientific fields have, in general, a different approach towards NSTs, as well described by Balzani (2005), who gives his own definition of sciences and technologies and underlines the different approaches adopted towards NSTs by different categories of scientists. The typical approach of physicists and engineers is the so-called top-down approach, where matter is manipulated instrumentally – e.g. with the techniques of photolithography – in order to obtain the desired results: in this way, the dimensional barrier of 100 nanometers has been a hard one to overcome.

The typical approach of chemists is exactly the reverse of the previous one: a bottom-up approach, where objects lying in the molecular dimensional domain – thus around and slightly below the nanometer – can be used as “bricks” to build nanostructured objects with bigger dimensions, such as molecular computers, with high scientific and technological content in the quest for innovative applications.
Nanotechnologies are nowadays fully inserted in the paths of “creative destruction” generated by technical knowledge in industries (Bozeman et al., 2007). NSTs are at the convergence of several scientific and technological fields and affect the economic system through the emergence of new industries (Bainbridge and Roco, 2006). Moreover, university spinouts in NSTs are gaining importance and are playing a critical role in regional development (Libaers et al., 2006). NSTs are also in a cutting-edge position to enhance new systems for environmental control and remediation, though some envisage dangers from their use (Rickerby and Morrison, 2007).
Scientometric studies are effective approaches to analyze the emergence and development of research fields in nanotechnology (Braun et al., 1997; Rogers, 2010). Salerno et al. (2008) argue that: “Bibliometric analysis of publications […] can help have a synthetic picture of the best players at a worldwide level, their lines of inquiries and their relationships, that is, they could help to cope with the extremely fragmented knowledge, actors and applications involved in the evolution of the field” (p. 1220). Leydesdorff and Zhou (2007), basing their work on Journal Citation Report data, show that “nano” journals have more complex content than other journals – from the point of view of citations – and that their position is at the interface between physics and chemistry. In fact, Leydesdorff (2008) also shows the growing interdisciplinary effects of NSTs. Kostoff et al. (2006; 2007; 2007a) provide an overview of the NST literature and show the continuous evolution and growth of NSTs, driven by Asian countries. Schultz and Joutz (2010) perform a patent analysis on USPTO nanotechnology patents. Several patent clusters are identified using citations; this leads them to affirm that a handful of very general nanotechnologies are developing, with the potential for a wide economic impact.
Nanostructured materials and nanotech production processes are being developed for use in a wide range of sectors (p. 167). Also Shea et al. (2011) analyze the same data source with a descriptive statistical analysis and show similar results. Finardi (2011) uses citations of journal articles in patents to calculate the time elapsed between scientific activities and the patenting of technology in NSTs. Findings show a time distance of 3-4 years between the two activities, while other similar fields show very different behaviour.
It is then obvious from this theoretical background that a deep scientific analysis of
research trends and interaction in the scientific production of NSTs across leading
worldwide players is an important topic to be developed in order to understand the
current technological trajectories that may support future spatial patterns of economic
growth.
3. Strategy of research
This paper uses the Scopus database: “Scopus is the largest abstract and citation database of peer-reviewed literature and quality web sources with smart tools to track, analyze and visualize research. It’s designed to find the information scientists need […] Scopus provides superior support of the literature research process” (Scopus, 2010)⁴.
Scopus has been preferred to other analogous web-databases because:
• It encompasses a wider set of data: “With over 18,000 titles from more than 5,000 publishers, Scopus offers researchers a quick, easy and comprehensive resource to support their research needs in the scientific, technical, medical and social science fields and, more recently, also in the arts and humanities”⁵.
• It has the broadest available coverage, with more than half of the content originating from Europe, Latin America and the Asia Pacific region⁶.
• It has a wide set of data retrieval instruments, useful in performing Data Mining.
• It exploits a system of classification of titles under categories: “Titles in Scopus are classified under four broad subject clusters (Life Sciences, Physical Sciences, Health Sciences and Social Sciences & Humanities) which are further divided into 27 major subject areas and 300 minor subject areas. Titles may belong to more than one subject area”⁷.
Data mining from Scopus (2010) was performed using the following methodology:
a. a search for “nano*”⁸ on “Article Title, Abstract, Keyword” is made;
b. on the selected records a further refinement is performed using the “Refine results” frame, selecting only those records containing one or more of the following keywords: “Nanostructured materials”, “Nanotechnology” or “Nanostructures”.
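Purely as an illustration of this two-step refinement (the record structure and field names below are hypothetical and do not reflect the Scopus interface), the filtering logic can be sketched in Python as follows.

TARGET_KEYWORDS = {"Nanostructured materials", "Nanotechnology", "Nanostructures"}

def step_a(records):
    """Keep records whose title, abstract or keywords contain a term starting with 'nano'."""
    result = []
    for r in records:
        text = " ".join([r["title"], r["abstract"], " ".join(r["keywords"])])
        if any(word.lower().startswith("nano") for word in text.split()):
            result.append(r)
    return result

def step_b(records):
    """Refine step (a): keep only records carrying at least one target keyword."""
    return [r for r in records if TARGET_KEYWORDS & set(r["keywords"])]

sample = [{"title": "Nanostructured coatings",
           "abstract": "Synthesis and characterization of thin films.",
           "keywords": ["Nanostructured materials", "Coatings"]}]
print(len(step_b(step_a(sample))))   # -> 1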

⁴ http://info.scopus.com/about/ (accessed 11 June 2010); see also http://info.scopus.com/why-scopus/academia/ (accessed June 18th, 2010).
⁵ http://info.scopus.com/scopus-in-detail/content-coverage-guide/ (accessed June 18th, 2010).
⁶ http://info.scopus.com/scopus-in-detail/facts/ (accessed July 1st, 2010).
⁷ http://info.scopus.com/scopus-in-detail/content-coverage-guide/journalclassification/ (accessed June 18th, 2010).
⁸ “*” is the usual wildcard meaning “any series of characters after the ones written”.

In particular, Data Mining is performed on:
• Time Horizon from 1996 to 2008, in order to analyze the temporal research trends and scientific interactions. Within the range 1996-2008 we have the opportunity to retrieve all the information analyzed, whereas this is not possible for years before 1996 (when Scopus starts gathering full data) and after 2008 (as Data Mining was performed in January 2010).
• Key geo-economic areas: the selected areas are USA and Canada, South Korea, Japan, China and Europe⁹. These geo-economic and political areas are the main worldwide players in the production of nanotechnology and nanoscience studies.
Once the quantitative data have been retrieved, we have main information about several characteristics of scientific products in NSTs. In particular, we show the affiliations of authors (i.e. the main research institutions and/or labs where the research is carried out by scholars) and the subject areas¹⁰ of nanoscience and nanotechnology studies published in leading scientific journals. Our samples are based on the 149,324 scientific products (e.g. Articles, Proceedings, etc.) on nanotechnology studies with their affiliations (about 96% of the main research centres operating in NSTs), retrieved as described above per country and year. As papers concerning nanotechnology studies are published in journals that are classified into 28 subject areas¹⁰, the 149,324 scientific products have almost 400,000 occurrences of subject areas. In general, the number of occurrences of subject areas by journals is greater than the total number of scientific products (i.e. papers)¹¹. The occurrences of articles represent a view of subject areas in nanotechnology studies and of how much attention they have received in the scientific literature.
The vast sample of papers classified by Scopus in main subject areas has been aggregated into five “Macro Subject Areas”: Material Science, Chemistry and Medicine, Physics and Earth Sciences, Engineering, and “Others”; all marginal areas of nanotechnology studies (less than 5% of the sample) have been included under the category “Others” (Information and Mathematics Sciences, Social and Economic Sciences, Energy, Environmental Science). Table 1A in Appendix shows the number of scientific products (mainly papers) for each Macro Subject Area. This aggregation has been important to show the temporal and spatial pattern of nanotechnology research trends across countries. A more detailed analysis by keywords has not been considered, first of all because of the high number of generic keywords like “Synthesis”, “Chemistry”, “Priority journal”, “Crystallization”, “Methodology”, etc. Moreover, single keywords do not necessarily refer to a single research field, making such an analysis less meaningful. Also, the categorization of research domains into “nanomaterials” and “nanoelectronics” has not been considered because of their inner overlaps: nanomaterials are heavily applied in nanoelectronics; therefore, considering this categorization is not fruitful for investigating the real nanotechnology research trajectories and could lead to ambiguous results and misleading research trends. Vice versa, the aggregate sets applied in this research provide more accurate and robust results about the temporal and spatial research trends.

⁹ In “Europe” the selected countries are: Albania, Austria, Belarus, Belgium, Bosnia, Bulgaria, Croatia, Czech Republic, Estonia, Finland, France, Germany, Greece, Holland, Hungary, Ireland, Italy, Latvia, Lithuania, Macedonia, Moldova, The Netherlands, Norway, Poland, Portugal, Romania, Russia, Serbia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Ukraine, and United Kingdom.
¹⁰ Scopus classifies journals in major subject areas, such as “Energy”, “Chemistry”, “Engineering”, etc. Journals can be allocated to multiple subject areas as appropriate to their scope. We use all subject areas containing papers on nanotechnology studies. Interestingly, the average number of subject areas that journals in the “Energy” papers belong to (2.09) is higher than the average value for all science (1.37), indicating that they exhibit a strong degree of interdisciplinarity.
¹¹ For instance, a paper about nanotechnology published in the journal Scientometrics is one paper with 3 subject areas, since Scientometrics is classified with three subject areas (computer science applications, social sciences, and library and information sciences).
Another main scientometric analysis performed is based on the scientific interaction in nanotechnology production across geo-economic areas. For each geographical area we have considered, within its scientific output, the foreign affiliations in nanotechnology studies, in order to see the mutual scientific interaction in nano scientific research production.

The main limit imposed by the Scopus search engine is the maximum of 160 items (the most represented ones) returned for each data-mining query. Other limits are the fact that NSTs are not present as an autonomous subject area in Scopus (a limit overcome by our Data Mining procedure) and that not all papers/proceedings in nanotechnology studies are captured and included in the Scopus dataset. Nevertheless, this is also a weak point of other web-based data collections. The information analysis of our samples is carried out by statistical and graphical analysis, considering some critical research fields and geo-economic areas, in order to show the driving research trends and interactions in nanotechnology studies.
4. Empirical analysis
This paper analyzes five main geo-economic areas in the production of nanotechnology, based on the research centres and their scientific output present in Scopus (2010). As for the structure of domestic research centres, their aggregate number has been calculated by assigning the respective geo-economic area (of the primary physical base) to all occurrences of affiliations present in our databases producing at least one scientific product in nanotechnologies. The highest number of research labs in nanotechnology over the 1996-2008 period is in Europe and North America (i.e. USA and Canada); see Figure 1. Europe and North America have in 2008 about 150 research centres operating in nanotechnology fields. Japan has a lower number of research centres than the previously described leading geo-economic areas, with roughly 100 units and a stable cumulative temporal number in the range 107-117. China and South Korea are the two geo-economic areas where the number of nanotechnology research centres has been increasing, reducing in 2008 the high gap present in 1996 in comparison with the level of Europe and North America¹²: in particular, China has more than 130 nanotechnology research centres operating in 2008 (Table 2A in Appendix shows the cumulative number of these research centres over 1996-2008, across geo-economic areas, and their scientific outputs in the last 15 years).

¹² Cf. de Miranda Santo et al. (2006), pp. 1022ff.
Figures 2-6 show the main research fields of nanoscience studies from 1996 to 2008 across worldwide geo-economic areas. As the absolute numbers of scientific products across geo-economic areas are not suitable values for reliable spatial and temporal comparisons (as research trends are similar), we apply percent values to analyze the mutual temporal dynamics within research fields in NSTs. These trends show some common patterns:
although nanotechnology studies in Material Science have a higher scientific production than the other macro subject areas (see Table 1A), the internal dynamics among macro subject areas show mainly a relative reduction over time and space of studies in nanomaterial sciences (decreasing returns to production), whereas the studies of nanotechnology applied in Chemistry and Medicine have been increasing. In addition, the highest relative increase of nanoscience studies in Chemistry and Medicine, measured by the coefficients of the regression lines, is in China (=2.2) and South Korea (=1.95), whereas the lowest magnitude is in Japan (=1.4). These results indicate that some nanotechnology research domains which have generated the main inventions of several nanomaterials are mature research fields, whereas nowadays studies of nanotechnology in Chemistry and Medicine have been growing because modern research centres focus their scientific research on critical innovations in more applied sectors of NSTs. This means that some nanotechnology trajectories have been passing from the invention to the innovation phase.
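As a concrete illustration of how the percent values and regression coefficients above can be obtained, the following minimal sketch (with placeholder counts, not the chapter's data) converts absolute yearly paper counts per macro subject area into percent shares and estimates the slope of the linear trend of the "Chemistry and Medicine" share, analogous to the coefficients reported for China, South Korea and Japan.

```python
# Illustrative sketch: percent shares per macro subject area and the linear trend
# of the "Chemistry and Medicine" share (cf. Figures 2-7). The counts below are
# placeholders, not the data analysed in the chapter.
import numpy as np

years = np.arange(1996, 2009)
counts = {                                  # papers per macro subject area per year
    "MS":  np.linspace(120, 2200, len(years)),
    "C&M": np.linspace(20, 1800, len(years)),
    "PES": np.linspace(80, 900, len(years)),
    "ENG": np.linspace(40, 700, len(years)),
    "OTH": np.linspace(10, 200, len(years)),
}
total = sum(counts.values())
shares = {area: 100.0 * c / total for area, c in counts.items()}   # percent values

# Slope of the linear regression y = b*t + a fitted to the C&M share
# (t = 1, 2, ... corresponds to the x-axis of Figures 2-7).
t = np.arange(1, len(years) + 1)
slope, intercept = np.polyfit(t, shares["C&M"], 1)
print(round(slope, 2), round(intercept, 2))
```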
Nanoscience studies in "Physics and Earth Sciences" show a roughly steady, relatively declining trend across geo-economic areas. Studies of nanotechnology in the Engineering sciences also have a steady trend across the areas, except for Japan, which shows an unstable increasing temporal trend. The results are confirmed by Figure 7 for all geo-economic areas.
As the driving nanotechnology studies in "Chemistry and Medicine" have been increasing over the last 15 years with a relatively high rate of growth, due to the high number of applications (innovations) in several research fields, the inner dynamics have been divided into two periods (1996-2002 and 2002-2008) in order to capture the temporal paths across countries. Figure 8 shows a relatively critical role, over the 1996-2002 period, of Europe and USA-Canada, followed by Japan (third position). If this analysis is repeated over the 2002-2008 period (see Figure 9), nanotechnology studies in Chemistry and Medicine carried out in China have been increasing, predominating over the trend of Japan13 (Figures 1A and 2A in the Appendix show the absolute and percent values of scientific products concerning nanotechnology studies applied in Chemistry and Medicine across geo-economic areas).
Figure 10 shows the driving subject areas of nanotechnology studies within the macro subject area "Chemistry and Medicine", e.g. Chemical Engineering, Biochemistry, Pharmaceutics, etc.; these subject areas confirm the innovation phase of the dynamics of some nanotechnology trajectories.
As far as nanotechnology studies in "Material Sciences" are concerned, the leading countries are mainly Europe and China over the 1996-2008 period (Figure 11), although the relative role of China has been increasing over 2002-2008 (Figure 12). Other macro areas, i.e. "Physics and Earth Sciences" and "Engineering", show the leadership of Europe and USA-Canada. For the sake of brevity some figures are not reported.
13 de Miranda Santo et al. (2006) confirm the great contribution of China to scientific research in nanoscience and nanotechnology in the group of competitor countries (p. 1024).
[Figure 1: line chart; x-axis: years 1996-2008; y-axis: number of labs (0-160); series: China, Europe, Japan, South Korea, USA-Canada]
Fig. 1. Research Centres operating in nanotechnology across countries, 1996-2008 period

[Figure 2: line chart for China; x-axis: years 1996-2008; y-axis: % values (0-60); series: MS - Material Science, C & M - Chemistry and Medicine, PES - Physics and Earth Sciences, ENG - Engineering, OTH - Others (Information and Mathematics Sciences, Social and Economic Sciences, Energy, Environmental Science); C & M linear regression: y = 2.20x + 4.97, R² = 0.89]
Fig. 2. Research trend measured by number of papers in nanotechnology studies (% values) classified per macro subject areas – China

[Figure 3: line chart for Europe; x-axis: years 1996-2008; y-axis: % values (0-45); series: MS, C & M, PES, ENG, OTH (legend as in Fig. 2); C & M linear regression: y = 1.68x + 11.42, R² = 0.80]
Fig. 3. Research trend measured by number of papers in nanotechnology studies (% values) classified per macro subject areas – Europe
[Figure 4: line chart for Japan; x-axis: years 1996-2008; y-axis: % values (0-50); series: MS, C & M, PES, ENG, OTH (legend as in Fig. 2); C & M linear regression: y = 1.40x + 10.77, R² = 0.82]
Fig. 4. Research trend measured by number of papers in nanotechnology studies (% values) classified per macro subject areas – Japan
[Figure 5: line chart for South Korea; x-axis: years 1996-2008; y-axis: % values (0-60); series: MS, C & M, PES, ENG, OTH (legend as in Fig. 2); C & M linear regression: y = 1.95x + 5.23, R² = 0.80]
Fig. 5. Research trend measured by number of papers in nanotechnology studies (% values) classified per macro subject areas – South Korea
[Figure 6: line chart for USA & Canada; x-axis: years 1996-2008; y-axis: % values (0-50); series: MS, C & M, PES, ENG, OTH (legend as in Fig. 2); C & M linear regression: y = 1.72x + 13.32, R² = 0.78]
Fig. 6. Research trend measured by number of papers in nanotechnology studies (% values) classified per macro subject areas – USA & Canada

[Figure 7: line chart for China, Europe, Japan, South Korea, USA and Canada combined; x-axis: years 1996-2008; y-axis: % values (0-45); series: MS, C & M, PES, ENG, OTH (legend as in Fig. 2); C & M linear regression: y = 1.69x + 10.94, R² = 0.84]
Fig. 7. Research trend measured by number of papers in nanotechnology studies (% values) classified per macro subject areas – All geo-economic areas

[Figure 8: line chart; x-axis: years 1996-2002; y-axis: % values (0-60); series: USA-Canada, South Korea, Japan, Europe, China]
Fig. 8. Research trend per geo-economic areas measured by number of papers in nanotechnology studies classified in Chemistry and Medicine over 1996-2002 (% values)
[Figure 9: line chart; x-axis: years 2002-2008; y-axis: % values (0-40); series: USA-Canada, South Korea, Japan, Europe, China]
Fig. 9. Research trend per geo-economic areas measured by number of papers in nanotechnology studies classified in Chemistry and Medicine over 2002-2008 (% values)
[Figure 10: pie chart of subject-area shares within "Chemistry and Medicine": Chemistry 52.73%, Chemical Engineering 23.04%, Biochemistry, Genetics and Molecular Biology 13.55%, Medicine 5.31%, Pharmacology, Toxicology and Pharmaceutics 3.61%, Immunology and Microbiology 0.83%]
Fig. 10. Percent value of main research fields of nanotechnology studies applied in Chemistry and Medicine
Another main result, shown in Figure 13, concerns the mutual scientific interaction across geo-economic areas in nanotechnology studies. Although each geo-economic area produces the vast majority of its scientific output within domestic nanotechnology research centres (about 90%), the residual is carried out in collaboration with foreign scholars and research centres. The results are as follows: labs in Europe and USA-Canada have a high capacity to attract foreign scholars to scientific research on nanotechnology and nanoscience, measured by joint affiliations in papers (see the bars above the x-axis in Figure 13), whereas South Korea and China are the two geographic areas with the highest number of scientific collaborations with other scientific players in nanotechnology studies.
[Figure 11: line chart; x-axis: years 1996-2002; y-axis: % values (0-50); series: China, Europe, Japan, South Korea, USA-Canada]
Fig. 11. Research trend per geo-economic areas measured by number of papers in nanotechnology studies classified in Material science over 1996-2002 (% values)
[Figure 12: line chart; x-axis: years 2002-2008; y-axis: % values (0-50); series: China, Europe, Japan, South Korea, USA-Canada]
Fig. 12. Research trend per geo-economic areas measured by number of papers in nanotechnology studies classified in Material science over 2002-2008 (% values)
[Figure 13: bar chart; x-axis: China, Europe, Japan, South Korea, USA-Canada; y-axis: DELTA (-1500 to 1500)]
Note: DELTA is the difference between (scientific products in nanotechnology studies produced in domestic research centres of country A with foreign institutions) and (scientific products produced by other geo-economic areas in collaboration with research centres of country A); a positive delta means a high attraction capacity in nanotechnology research by the specific country, whereas a negative delta means a country with intensive collaborations in nanotechnology research with foreign labs.
Fig. 13. Research attraction capacity of foreign scholars in nanotechnology research per geo-economic areas, 1996-2008 period
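As a reading aid for the note above, the following sketch shows one way the DELTA indicator could be computed from collaboration counts; the numbers are purely illustrative placeholders, not the values behind Figure 13.

```python
# Illustrative computation of the DELTA indicator described in the note to Fig. 13.
# outgoing: papers of a country's domestic centres co-authored with foreign institutions;
# incoming: papers produced by other geo-economic areas in collaboration with that country's centres.
# All counts are placeholders.
outgoing = {"China": 900, "Europe": 2500, "Japan": 1100, "South Korea": 600, "USA-Canada": 2300}
incoming = {"China": 1400, "Europe": 1600, "Japan": 1000, "South Korea": 1300, "USA-Canada": 1500}

delta = {area: outgoing[area] - incoming[area] for area in outgoing}
for area, d in sorted(delta.items()):
    # positive: high attraction of foreign scholars; negative: intensive collaboration with foreign labs
    print(f"{area}: {d:+d}")
```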

[Figure 14: line chart; x-axis: years 1996-2008; y-axis: 0-60 (axis labelled "Percentage (%)" in the original); series: China, Europe, Japan, South Korea, USA & Canada]
Fig. 14. Scientific products in NSTs per million people across geo-economic areas over 1996-2008
5. Discussion
The main results of this research are:
- Europe and USA-Canada have the highest number of nanotechnology research centres, although the key role of China has been increasing over time, surpassing Japan.
- Nanotechnology studies in Material Science over the 1996-2008 period have a higher scientific production than the other macro subject areas; however, there is a relative production increase in the research fields of "Chemistry and Medicine" and a relative production decrease in "Material Sciences".
- The driving geo-economic areas of nanotechnology studies in "Chemistry and Medicine" are Europe and North America, whereas the relative highest rates of growth are in China and South Korea14.
- The main nanotechnology research fields applied in "Chemistry and Medicine" are: Chemistry (~53%), Chemical Engineering (~23%), and Biochemistry, Genetics and Molecular Biology (~14%).
- In nanotechnology research, Europe and North America have a high capacity to attract scholars from other geo-economic areas, whereas the country with the highest number of collaborations in nanotechnology studies with the leading countries is South Korea (over 1996-2008).
14 However, these results, based on linear trends, are only an approximation and should be examined further if they are to be used for forecasting purposes.
Why do Europe and USA-Canada have a higher production in nanotechnology studies?
The determinant can be the higher rate of public R&D investment in NSTs, which according to Roco (2005) in 2004 was about $1,100M in the USA (3.7 $/capita)15, ~$1,050M in the EU-25 (2.3 $/capita), ~$950M in Japan (7.4 $/capita), ~$250M in China (0.2 $/capita) and ~$300M in Korea (6.2 $/capita). According to Huang et al. (2004), the United States holds over 60 percent of world nanotechnology patents.
Why has the relative NST research trend in "Chemistry and Medicine" been increasing, while "Material Sciences" studies have been decreasing?
The temporal relative decrease of NST studies in "Material Science" and the increase in "Chemistry and Medicine" can be due to the technology trajectory, which has been passing from the invention phase of new nanomaterials to the innovation phase focused on innovative applications in biochemistry, medicine, genetics, etc. In other words, NSTs are a dynamic "new technological system" (Freeman and Soete, 1987, p. 67): some inventions might have become radical and incremental innovations applied in several fields such as chemical engineering and medicine. Islam and Miyazaki (2010) argue that: "US has gained much strength in bionanotechnology research relative to other domains, and the other regions (e.g. the EU, Japan, China, South Korea and India) have gained their research strength in nanomaterials, nanoelectronics and nanomanufacturing and tools" (p. 229). In addition, this new "technological system" has different inner nanotechnology trajectories that, by cross-fertilization, have been generating new "converging technologies" (Bainbridge and Roco, 2006) that are in the first phase of the S-shaped curve of growth (Roco, 2007), i.e. before the point of inflection: this phase is characterized by a high level of exponential growth that will generate new radical and incremental innovations in the not-too-distant future. Roco (2007) also conjectures that the dynamics of nanotechnology outcomes will pass the point of inflection after the year 2020 or thereabouts.
Figure 14 confirms that the development curve of nanotechnology production is not linear, but S-shaped over the 1996-2008 period, characterized by a disequilibrium pattern of growth. In particular, Figure 14 shows the relatively higher number of scientific outputs per million people in South Korea and Japan. A critical point is 2002, when the increasing trend of South Korea began prevailing over Japan and the other geo-economic players. In addition, Table 1 shows that R&D investment in nanotechnology, as $/capita, is 6.2 in South Korea, lower than in Japan (7.4). However, the NST outcome in South Korea is 27.92 scientific products per million people, a higher value than in Japan (22.30). This gap is larger if the scientific performances of 2008 are considered: 41.98 scientific products (in nanotechnology) per million people in South Korea vs. 19.93 in Japan. Therefore, these results show that the national subset of nanotechnology research in South Korea is more efficient than in Japan and the other geo-economic areas.
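One simple way to examine the S-shaped pattern discussed above is to fit a logistic curve to an area's cumulative scientific output; the sketch below does so for South Korea using the yearly scientific products of Table 2A, but it is only an indicative check (SciPy is assumed to be available), not the authors' procedure.

```python
# Sketch: fitting an S-shaped (logistic) curve to cumulative NST output.
# Yearly products for South Korea are taken from Table 2A; the fit is indicative only.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic curve with ceiling K, growth rate r and inflection time t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

years = np.arange(1996, 2009, dtype=float)
yearly = np.array([37, 51, 68, 124, 159, 260, 425, 864, 1330, 1705, 2460, 1363, 2000], dtype=float)
cumulative = np.cumsum(yearly)

(K, r, t0), _ = curve_fit(logistic, years, cumulative,
                          p0=[2 * cumulative[-1], 0.5, 2005.0], maxfev=10000)
print(f"estimated ceiling: {K:.0f}, inflection year: {t0:.1f}")
```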

15 "The 2011 Budget provides $1.8 billion for the National Nanotechnology Initiative (NNI), reflecting steady growth in the NNI investment. The cumulative NNI investment since 2001, including the 2011 request, now totals almost $14 billion. Cumulative investments in Environmental, Health and Safety (EHS) research since 2005 now total over $480 million. Cumulative investments in education and in research on ethical, legal, and other societal dimensions of nanotechnology since 2005 total over $260 million" (US National Nanotechnology Initiative: http://www.nano.gov/html/about/funding.html, accessed 8 June 2010).

Countries | Specific nanotech R&D 2004 ($/capita)* | Nanotechnology scientific products per million people, 2004 | Nanotechnology scientific products per million people, 2008 | Δ %
USA | 3.7 | 11.28 | 15.07 | 33.60
Europe | 2.3 | 6.62 | 7.65 | 15.56
Japan | 7.4 | 22.30 | 19.93 | -10.63
China | 0.2 | 2.40 | 3.80 | 58.33
South Korea | 6.2 | 27.92 | 41.98 | 50.36
* Source: Roco (2007), pp. 3.1-3.26
Table 1. Research Investments and scientific performance in nanotechnology studies across countries
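The Δ % column of Table 1 is the relative change between the 2004 and 2008 per-capita values; for example, for the USA:

\Delta\% = \frac{15.07 - 11.28}{11.28} \times 100 \approx 33.6

which matches the tabulated 33.60 (and, analogously, -10.63 for Japan).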
This research shows the main worldwide research trends of NST studies, though the results may have some limits. The main one is that Scopus retrieves only the first 160 results for each item (Source, Affiliation, Keyword, etc.); in addition, Scopus is a relatively new instrument for scientific literature classification and not all nanotechnology research might be included (though this limit is common to other web-based datasets).
Although "nanotechnology is still in an early phase of development" (Renn and Roco, 2006, p. 153), these results show the currently growing applications of nanotechnology in some key scientific sectors, such as Chemistry and Medicine16, which may imply some ethical and social issues that governments might need to face in the near future in order to support a sustainable pattern of technological innovation and economic growth.
Renn and Roco (2006, p. 154) argue:
As with other new technology, nanotechnology evokes enthusiasm and high expectations:
for new progress in science and technology, new productive applications and economic
potential on one hand; and for concerns about risks and unforeseen side effects on the other.
Renn and Roco (2006) also point out the general risks associated with nanotechnology applications, showing that nanotechnology innovation proceeds ahead of the policy and regulatory contexts: "Governance gap is […] especially significant for the several 'active'
nanoscale structures and nanosystems that […] have the potential to affect not only the
human health and the environment but also aspects of social lifestyle human identity and
cultural values” (p. 153, original emphasis). Robinson (2009) describes the notion of
“Responsible Research and Innovation of nanotechnology as an opportunity to develop support
tools for exploring potential co-evolutions of nanotechnology and governance
arrangements. This involved the inclusion of pre-engagement analysis of potential co-
evolutions in the form of scenarios into interactive workshop activities with the aim of
enabling multi-stakeholder anticipation of the complexities of co-evolution” (p. 1222,
original emphasis).

16 According to de Miranda Santo et al. (2006), "many areas will suffer impacts caused by Nanoscience and Nanotechnology […] as health, chemistry and petrochemicals, computing, Energy, agribusiness, metallurgy, textiles, environmental protection, among other" (p. 1020).
There is no doubt that information analysis and foresight studies of research trends and scientific collaboration in NSTs are hard work, since this technological system is characterized by "interdisciplinarity" and "pervasiveness" (Salerno et al., 2008, pp. 1206, 1208 and 1220, passim) in the current disequilibrium phase of growth. In the presence of these scientific and analytical issues, further research on these research trends is needed to strengthen this important topic in the economics of innovation, in order to design provident innovation policy and governance practices supporting these "new converging innovations" (cf. Bainbridge and Roco, 2006) within the technological system of nanotechnology, aimed at driving sustainable paths of growth for modern economies.
6. Acknowledgements
The authors thank Prof. S. Coluccia and Dott. L. Bertinetti (University of Torino, Italy) for
helpful comments and suggestions as well as Prof. S. Rolfo of CERIS-CNR for supporting
this research field. Ugo Finardi acknowledges the continuous support of Prof. Salvatore
Coluccia and the present support of Prof. Livio Battezzati (both from University of Torino,
Italy). Mario Coccia thanks the Ceris-CNR staff. The present work is an extension and re-elaboration of our paper "Current trends in nanotechnology research across worldwide geo-economic players" published in The Journal of Technology Transfer (doi: 10.1007/s10961-
011-9219-6); the Editor-in-chief of The Journal of Technology Transfer, Prof. Al Link, is
acknowledged. The authors in parentheses (MC: Mario Coccia, UF: Ugo Finardi and DM:
Diego Margon) have made substantial contributions to the following tasks of research:
Conception (MC); Design (MC and UF); theoretical framework (UF); acquisition of data (UF
and DM); modeling and analysis of data (MC); elaboration data and graphs (DM),
interpretation of data (MC and UF); drafting of the manuscript (MC and UF); critical
revision of the manuscript for important intellectual content (MC); statistical analysis (MC),
supervision (MC). Usual disclaimer applies.
Mario Coccia is an economist at the National Research Council of Italy (Ceris-CNR), Georgia
Institute of Technology (Atlanta, USA), and visiting professor of industrial organization at the
University of Piemonte Orientale (Italy). He has been research fellow at the Max Planck
Institute of Economics (Germany), visiting professor at the Polytechnics of Torino (Italy) and
University of Piemonte Orientale “A. Avogadro”, visiting researcher at the University of
Maryland (College Park, USA), Institute for Science and Technology Studies at the University
of Bielefeld (Germany), and Yale University. He has written extensively on Economics of
Innovation and Science, Technometrics, Technological and Economic Forecasting; his research
publications include more than one hundred and fifty papers in seven disciplines.
Ugo Finardi holds a MSc in Industrial Chemistry and a Ph.D. in Materials Sciences and
Technology. He is at present a Research Assistant at the Department of Inorganic, Physical and
Materials Chemistry at the University of Torino and Fellow of Ceris-CNR. He performs
research in the fields of Innovation Studies and Management of Research, with a particular
focus on research and industrialization of new materials, technology transfer and regional
systems of innovation.
Diego Margon is a technician at the National Research Council of Italy (Ceris-CNR). He is
specialized in data collection and data analysis applying statistical software packages, and
has published several technical reports about technological topics.

7. Appendix A
Macro Subject Area | Subject Area (S.A.) | Total papers in S.A. | % | Total papers in Macro S.A.
Material Science | Materials Science | 117,808 | 29.46 | 117,808
Chemistry and Medicine | Biochemistry, Genetics and Molecular Biology | 14,471 | 3.62 |
Chemistry and Medicine | Chemical Engineering | 24,617 | 6.16 |
Chemistry and Medicine | Chemistry | 56,329 | 14.09 |
Chemistry and Medicine | Dentistry | 212 | 0.05 |
Chemistry and Medicine | Health Professions | 376 | 0.09 |
Chemistry and Medicine | Immunology and Microbiology | 889 | 0.22 |
Chemistry and Medicine | Medicine | 5,677 | 1.42 |
Chemistry and Medicine | Veterinary | 42 | 0.01 |
Chemistry and Medicine | Neuroscience | 336 | 0.08 |
Chemistry and Medicine | Nursing | 30 | 0.01 |
Chemistry and Medicine | Pharmacology, Toxicology and Pharmaceutics | 3,855 | 0.96 | 106,834
Physics and Earth Sciences | Earth and Planetary Sciences | 1,555 | 0.39 |
Physics and Earth Sciences | Physics and Astronomy | 88,418 | 22.11 | 89,973
Engineering | Engineering | 65,421 | 16.36 | 65,421
Others (Information and Mathematics Sciences) | Mathematics | 2,061 | 0.52 |
Others (Information and Mathematics Sciences) | Computer Science | 5,794 | 1.45 |
Others (Information and Mathematics Sciences) | Decision Sciences | 86 | 0.02 | 7,941
Others (Social and Economic Sciences) | Arts and Humanities | 266 | 0.07 |
Others (Social and Economic Sciences) | Business, Management and Accounting | 562 | 0.14 |
Others (Social and Economic Sciences) | Economics, Econometrics and Finance | 82 | 0.02 |
Others (Social and Economic Sciences) | Multidisciplinary | 2,412 | 0.60 |
Others (Social and Economic Sciences) | Psychology | 75 | 0.02 |
Others (Social and Economic Sciences) | Social Sciences | 680 | 0.17 | 4,077
Others (Energy) | Energy | 3,921 | 0.98 | 3,921
Others (Environmental Science) | Agricultural and Biological Sciences | 770 | 0.19 |
Others (Environmental Science) | Environmental Science | 3,086 | 0.77 | 3,856
TOTAL |  | 399,831 | 100.00 | 399,831
Note: Scopus classifies journals in major subject areas, e.g. “Energy”. Journals can be allocated to
multiple subject areas as appropriate to their scope. The subject areas contain scientific products
concerning nanotechnology studies.
Table 1A. Scientific output in NSTs studies over 1996-2008 per subject areas and macro
subject areas
Year | China (Labs / Scientific products*) | Europe (Labs / Scientific products*) | Japan (Labs / Scientific products*) | South Korea (Labs / Scientific products*) | USA-Canada (Labs / Scientific products*)
1996 | 59 / 210 | 128 / 675 | 117 / 430 | 20 / 37 | 128 / 673
1997 | 91 / 312 | 134 / 856 | 122 / 483 | 28 / 51 | 132 / 700
1998 | 97 / 414 | 139 / 874 | 125 / 519 | 33 / 68 | 133 / 670
1999 | 105 / 467 | 137 / 1135 | 118 / 645 | 48 / 124 | 133 / 841
2000 | 113 / 612 | 142 / 1234 | 109 / 621 | 55 / 159 | 130 / 878
2001 | 115 / 780 | 144 / 1414 | 116 / 848 | 73 / 260 | 142 / 1294
2002 | 114 / 1185 | 140 / 2122 | 109 / 1214 | 82 / 425 | 149 / 2264
2003 | 112 / 2001 | 144 / 3404 | 107 / 1993 | 80 / 864 | 137 / 3696
2004 | 123 / 3070 | 148 / 4313 | 112 / 2836 | 86 / 1330 | 142 / 3607
2005 | 132 / 4476 | 143 / 5167 | 113 / 3607 | 84 / 1705 | 141 / 4375
2006 | 132 / 5760 | 147 / 5280 | 118 / 3780 | 90 / 2460 | 143 / 4601
2007 | 135 / 3324 | 147 / 3556 | 112 / 1834 | 89 / 1363 | 140 / 3301
2008 | 133 / 4864 | 151 / 4980 | 115 / 2534 | 89 / 2000 | 149 / 4819
Total 1996-2008 | 1,461 / 27,475 | 1,844 / 35,010 | 1,493 / 21,344 | 857 / 10,846 | 1,799 / 31,719
* Scientific products are papers, proceedings, etc.
Table 2A. Cumulative NSTs research labs and their scientific products in nanotechnology
studies over 1996-2008 across geo-economic areas

[Figure 1A: line chart; x-axis: years 1996-2002; y-axis: absolute values (0-1800); series: USA-Canada, South Korea, Japan, Europe, China]
Fig. 1A. Research trend per geo-economic areas measured by number of scientific products concerning NSTs studies classified in Chemistry and Medicine over 1996-2002 (absolute values)

[Figure 2A: line chart; x-axis: years 2002-2008; y-axis: absolute values (0-25000); series: USA-Canada, South Korea, Japan, Europe, China]
Fig. 2A. Research trend per geo-economic areas measured by number of scientific products concerning NSTs studies classified in Chemistry and Medicine over 2002-2008 (absolute values)
8. References
Avenel E., Favier A.V., Ma S., Mangematin V., Rieu C. (2007), "Diversification and hybridization in firm knowledge bases in nanotechnologies", Research Policy, vol. 36, n. 6, pp. 864-870.
Bainbridge W.S., Roco M.C. (Eds.) (2006), Managing nano-bio-info-cogno innovations,
converging technologies in society, Springer, Berlin.
Balzani V. (2005), “Nanoscience and Nanotechnology: A personal View of a Chemist”, Small,
vol. 1, n. 3, pp. 278-283.
Bertinetti L., Tampieri A., Landi E., Ducati C., Midgley P.A., Coluccia S., Martra G. (2006),
“Surface structure, hydration, and cationic sites of nanohydroxyapatite: UHR-TEM,
IR, and microgravimetric studies”, Journal of Physical Chemistry C, vol. 111, n. 10, pp.
4027-4035.
Bozeman B., Laredo P., Mangematin V. (2007), “Understanding the emergence and
deployment of “nano” S&T”, Research Policy, vol. 36, n. 6, pp. 807-812.
Braun T., Schubert A., Zsindely S. (1997), "Nanoscience and nanotechnology on balance", Scientometrics, vol. 38, n. 2, pp. 321-325.
Celotti G., Tampieri A., Sprio S., Landi E., Bertinetti L., Martra G., Ducati C. (2006),
“Crystallinity in apatites: how can a truly disordered fraction be distinguished
from nanosize crystalline domains?”, Journal of Materials Science-Materials in
Medicine, vol. 17, n. 11, pp. 1079-1087.
Coccia M. (2012), "Evolutionary dynamics of the production of nanotechnology research across worldwide economic players", Technology Analysis & Strategic Management, forthcoming.
Coccia M. (2011) “Driving scientific forces for current and future micro-technological
revolutions and social transformations” Mimeo at Georgia Institute of Technology
(Atlanta, USA).
Coccia M., Finardi U., Margon D. (2011), “Current trends in nanotechnology research across
worldwide geo-economic players”, Journal of Technology Transfer, DOI
10.1007/s10961-011-9219-6
de Miranda Santo M., Massari Coelho G., Maria dos Santos D., Fellows Filho L. (2006), “Text
mining as a valuable tool in foresight exercises: A study on nanotechnology”,
Technological Forecasting and Social Change, vol. 73, n. 8, October, pp. 1013-1027.
Eigler D.M., Schweizer E.K. (1990), “Positioning single atoms with a scanning tunnelling
microscope”, Nature, n. 344, pp. 524-526.
Evangelisti C., Vitulli G., Schiavi S., Vitulli M., Bertozzi S., Salvadori P., Bertinetti L., Martra
G. (2007), “Nanoscale Cu supported catalysts in the partial oxidation of
cyclohexane with molecular oxygen”, Catalysis Letters, vol. 116, n. 1-2, pp. 57-62.
Feynman R.P. (1960), “There's plenty of room at the bottom”, Engineering and Science vol.
23, Feb., pp. 22-36.
Finardi U. (2011), “Time relations between scientific production and patenting of
knowledge. The case of nanotechnologies”, Scientometrics, vol. 89, n. 1, pp. 37-50
Freeman C., Soete L. (1987), Technical Change and Full Employment, Basil Blackwell, Oxford
(UK).
Glenn J.C. (2006), “Nanotechnology: Future military environmental health considerations”,
Technological Forecasting and Social Change, vol. 73, n. 2, February, pp. 128-137.
Goddard III W., Brenner D., Lyshevski S., Iafrate G. (Eds.) (2007), Handbook of Nanoscience,
Engineering and Technology, Second Edition, Taylor and Francis Group.
Huang Z., H. Chen, Roco M. C., 2004, “Longitudinal patent analysis for nanoscale science
and engineering in 2003: country, institution and technology field analysis based on
USPTO patent database” Journal of nanoparticle research, vol. 6, n. 4, pp. 325-354.
Iijima S. (1991), “Helical microtubules of graphitic carbon”, Nature, n. 354, pp. 56-58.
Islam N. and Miyazaki K. (2009), Nanotechnology innovation system: Understanding hidden
dynamics of nanoscience fusion trajectories, Technological Forecasting & Social
Change, vol. 76, n. 1, pp. 128 - 140
Islam N., Miyazaki K. (2010), “An empirical analysis of nanotechnology research domains”,
Technovation, vol. 30, n. 4, pp. 229-237.
Kostoff R N., Stump J.A., Johnson D., Murday J.S., Lau C.G.Y., Tolles W.M. (2006), “The
structure and infrastructure of the global nanotechnology literature”, Journal of
Nanoparticle Research, vol. 8, n. 3-4, pp. 301-321.
Kostoff R.N., Koytcheff R.G., Lau C.G.Y. (2007), “Global nanotechnology research metrics”,
Scientometrics, vol. 70, n. 3, pp. 565-601.
Kostoff R.N., Koytcheff R.G., Lau C.G.Y. (2007a), “Global nanotechnology research literature
overview”, Technological Forecasting & Social Change, vol. 74, n. 9, pp. 1733-1747.
Kroto H.W., Heath J.R., O'Brien S.C., Curl R.F., Smalley R.E. (1985), “C60:
Buckminsterfullerene”, Nature, n. 318, pp. 162-163.
Leydesdorff L. (2008), “The delineation of nanoscience and nanotechnology in terms of
journals and patents: A most recent update”, Scientometrics, vol. 76, n. 1, pp. 159-167.
Leydesdorff L., Zhou P. (2007), “Nanotechnology as a field of science: its delineation in
terms of journals and patents”, Scientometrics, vol. 70, n. 3, pp. 693-713.
Libaers D., Meyer M., Geuna A. (2006), “The Role of University Spinout Companies in an
Emerging Technology: The Case of Nanotechnology”, The Journal of Technology
Transfer, vol. 31, n. 4, pp. 443-450.
Linstone H.A. (2011), “Three eras of technology foresight”, Technovation vol. 31, n. 2-3, pp. 69 - 76

Pilkington A., Lee L.L., Chan C.K., Ramakrishna S. (2009), “Defining key inventors: A
comparison of fuel cell and nanotechnology industries”, Technological Forecasting
and Social Change, vol. 76, n. 1, January, pp. 118-127.
Renn O., Roco M.C. (2006), “Nanotechnology and the need for risk governance”, Journal of
Nanoparticle Research, vol. 8, n. 2, pp. 153-191.
Rickerby D.G., Morrison M. (2007), “Nanotechnology and the environment: A European
perspective”, Science and Technology of Advanced Materials, vol. 8, n. 1-2, pp. 19-24.
Robinson D.K.R. (2009), “Co-evolutionary scenarios: An application to prospecting futures
of the responsible development of nanotechnology” Technological Forecasting and
Social Change, vol. 76, n. 9, pp. 1222-1239.
Roco M.C. (2005), "International perspective on government nanotechnology funding in
2005”, Journal of Nanoparticle Research, vol. 7, n. 6, pp. 707-712.
Roco M.C. (2007), “National Nanotechnology Initiative. Past, Present, Future”, in W. Goddard
III, D. Brenner, S. Lyshevski & G. Iafrate (Eds), Handbook of Nanoscience, Engineering
and Technology, Second Edition, Taylor and Francis Group, Chp. 3, pp. 1-26.
Roco M.C. (2008), “Possibilities for global governance of converging technologies”, Journal of
Nanoparticle Research, vol. 10, n. 1, pp. 11 – 29
Rogers, J.D. (2010) “Citation analysis of nanotechnology at the field level: implications of
R&D evaluation” Research evaluation, vol. 19, n.4, pp. 281-290.
Salerno M., Landoni P., Verganti R. (2008), “Designing foresight studies for Nanoscience
and Nanotechnology (NST) future developments”, Technological Forecasting and
Social Change, vol. 75, n. 8, October, pp. 1202-1223.
Schultz L.I. and Joutz F.L. (2010), “Methods for identifying emerging General Purpose
Technologies: a case study of nanotechnologies”, Scientometrics vol. 85, n. 1, pp. 155 - 170
Scopus (2010), http://www.scopus.com, accessed April 2010.
Shea C.M., Grindle G. and Elmslie B. (2011), “Nanotechnology as a general-purpose
technology: empirical evidence and implications”, Technology Analysis & Strategic
Management, vol. 23, n. 2, pp. 175 – 192
Siegel R.W., Hu E., Roco M.C. (1999), Nanostructure Science and Technology, Springer,
Dordrecht, The Netherlands.
Taniguchi N. (1974), “On the Basic Concept of Nano-Technology”, Proceedings International
Conference Production Engineering, Part II, Japan Society of Precision Engineering,
Tokyo.
Tegart G. (2009), “Energy and nanotechnologies: Priority areas for Australia's future”,
Technological Forecasting and Social Change, vol. 76, n. 9, November, pp. 1240-1246.
US National Nanotechnology Initiative (2010), http://www.nano.gov/, accessed June 2010.
van Merkerk R.O., van Lente H. (2005), “Tracing emerging irreversibilities in emerging
technologies: The case of nanotubes”, Technological Forecasting and Social Change,
vol. 72, n. 9, November, pp. 1094-1111.
Yanagisawa T., Shimizu T., Kuroda K., Kato C. (1990), “The preparation of
Alkyltrimethylammonium-Kanemite Complexes and Their Conversion to
Microporous Materials”, Bulletin of the Chemical Society of Japan, vol. 63, n. 4, pp.
988-992.
Zecchina A., Groppo E., Bordiga S. (2007), “Selective Catalysis and Nanoscience: An
Inseparable Pair”, Chemistry- A European Journal, vol. 13, n. 9, pp. 2440–2460.
Part 3
Robotics

7
Improving Accuracy and Flexibility
of Industrial Robots Using Computer Vision
Petar Maric and Velibor Djalic
University of Banja Luka, Faculty of Electrical Engineering
Bosnia and Herzegovina
1. Introduction
A high level of positioning accuracy is an essential requirement in a wide range of
industrial robots’ applications. Robot calibration is a process by which robot positioning
accuracy can be improved. During manipulator control system design, and periodically in the course of task performance, manipulator geometry calibration is required. Nowadays
robot calibration plays an increasingly important role in robot production as well as in
robot implementation and operation within computer-integrated manufacturing where
the simulated robot must reflect the real robot geometry (Elatta, et al. 2004; Khalil &
Dombre, 2004; Perez, et al. 2009).
Until the end of the twentieth century, algorithms for manipulator calibration using an open kinematic chain were developed. The main constraint in the practical implementation of these algorithms was the requirement for accurate measurement of the manipulator end-effector. A variety of measurement techniques, ranging from coordinate measuring machines, proximity measuring systems and theodolites to laser tracking interferometer systems, have been employed for calibration tasks. These systems were very expensive, tedious to use or had a low working volume (Driels, 1994; Khalil, et al. 1995; Vincze, et al. 1994).
To overcome the above limitations, a mobile closed kinematic chain method has been proposed that obviates the need for pose measurement by forming the manipulator into a mobile closed kinematic chain (Bennett & Hollerbach, 1991). However, using the closed kinematic chain reduces the number of parameters that can be determined, as well as the speed of convergence.
Compared to mechanical measuring devices, a camera system is low cost, fast, automated, user-friendly, non-invasive and can provide high accuracy (Zhuang & Roth, 1994). That is why, in the last ten years, research has re-focused on the calibration of open-chain manipulators with the application of computer vision. If two calibrated cameras observe the same scene point, its 3D coordinates can be computed as the intersection of the two rays originating from that scene point (the principle of stereo vision). In that case, the position of the point in the 3D scene can be calculated from the disparity of the two image points. A reliable solution of this correspondence problem is a key step in any stereo vision system, and in automatic manipulator calibration. Automatic solution of the correspondence problem is under extensive exploration; so far there is no solution in the general case. The inherent ambiguity of the correspondence problem can, in practical cases, be reduced using several constraints.
This chapter focuses on a procedure that allows automated calibration of the manipulator. The 3D coordinates of the manipulator's end-effector are automatically, accurately and reliably determined using stereo cameras for each position of the manipulator. The procedure is a combination of algorithms based on the Scale Invariant Feature Transform (SIFT), the Canny edge detector and area-based correlation. Analysis, experimental confirmation and illustration are given as proof that this problem cannot be resolved using only one of the mentioned algorithms. Based on an analysis of these algorithms, their different characteristics (advantages and disadvantages) were combined to obtain a completely automated and precise determination of the manipulator's end-effector using a stereo camera system. The complete procedure is a unique algorithm which can be easily deployed in the control of classical industrial manipulators and structurally flexible manipulators. Therefore, the accuracy and flexibility of industrial robots can be improved without additional costs.
2. Robot manipulator kinematic calibration
The calibration of the geometric parameters is based on estimating the parameters by minimizing the difference between a function of the real robot variables and the corresponding mathematical model. The geometric parameter estimation based on the differential model is the most popular one. Many authors have presented open-loop methods that estimate the kinematic parameters of manipulators on the basis of the joint coordinates and the Cartesian coordinates of the end-effector measurements (Jackson, et al. 1995; Maric & Potkonjak, 1999; Renders, et al. 1991). It is assumed that there is a measuring device that can sense the position (and sometimes the orientation) of the end-effector in Cartesian coordinates.
Measurement of the robot manipulator end-effector pose (i.e. position and orientation) in the reference coordinate system is unquestionably the most critical step towards a successful open-loop robot calibration. A variety of measurement techniques, ranging from coordinate measuring machines, proximity measuring systems, theodolites and laser tracking interferometer systems to inexpensive customized fixtures, have been employed for calibration tasks (Vincze, et al. 1994; Driels, 1994). These systems are very expensive, tedious to use or have a low working volume. In general, the measurement system should be accurate, inexpensive and should operate automatically. The goal is to minimize the calibration time and the robot unavailability.
2.1 Manipulator geometry modeling
Generally, kinematic model-based calibration is considered a global calibration method that improves the robot's accuracy across the whole volume of the robot workspace. A kinematic model is a mathematical description of the manipulator geometry. The model gives the relation between the geometric parameters, the joint variables and the end-effector position. Many kinematic models have been proposed to perform robot calibration. The most popular method was established by Denavit and Hartenberg (the D-H method), and for this reason we will use this notation. The method is based on homogeneous transformation matrices and on establishing coordinate systems on each joint axis. Following this description of the kinematic model, the basic coordinate systems are defined as follows (Fig. 1):


Fig. 1. Coordinate systems assignment for robot modelling
O_B X_B Y_B Z_B – base coordinate system of the manipulator;
O_E X_E Y_E Z_E – end-effector (tool) coordinate system of the manipulator (we denote the origin O_E as the endpoint of the robot);
O_i X_i Y_i Z_i (i = 1, …, n) – coordinate system fixed to the i-th link of the manipulator (O_n X_n Y_n Z_n – coordinate system fixed to the terminal link).
The original D-H representation of a rigid link depends on four geometric parameters. The parameters a, d, α and θ denote the manipulator link length, link offset, joint twist and joint angle, respectively. The composite 4x4 homogeneous transformation matrix A_{i-1,i}, known as the D-H transformation matrix for the adjacent coordinate systems i-1 and i, is:

A_{i-1,i} = \begin{bmatrix} \cos\theta_i & -\cos\alpha_i \sin\theta_i & \sin\alpha_i \sin\theta_i & a_i \cos\theta_i \\ \sin\theta_i & \cos\alpha_i \cos\theta_i & -\sin\alpha_i \cos\theta_i & a_i \sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}    (1)

The homogeneous matrix A
B,i
which specifies the location of the i
th
coordinate system with
respect to the base coordinate system is the chain product of successive coordinate
transformation matrices A
i-1,i
, and expressed as:
. (2)

Particularly, for i=n we have A
B,n
matrix which specifies the position and orientation of the
end-effector of the manipulator with respect to the base coordinate system. Matrix A
B,n
is a
function of the 4n geometrical parameters which are constant for fixed robot geometry, and
n joint coordinates that change their value when manipulator moves.
Moreover, a robot is not intended to perform a single operation at the workcell, it has
interchangeable different tools. In order to facilitate the programming of the task, it is more
practical to have transformation matrix defining the tool coordinate system with respect to
the terminal link coordinate system A
n,E
.
...
, ,1 1,2 1,
A A A A
B i B i i
=
÷

Thus, the transformation matrix A_{w,E} can be written as:

A_{w,E} = A_{w,B} A_{B,n} A_{n,E}    (3)

Since the world coordinate system can be chosen arbitrarily by the user, six parameters are
needed to locate the robot base relative to the world coordinate system. From independence
to some manipulator parameters it follows that consecutive coordinate systems are
represented at most by four independent parameters.
Since the end-effector coordinate system can be defined arbitrarily with respect to the
terminal link coordinate system (O
n
X
n
Y
n
Z
n
), six parameters are needed to define the matrix
A
n,E
. If we extend the robot notation to the definition of the end-effector coordinate system,
it follows that the end-effector coordinate system introduces four independent parameters.
For more details the reader can refer to (Khalil, 2004).
Based on (1), (2), (3) dependence between joint coordinates and geometrical parameters, and
endpoint location of the tool can be written as:
(4)

where x, q and g_0 denote the end-effector position vector expressed in the world coordinate system, the vector of joint variables, and the vector of geometric parameters, respectively. The dimension of the vector x is 6 if measurements can be made of both the location and the orientation of the end-effector. However, most frequently only the location of the endpoint is measured, and therefore the dimension of the vector x is 3. The dimension of the vector q is equal to the number of DOF (degrees of freedom) of the manipulator. The dimension of the vector g_0 is at most 4n + 6.
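As an illustration of equations (1)-(4), the following minimal sketch (not the authors' code; the link values are assumed purely for the example) builds the D-H matrices of equation (1), chains them as in equation (2) and returns the endpoint position x = f(q, g).

```python
# Minimal D-H forward kinematics sketch for equations (1)-(4).
# g holds one (a, d, alpha, theta_offset) quadruple per link; values are illustrative.
import numpy as np

def dh_matrix(a, d, alpha, theta):
    """Homogeneous D-H transform A_{i-1,i} of equation (1)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -ca * st,  sa * st, a * ct],
                     [st,  ca * ct, -sa * ct, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(q, g):
    """Chain product of equation (2); returns the endpoint position (equation (4))."""
    A = np.eye(4)
    for theta_i, (a, d, alpha, theta_off) in zip(q, g):
        A = A @ dh_matrix(a, d, alpha, theta_i + theta_off)
    return A[:3, 3]

# Illustrative 3-DOF geometry (assumed values, not from the chapter):
g = [(0.30, 0.0, 0.0, 0.0), (0.25, 0.0, 0.0, 0.0), (0.10, 0.0, 0.0, 0.0)]
q = np.deg2rad([30.0, -45.0, 10.0])
print(forward_kinematics(q, g))
```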
2.2 Geometric parameters estimation based on the differential model
The calibration of the geometric parameters is based on estimating the parameters
minimizing the difference between a function of the real robot variables and corresponding
mathematical model. Many authors (Jackson, et al. 1995; Khalil, 1991; Maric & Potkonjak,
1999; Renders, et al. 1991) presented open-loop methods that estimate the kinematic
parameters of manipulators performing on the basis of joint coordinates and the Cartesian
coordinates of the end-effector measurements. The joint encoders' output readings are the joint coordinates. It is assumed that there is a measuring device that can sense the position (and sometimes the orientation) of the end-effector in Cartesian coordinates.
A mobile closed kinematic chain method has been proposed that obviates the need for pose
measurement by forming a manipulator into a mobile closed kinematic chain (Bennett &
Hollerbach, 1991; Khalil, et al. 1995). Self-motion of the mobile closed chain places the manipulator in a number of configurations, and the kinematic parameters are determined from the joint position readings alone.
The calibration using the end-effector coordinates (the open-loop method) is the most popular one. The model represented by equation (4) is nonlinear in g_0, and we must linearize it in order to apply linear estimators. The differential model provides the differential variation of the location of the end-effector as a function of the differential variation of the geometric parameters. The difference between the measured (x) and the calculated (x_m) end-effector location represents the minimized criterion function. Let Δx = x - x_m and Δg = g_0 - g be the pose error vector of the end-effector and the geometric parameter error vector, respectively (g – the vector of geometric parameter estimates). From equation (4), the calibration model can be represented by the linear differential equation

\Delta x = x - x_m = J_g \, \Delta g    (5)

where:
g is the (p x 1) vector of geometric parameter estimates,
Δx = x - x_m is the (r x 1) pose error vector of the end-effector,
Δg = g_0 - g is the geometric parameter error vector,
J_g is the (r x p) sensitivity matrix relating the variation of the endpoint position to the variation of the geometric parameters (the calibration Jacobian matrix) (Maric & Potkonjak, 1999; Khalil, et al. 1991).
To estimate Δg we apply equation (5) for a number of manipulator configurations. This gives the system of equations:

\Delta X = \Phi \, \Delta g + E    (6)

where:

\Delta X = \begin{bmatrix} \Delta x_1 \\ \Delta x_2 \\ \vdots \\ \Delta x_k \end{bmatrix}, \qquad \Phi = \begin{bmatrix} J_g(q_1, g) \\ J_g(q_2, g) \\ \vdots \\ J_g(q_k, g) \end{bmatrix}    (7)

and E is the error vector, which includes the effect of unmodeled non-geometric parameters:

E = \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_k \end{bmatrix}    (8)

Equation (6) can be used to estimate the geometric parameters iteratively. This equation is solved to get the least-squares error solution for the current parameter estimate. The least-squares solution can be obtained from:

\Delta g = (\Phi^T \Phi)^{-1} \Phi^T \Delta X    (9)

At each iteration, the geometric parameters are updated by adding Δg to the current value of g:

g = g + \Delta g    (10)

By solving equations (9) and (10) alternately, the procedure is iterated until Δg approaches zero.
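A numerical sketch of this iteration is given below; it is an illustration under simplifying assumptions (a numerical Jacobian, a generic model function f(q, g) with g treated as a flat parameter vector, and noiseless synthetic measurements), not the authors' implementation.

```python
# Sketch of the iterative least-squares estimation of equations (5)-(10).
import numpy as np

def calibration_jacobian(f, q, g, eps=1e-6):
    """Numerical J_g of equation (5): sensitivity of the endpoint to the parameters g."""
    x0 = f(q, g)
    J = np.zeros((x0.size, g.size))
    for j in range(g.size):
        gp = g.copy()
        gp[j] += eps
        J[:, j] = (f(q, gp) - x0) / eps
    return J

def calibrate(f, configurations, measurements, g0, iters=20, tol=1e-9):
    """Alternate equations (9) and (10) until Delta g approaches zero."""
    g = np.asarray(g0, dtype=float)
    for _ in range(iters):
        Phi = np.vstack([calibration_jacobian(f, q, g) for q in configurations])    # eq. (7)
        dX = np.concatenate([x - f(q, g) for q, x in zip(configurations, measurements)])
        dg, *_ = np.linalg.lstsq(Phi, dX, rcond=None)    # least-squares solution, eq. (9)
        g = g + dg                                       # parameter update, eq. (10)
        if np.linalg.norm(dg) < tol:
            break
    return g

# Tiny demo with an assumed planar two-link model x = f(q, g), g = [L1, L2]:
def planar_fk(q, g):
    L1, L2 = g
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

true_g = np.array([0.52, 0.31])
qs = [np.array(c) for c in [(0.1, 0.4), (0.8, -0.3), (1.2, 0.9), (-0.5, 0.7)]]
xs = [planar_fk(q, true_g) for q in qs]                       # synthetic "measurements"
print(calibrate(planar_fk, qs, xs, g0=np.array([0.5, 0.3])))  # approaches true_g
```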
Calibration of a manipulator is an identification process and, hence, one should take a careful look at the identifiability of the model parameters (Bennett & Hollerbach, 1991; Khalil, et al. 1991). A general method to determine these parameters has been proposed in (Bennett & Hollerbach, 1991). Determination of the identifiable (base) geometric parameters is based on the rank of the matrix Φ. Some parameters of the manipulator related to the locked passive joints may become unidentifiable in the calibration algorithm due to the mobility constraints. In general, this reduces the number of identifiable parameters for the closed-loop kinematic chain approach compared with the open-loop case.
As the measurement process is generally time consuming, the goal is to use a set of manipulator configurations with a limited number of points that are optimal for the parameter estimation. Furthermore, the goal is to minimize the effect of noise on the parameter estimation. The condition number of the matrix Φ gives a good estimate of the persistent excitation (Khalil, 2004). Therefore, much work has been done on finding the so-called optimal excitation. The task of selecting the optimum manipulator configurations to be used during the calibration is discussed and solutions are proposed in (Bay, 1993; Bennett & Hollerbach, 1991; Khalil, et al. 1995). It is worth noting that most geometric calibration methods give an acceptable condition number using random configurations. The paper (Sun & Hollerbach, 2008) presents an updated algorithm to reduce the complexity of computing an observability index for the kinematic calibration of robots. An active calibration algorithm is developed to include this updated algorithm in the pose selection process.
3. Computer vision
Computer vision has developed significantly over the last ten years and has now become a standard automation component. It represents a qualitative leap in the area of metrology and sensing because it provides a remarkable amount of information about our surroundings, without direct physical contact (Torreão, 2011).
Calibration of the cameras is the necessary first step in using a vision system. Camera calibration is the process of determining the internal camera (geometric and optical) characteristics and the 3D position and orientation of the camera frame relative to a world coordinate system. If the camera calibration is performed, then for every scene point in the world coordinate system it is possible to determine the position of its image point in the image plane.
The inverse perspective transformation is very important for computer vision applications in industrial automation. If two calibrated cameras observe the same scene point, its 3D coordinates can be computed as the intersection of the two rays originating from that scene point. The epipolar geometry is the basis of a system with two cameras (the principle of stereo vision).
A special relative position of the stereo cameras is called the rectified configuration. In that case, the position of the point in the 3D scene can be calculated from the disparity of the two image points.

3.1 Camera model
This section describes the camera model. Fig. 2 illustrates the basic geometry of the camera model. The camera performs a transformation from the 3D projective space to the 2D projective space. The projection is carried out by an optical ray originating from (or reflected by) a scene point P. The optical ray passes through the optical center O_c and hits the image plane at the point p.

Fig. 2. The basic geometry of the camera model
Before describing the perspective transformation and the camera model, let us define the basic coordinate systems. The coordinate frames are defined as follows:
O_w X_w Y_w Z_w – world coordinate system (fixed reference system), where O_w represents the principal point. The world coordinate system is assigned in any convenient location.
O_c X_c Y_c Z_c – camera-centered coordinate system, where O_c represents the principal point at the optical center of the camera. The camera coordinate system is the reference system used for camera calibration, with the Z_c axis the same as the optical axis.
O_i X_i Y_i Z_i – image coordinate system, where O_i represents the intersection of the image plane with the optical axis. The X_i Y_i plane is parallel to the X_c Y_c plane.
Let (x_w, y_w, z_w) be the 3D coordinates of the object point P in the 3D world coordinate system, and (u, v) the position of the corresponding pixel in the digitized image. The projection of the point P to the image point p may be represented by a 3x4 projection matrix (or camera matrix) M (Tsai, 1987; Zhuang, 2008):

p = K [R \;\; T] P = M P    (11)

The matrix

K = \begin{bmatrix} \alpha & 0 & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix}    (12)
is called the internal (intrinsic) camera transformation matrix. The parameters α, β, u_0 and v_0 are the so-called internal distortion-free camera parameters.
R and T, a 3x3 orthogonal matrix representing the camera's orientation and a translation vector representing its position, are given by:

R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}, \qquad T = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}    (13)

respectively. The parameters r_{11}, r_{12}, r_{13}, r_{21}, r_{22}, r_{23}, r_{31}, r_{32}, r_{33}, t_x, t_y and t_z are the external (extrinsic) parameters and represent the camera's position and orientation referred to the world coordinate system.
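To make equations (11)-(13) concrete, the short sketch below (all parameter values are assumed, purely for illustration) builds M = K[R T] and projects one world point to pixel coordinates.

```python
# Illustrative pinhole projection p = K [R T] P = M P (equations (11)-(13)).
import numpy as np

# Assumed intrinsic parameters: alpha, beta (pixels) and principal point (u0, v0).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                          # camera axes aligned with the world frame (assumption)
T = np.array([[0.0], [0.0], [2.0]])    # camera placed 2 m along the optical axis (assumption)

M = K @ np.hstack((R, T))              # 3x4 projection matrix of equation (11)

P = np.array([0.1, -0.05, 0.5, 1.0])   # homogeneous world point (x_w, y_w, z_w, 1)
p = M @ P
u, v = p[0] / p[2], p[1] / p[2]        # pixel coordinates after dehomogenization
print(u, v)
```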
Projection in an ideal imaging system is governed by the pin-hole model. Real optical systems suffer from several types of distortion. The first one is caused by the real lens spherical surfaces and manifests itself as a radial position error. Radial distortion causes an inward or outward displacement of a given image point from its ideal (distortion-free) location. This type of distortion is mainly caused by a flawed radial curvature of the lens elements. A negative radial displacement (a point is imaged at a distance from the principal point that is smaller than predicted by the distortion-free model) of the image point is referred to as barrel distortion. A positive radial displacement (a point is imaged at a distance from the principal point that is larger than predicted by the distortion-free model) of the image point is referred to as pin-cushion distortion. The displacement increases with distance from the optical axis. This type of distortion is strictly symmetric about the optical axis. Fig. 3 illustrates the effect of radial distortion.

Fig. 3. Effect of radial distortion illustrated on a grid
The radial distortion of a perfectly centered lens is usually modelled using the equations:

\Delta x_r = x_i (k_1 r^2 + k_2 r^4 + \ldots)    (14)

\Delta y_r = y_i (k_1 r^2 + k_2 r^4 + \ldots)    (15)
where r is the radial distance from the principal point of the image plane, and k_1, k_2, … are the coefficients of radial distortion. Only even powers of the distance r from the principal point occur, and typically only the first, or the first and the second, terms in the power series are retained.
The real imaging systems also suffer from tangential distortion, which is at a right angle to the vector from the center of the image. That type of distortion is generally caused by improper lens and camera assembly. Like radial distortion, tangential distortion grows with distance from the center of distortion and can be represented by the equations:

\Delta x_t = -y_i (l_1 r^2 + l_2 r^4 + \ldots)    (16)

\Delta y_t = x_i (l_1 r^2 + l_2 r^4 + \ldots)    (17)
Fig. 4. illustrates the effect of tangential distortion.

Fig. 4. Effect of tangential distortion
The reader is referred to (Tsai, 1987; Sonka, et al. 2008; Weng, et al. 1992) for more elaborate and more complicated lens models.
Note that one can express the distorted image coordinates as a power series using
undistorted image coordinates as variables, or one can express undistorted image
coordinates as a power series in the distorted image coordinates. The r in the above
equations can be either based on actual image coordinates or distortion-free coordinates.
Bearing in mind the radial and tangential distortion, the correspondence between distortion-free and distorted pixel image coordinates can be expressed by:

x_d = x_i + Δx_r + Δx_t,                                                         (18)

y_d = y_i + Δy_r + Δy_t.                                                         (19)
The parameters representing the distortion of an image are k_1, k_2, …, l_1, l_2, … The distortion tends to be more noticeable with wide-angle lenses than with telephoto lenses. Electro-optical systems typically have larger distortions than optical systems made of glass.
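As an illustration only, the following Python sketch evaluates this distortion model, equations (14)-(19), for a normalized image point; the coefficient values and the test point are assumptions chosen for the example, not values taken from the chapter.

def apply_distortion(x_i, y_i, k=(0.12, -0.03), l=(0.001, 0.0)):
    # squared radial distance r^2 from the principal point
    r2 = x_i**2 + y_i**2
    radial = k[0] * r2 + k[1] * r2**2            # k_1 r^2 + k_2 r^4
    dx_r, dy_r = x_i * radial, y_i * radial      # radial displacement, eqs (14)-(15)
    tang = l[0] * r2 + l[1] * r2**2              # l_1 r^2 + l_2 r^4
    dx_t, dy_t = -y_i * tang, x_i * tang         # tangential displacement, eqs (16)-(17)
    return x_i + dx_r + dx_t, y_i + dy_r + dy_t  # distorted coordinates, eqs (18)-(19)

print(apply_distortion(0.3, -0.2))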
3.2 Camera calibration
Camera calibration is considered an important issue in computer vision applications (particularly in robotics). With the increasing need for higher accuracy measurement in computer vision, it has also attracted considerable research effort. The task of camera calibration is to compute the camera projection matrix M from a set of image-scene point correspondences. By correspondences we mean a set {(p_i, P_i)}, i = 1, …, m, where p_i is a homogeneous vector representing an image point and P_i is a homogeneous vector representing the corresponding scene point. Equation (11) gives an important result: the projection of a point P to an image point p by a camera is given by a linear mapping (in homogeneous coordinates):

p = M P.                                                                         (20)
The matrix M is non-square and thus the mapping is many-to-one: all scene points on a ray project to a single image point.
To compute M, a system of homogeneous linear equations has to be solved,

s_i p_i = M P_i,                                                                 (21)

where s_i are scale factors.
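One common way to solve (21) is the direct linear transformation (DLT): each correspondence contributes two homogeneous linear equations in the twelve entries of M, and the stacked system is solved by SVD. The sketch below illustrates that general idea under these assumptions; it is not the specific procedure of the cited works.

import numpy as np

def dlt_projection_matrix(points_3d, points_2d):
    # build two rows of the homogeneous system per correspondence (s_i p_i = M P_i);
    # at least 6 correspondences are needed for a unique solution up to scale
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        P = np.array([X, Y, Z, 1.0])
        rows.append(np.concatenate([P, np.zeros(4), -u * P]))
        rows.append(np.concatenate([np.zeros(4), P, -v * P]))
    A = np.vstack(rows)
    _, _, Vt = np.linalg.svd(A)        # least-squares null vector of A
    return Vt[-1].reshape(3, 4)        # entries of M, determined up to scale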
Camera calibration is performed by observing a calibration object whose geometry in 3D space is known with very good precision. The calibration object usually consists of two or three planes orthogonal to each other. Such approaches require an expensive calibration apparatus. Accurate planar targets are easier to make and maintain than three-dimensional targets. There are a number of techniques which only require the camera to observe a planar pattern shown at a few different orientations (Fig. 5). The calibration points are created by impressing a template of black squares (usually a chess-board pattern) or dots on top of a white planar surface (steel or even a hard book cover (Zhuang, 2008)). The corners of the squares are treated as calibration points. Because the corners are always rounded, it is recommended to measure the coordinates of a number of points along the edges of the squares away from the corners, and then extrapolate the edges to obtain the positions of the corners, which lie on the intersections of adjacent edges.

Fig. 5. Illustration of experimental setup for camera calibration using coplanar set of points
Due to the high accuracy requirement of camera calibration, a sub-pixel estimator is desirable. This is a procedure that attempts to estimate the value of an attribute in the image to greater precision than is normally considered attainable within the restrictions of the discretization. Since CCD cameras have relatively low resolution, interest in sub-pixel methods arises when CCD-based imaging systems are applied to computer integrated manufacturing (Kang, et al. 2008; Perez, et al. 2009).
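For orientation only, the sketch below shows how a planar chess-board target and sub-pixel corner refinement are typically combined using an off-the-shelf library (OpenCV); the file pattern and the 9x6 corner grid are assumptions for the example, and the routine is not the particular two-stage method discussed next.

import cv2, glob
import numpy as np

board = (9, 6)                                            # assumed inner-corner grid of the target
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)   # planar target, z = 0

obj_pts, img_pts = [], []
for name in glob.glob("calib_*.png"):                     # hypothetical calibration images
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        # sub-pixel refinement of the detected corner positions
        term = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), term)
        obj_pts.append(objp)
        img_pts.append(corners)

# closed-form initialization followed by iterative refinement, including distortion
# (assumes at least one view was found above)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)
print("reprojection error:", rms)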
Camera calibration entails solving for a large number of calibration parameters, resulting in a large-scale nonlinear search. An efficient way of avoiding this large-scale nonlinear search is to use a two-stage technique, described in (Tsai, 1987). Methods of this type use, in the first stage, a closed-form solution for most of the calibration parameters and, in the second stage, an iterative solution for the remaining parameters.
In (Weng, et al. 1992) a two-stage approach was adopted with some modification. In the first step, the calibration parameters are estimated using a closed-form solution based on a distortion-free camera model. In the second step, the parameters estimated in the first step are improved iteratively through a nonlinear optimization, taking camera distortion into account. Since the algorithm that computes the closed-form solution is not iterative, it is fast, and a solution is generally guaranteed. In the first step, only points near the optical axis are used. Consequently, the closed-form solution is not affected very much by distortion and is good enough to be used as an initial guess for further optimization. If an approximate solution is given as an initial guess, the number of iterations can be significantly reduced, and the globally optimal solution can be reliably reached.
3.3 Stereo vision
Calibration of one camera and knowledge of the coordinates of one image point allow us to determine a ray in space uniquely (back-projection of a point). Given a homogeneous image point p, we want to find its original point P in the working space. This original point P is not determined uniquely; all points on the scene ray through the image point p are candidates. Here, we will consider how to compute the 3D scene point P from its projections p^i in several cameras, or from its projections p^i in one camera at different positions (different images are denoted by the superscript i). Assume that m views are available, so that we have to solve the linear system

s^i p^i = M^i P,   i = 1, …, m.                                                  (22)

This approach is known as triangulation (it can be interpreted in terms of similar triangles). Geometrically, it is the process of finding the common intersection of m rays given by back-projection of the image points by the cameras. In reality, the image points p^i are corrupted by noise, so the rays will not intersect and the system has no exact solution. We may instead compute P as the scene point closest to all of the skew rays.
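A minimal sketch of this idea, assuming each back-projected ray is given by a camera centre and a unit direction (both obtainable from the calibrated camera matrices); it returns the point minimizing the sum of squared distances to all the rays.

import numpy as np

def closest_point_to_rays(origins, directions):
    # each ray is o + t*d; minimize the sum of squared distances of P to the rays
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P_perp = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
        A += P_perp
        b += P_perp @ o
    return np.linalg.solve(A, b)

# usage with two hypothetical camera centres and viewing directions
P = closest_point_to_rays(
    [np.array([0.0, 0.0, 0.0]), np.array([0.13, 0.0, 0.0])],
    [np.array([0.10, 0.05, 1.0]), np.array([-0.02, 0.05, 1.0])])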
If two calibrated cameras observe the same scene point P, its 3D coordinates can be computed as the intersection of two such rays. Epipolar geometry is the basis of a system with two cameras (the principle of stereo vision). It is illustrated in Fig. 6.

Fig. 6. The epipolar geometry
Let O_c1 and O_c2 represent the optical centres of the first and second camera, respectively. The same consideration holds if one camera takes two images from two different locations. In that case O_c1 represents the optical centre of the camera when the first image is obtained, and O_c2 represents the optical centre for the second image. p_1 and p_2 denote the images of the 3D point P. The baseline is the line joining the camera centres O_c1 and O_c2. The baseline intersects the image planes in the epipoles e_1 and e_2. Alternatively, an epipole is the image of the optical centre of one camera in the other camera. Any scene point P and the two corresponding rays from the optical centres O_c1 and O_c2 define an epipolar plane. This plane intersects the image plane in the epipolar line. In other words, an epipolar line is the projection of the ray in one camera into the other camera. Obviously, the ray O_c1 P represents all possible positions of P for the first image and is seen as the epipolar line l_2 in the second image. The point p_2 in the second image that corresponds to p_1 must thus lie on the epipolar line l_2 in the second image, and conversely. The fact that the positions of two corresponding image points are not arbitrary is known as the epipolar constraint. This is a very important statement for stereo vision. The epipolar constraint reduces the dimensionality of the search space for a correspondence between p_1 and p_2 in the second image from 2D to 1D.
A special relative position of the stereo cameras is called the rectified configuration. In this case the image planes coincide and the line O_c1 O_c2 is parallel to them, as shown in Fig. 7.

Fig. 7. The rectified configuration of two cameras

The epipoles e_1 and e_2 go to infinity, and as a consequence the epipolar lines coincide with the image rows. For the rectified configuration, if the internal calibration parameters of both cameras are equal, corresponding points can be sought in a 1D space along the image rows (epipolar lines).
The optical axes are parallel, which leads to the notion of disparity, often used in the stereo vision literature. A top view of a two-camera stereo configuration with parallel optical axes is shown in Fig. 8. The world coordinate system is parallel to the cameras’ coordinate systems. The origin O_w of the world coordinate system is placed midway along the baseline. The coordinate z_w of a point P represents its distance from the cameras (which lie in the plane z_w = 0), and can be calculated from the disparity d = u_1 - u_2. The values u_1 and u_2 are measured at the same height (the same row of both images). Noting that

u_1 / f = (x_w + B/2) / z_w ,      u_2 / f = (x_w - B/2) / z_w ,                 (23)

we have

z_w = B f / d .                                                                  (24)
The remaining two coordinates of the 3D point P can be calculated from equations:

x_w = -B (u_1 + u_2) / (2 d) ,      y_w = B v_1 / d                              (25)

Fig. 8. Top view of two cameras with parallel optical axes rectified configuration
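As a small numerical illustration of equations (23)-(25), the following sketch recovers the 3D coordinates of a point from its disparity; the baseline, the focal length expressed in pixels and the pixel coordinates are hypothetical values, not measurements from the experiments.

def point_from_disparity(u1, v1, u2, B=0.13, f_px=1200.0):
    # d = u_1 - u_2 is the disparity measured along the common image row
    d = u1 - u2
    z_w = B * f_px / d                # eq (24)
    x_w = -B * (u1 + u2) / (2 * d)    # eq (25)
    y_w = B * v1 / d                  # eq (25)
    return x_w, y_w, z_w

print(point_from_disparity(652.0, 40.0, 610.0))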
The position of the point P in the 3D scene can thus be calculated from the disparity d. The question is how the same point can be found in the two images when the same scene is observed from two different viewpoints. The solution of this correspondence problem is a key step in any stereo vision system. Automatic solution of the correspondence problem is still under extensive investigation, and until now there is no solution for the general case. The inherent ambiguity of the
correspondence problem can in practical cases be reduced using several constraints. A vast list of references about this task can be found in (Sonka, et al. 2008).
The geometric transformation that changes a general camera configuration with non-parallel epipolar lines into one with parallel epipolar lines is called image rectification. A more detailed explanation of computing the image rectification can be found in (Sonka, et al. 2008).
4. Robot calibration using computer vision
Measurement of robot manipulator end-effector pose (i.e. position and orientation) in the
reference coordinate system is unquestionably the most critical step towards a successful
open-loop robot calibration. A variety of measurement techniques ranging from coordinate
measuring machines, proximity measuring systems, theodolites, and laser tracking
interferometer systems to inexpensive customized fixtures have been employed for
calibration tasks. These systems are very expensive, tedious to use, or have a low working volume (Driels, 1994; Khalil, et al. 1995; Vincze, et al. 1994). In general, the measurement system should be accurate, inexpensive and operated automatically. The goal is to minimize the calibration time and the robot’s unavailability.
To overcome the above limitations, advances in robot calibration allow the use of computer vision to calibrate a robot. Compared to the mechanical measuring devices mentioned above, a camera system is low cost, fast, automated, user-friendly, non-invasive and can provide high accuracy (Zhuang & Roth, 1994).
There are two types of setups for vision-based robot pose measurement. The first one is to
fix cameras in the robot environment so that the cameras can see a calibration fixture
mounted on the robot end-effector while the robot changes its configuration. The second
typical setup is to mount a camera or a pair of cameras on the end-effector of the robot
manipulator (Albada, et al. 1994; Meng & Zhuang, 2007; Motta, et al. 2001; Motta &
McMaster, 2002).
The stationary camera configuration requires the use of a stereo system placed at a fixed location. It is not possible to compute the 3D position of a scene point P from only one projection p on the camera plane. The stereo system has to be placed in a location that maintains the necessary field-of-view overlap. The proper camera position needs to be selected empirically. The stereo system must be calibrated before the manipulator calibration. The manipulator is placed in a number of configurations. From a pair of images the location (position and orientation) of the calibration board is computed for every configuration (Fig. 9). At each configuration, the geometric parameters are updated by adding Δg (calculated in accordance with equation (10)) to the current value of g.
If it is enough to measure only the end-effector pose (usually the tool’s tip) for robot calibration, then it is not necessary to use a calibration plate. Based on a pair of images of the manipulator tool, the 3D position of its tip is calculated. In this case, the main problem is the automatic detection of points matching the manipulator’s tip in both images.
This type of setup has two distinct advantages. First, it is non-invasive: the cameras are normally installed outside of the robot workspace, and need not be removed after robot

calibration. Second, there is no need to identify the transformation from the camera to the
end-effector, although this transformation is easy to compute in this case.

Fig. 9. A manipulator calibration using stationary camera configuration
The major problem existing in all stationary camera setups is system accuracy. The accuracy improves as the distance between the stereo system and the object point decreases. An approximate estimate of the error in the point coordinates, for a simplified case, is given by

e = d (Δl / f),

where e is the maximum 1D error in the point coordinates due to the image quantization error, d is the distance from the point to the stereo system, Δl is half of the 1D physical size of an image pixel, and f is the focal length.
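For orientation, with purely illustrative values d = 1 m, f = 4 mm and Δl = 2.5 µm (half of a 5 µm pixel), the estimate gives e = 1000 mm · (0.0025 mm / 4 mm) ≈ 0.6 mm; these numbers are assumptions chosen only to indicate the order of magnitude of the quantization error.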
In the case of a stereo system with parallel optical axes one more problem exists: the small field of view seen by both cameras. In order to have a larger scene area overlapped by both cameras, each camera has to be tilted towards the geometrical center line of the two cameras.
The moving camera approach (a camera on the end-effector) can resolve the conflict
between high accuracy and large field-of-view of the cameras as the cameras only need to
perform local measurements. The global information on the robot end-effector pose is
provided by a stationary calibration fixture (Fig. 10.). In general, eye in hand robot
calibration can be classified into two-step and one-step method.
Let us start with the two-step stereo camera setup case. The stereo cameras are rigidly fixed to the end-effector of the manipulator, as shown in Fig. 10. In the first step the stereo cameras are calibrated. After camera calibration (when the internal and external camera parameters are known), the 3D position of any object point can be computed from its images with respect to the camera coordinate system. Since the camera coordinate system is fixed with

respect to the end-effector coordinate system, it moves with the manipulator from one calibration configuration to another. In that way the position of the cameras becomes known in the world coordinate system at each manipulator configuration. Thus the homogeneous transformation A_w,C can be calculated for every configuration. For the known transformations A_w,B and A_E,C it follows that A_w,C(q,g) = A_w,B A_B,E(q,g) A_E,C. Thus the geometric parameters of the manipulator can be identified from the set of transformations A_w,C(q,g).




Fig. 10. A manipulator calibration with hand-mounted cameras
In a monocular camera setup, a camera is rigidly fixed to the moving end-effector. In accordance with the procedure presented in section 3.2, the internal and external parameters of the camera are calculated by observing a planar target (or targets). In the next phase, the robot is moved from one configuration to another. The external camera parameters are calculated at each manipulator configuration, with the internal parameters kept fixed. This means that the position of the end-effector is computed for each manipulator configuration. The manipulator geometric parameters can then be estimated using the obtained positions.
In a one-step method, both the camera parameters and the manipulator geometric parameters are identified simultaneously. This method can be divided into stereo camera and monocular camera setups. The paper (Zhuang & Roth, 1994) focuses on the one-step method and compares it with the two-step method.
In the moving camera approach, as the cameras are mounted on the robot end-effector, the method is invasive. The second disadvantage of this method is that it normally computes the position of the camera instead of the end-effector. Thus a remaining task is to identify the transformation from the camera system to the tool system, which is non-trivial (Meng & Zhuang, 2007; Tsai & Lenz, 1989).

5. The procedure for automated calibration of manipulators using computer
vision
Visual stereo systems are increasingly used as standard components of computer integrated manufacturing (Tian, et al. 2010). The cameras are normally installed outside of the robot workspace. Keeping in mind what was previously stated, an automatic calibration procedure using a fixed stereo system is presented here. The requirement is automatic manipulator calibration without operator intervention and without additional equipment.
The first step is to use the vision system for correct detection of the manipulator’s end-effector. Thus, it is recommended to place a marker on the end-effector of the manipulator. Marker design is a very important step in the marker detection problem when the SIFT algorithm is used. The recommended planar (black-and-white) marker (shown in Fig. 11 – (1)) meets several requirements: it is very easy to create and mount on the manipulator’s end-effector, it is suitable for automatic recognition, the characteristic point in the center of the marker is defined very precisely, etc. The first step in the automatic calibration of the manipulator is marker recognition at any point of the robot workspace. This task is a typical problem of object recognition: the marker has to be found in an image (Fig. 12) using a training image of the marker. Training images of different marker patterns are shown in Fig. 11.

Fig. 11. Training images of marker
Test results of marker recognition with different plain textures using the SIFT algorithm showed that the proposed marker has 6 matches, which is the best result. Further, Marker11 has 3 matches, Marker4 and Marker7 have 2 matches and Marker5 has 1 match, which is not a sufficient number of matches for marker detection using the SIFT algorithm. The other markers have no matches. Comparing different marker patterns, it was shown that the proposed marker is very simple to implement and that the reliability of its automatic detection is the highest.
Automatic recognition of the marker in an image of the robot is a general problem of object recognition. Object recognition in cluttered real-world scenes requires local image features that are

unaffected by nearby clutter or partial occlusion. The features must be at least partially
invariant to illumination, 3D projective transforms, and common object variations.
However, the features must also be sufficiently distinctive to identify specific objects among
many alternatives. The difficulty of the object recognition problem is due in large part to the
lack of success in finding such image features. However, recent research on the use of dense
local features has shown that efficient recognition can often be achieved by using local
image descriptors sampled at a large number of repeatable locations (Matthew & Lowe,
2002).
SIFT (Lowe, 2004) is an algorithm used for the detection and description of local image features in the area of computer vision. The algorithm extracts points of interest of the desired object, for any type of object in the image, which correspond to the centres of characteristic features. Using the results of the algorithm, the object can be located in an image containing plenty of other objects, and the algorithm is also suitable for matching corresponding points, which can be useful for 3D scene reconstruction. The primary goal of the SIFT algorithm is the identification of image feature locations in image scale space that are invariant with respect to: the size of the object, translation, rotation, occlusion, variations of illumination, 3D object projective transformations and deformation. Object models are represented as 2D locations of SIFT features that are invariant to affine transformations.
The SIFT algorithm is very robust and has become an industrial standard in the area of computer vision thanks to its invariance to the effects mentioned above. Bearing in mind these good features, SIFT was used for marker detection in the image of the manipulator’s workspace.
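A minimal sketch of this detection step using the OpenCV implementation of SIFT with ratio-test matching; the file names and the matching threshold are illustrative, not the values used in the experiments described below.

import cv2

template = cv2.imread("marker_training.png", cv2.IMREAD_GRAYSCALE)   # training image of the marker
scene = cv2.imread("robot_workspace.png", cv2.IMREAD_GRAYSCALE)      # image of the robot workspace

sift = cv2.SIFT_create()
kp_t, des_t = sift.detectAndCompute(template, None)
kp_s, des_s = sift.detectAndCompute(scene, None)

# match descriptors and keep only matches passing the ratio test
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_t, des_s, k=2) if m.distance < 0.75 * n.distance]

# image coordinates of the matched keypoints in the workspace image (candidate marker area)
marker_points = [kp_s[m.trainIdx].pt for m in good]
print(len(good), "matches")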

Fig. 12. Characteristic points on the images and their matching as a result of the SIFT
algorithm
The outcome of marker detection using the SIFT algorithm is illustrated in Fig. 12. The same figure also shows the detected characteristic points and the result of their matching. The conclusions derived from the properties of the SIFT algorithm are confirmed by experiments (also illustrated in Fig. 12). On the basis of the marker pattern, SIFT detects several

characteristic points in the image of the robot, which lie in the area of the marker. The invariance of the SIFT algorithm to the inconsistencies mentioned above is thereby confirmed. It should be noted that a large area of the robot workspace is white. This is a significant drawback, keeping in mind that parts of the marker are white too. From this standpoint it can be argued that the SIFT algorithm gives satisfactory detection reliability of the requested object area. On the other hand, the plain texture of the marker (i.e. the insufficient density of local features) makes the correspondence between points of the training image and the searched image insufficiently accurate (Mikolajczyk & Schmid, 2004). Hence, the marker center cannot be precisely detected using SIFT alone. Some corresponding points in the searched image fall outside of the marker area.
For these reasons, the SIFT algorithm cannot be used for accurate determination of the reference point in the center of the marker. It is necessary to use another method to determine the marker borders and the reference point in its center. For this purpose, using the characteristic image features obtained by SIFT, one can determine the area of the marker and then apply the Canny edge detector to that image segment.
The area close to the marker is reliably detected by SIFT in the image being searched (see Fig. 13). In the area obtained in this way, recognition algorithms based on edge detection are no longer critical. Also, the marker has a specific form, so the reliability of the edge detection can be increased. Keeping the above in mind, it is recommended to apply the Canny algorithm to the image segment obtained by SIFT. The result of the Canny algorithm, applied to the image segment shown in Fig. 13.a, is illustrated in Fig. 13.b. Several experiments confirm accurate detection of the marker edges using the Canny algorithm (illustrated in Fig. 13). Based on the detected edges of the marker, it is easy to determine the position of the marker reference point.
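A small sketch of this second stage, assuming the SIFT matches have already yielded a bounding box around the marker; the Canny thresholds are illustrative, and the centroid of the edge pixels is used here only as one simple way to estimate the reference point.

import cv2
import numpy as np

def marker_reference_point(scene_gray, box):
    # 'box' = (x, y, w, h) is the marker area located by SIFT (assumed given)
    x, y, w, h = box
    roi = scene_gray[y:y + h, x:x + w]
    edges = cv2.Canny(roi, 50, 150)        # edge map of the marker segment
    ys, xs = np.nonzero(edges)             # coordinates of the detected edge pixels
    if len(xs) == 0:
        return None
    # centroid of the edge pixels as a simple estimate of the marker reference point
    return x + xs.mean(), y + ys.mean()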

Fig. 13. a) The area of marker ; b) Marker edges determined using Canny edge detector
After accurate detection of the marker position (and of the reference point on the marker) in one camera’s image plane, it is necessary to determine the 3D position of the marker reference point (the manipulator end-effector or tool) using the same scene viewed from the second camera. To solve this task it is necessary to determine, once again, corresponding points in both camera images. The requirements on the correspondence are significantly different in this phase of the calibration procedure. The manipulator images obtained from the cameras of the stereo system contain the marker image recorded simultaneously. It follows that in the process of correspondence determination, invariance to object size, rotation, illumination and object deformation is not required; only light invariance to occlusion and 3D projective transforms is needed. Since the object images (markers) are translated along the epipolar line in the two images of the stereo system, invariance to translation is not allowed: the translation (disparity) between the two images is the basic information that has to be accurately determined using the stereo system.

In the paper (Maric & Djalic, 2011) an algorithm based on correlation of the most similar intensity areas has been proposed. The algorithm assumes that many pixels have similar intensity (color), without a distinctive texture. Therefore, the correlation of two single pixels does not provide sufficient information, because many similar candidates exist. Thus, the correlation of adjacent pixels forming windows of h x w pixels is determined. When a stereo system with parallel optical axes is used, the epipolar lines of both cameras lie at the same height in both images, as shown in Fig. 14.

Fig. 14. Windows position of two corresponding points
A window of h x w pixels is formed. The window’s central pixel represents the marker reference point in one of the two images from the stereo system (e.g. the left image). This window is used as the reference area to be searched for in the second image (i.e. the right image). In the second image a window of the same size is observed at the same height as in the first image. By changing the window disparity d, the second window slides along the u axis. The measure of the intensity similarity of the two windows, i.e. the criteria function, is calculated as the sum of squared differences of all pixel intensities in the two windows:
c(u, v, d) = Σ_(h,w) [ Im_L(u + h, v + w) - Im_R(u + h + d, v + w) ]^2           (26)

The value of the disparity d for which the minimal value of the criteria function is obtained gives the position of the window that is best correlated with the reference window. Therefore, the corresponding windows are at the same height in both images, but shifted along the u axis by:

disparity(u, v) = arg min_d c(u, v, d)                                           (27)
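A compact sketch of this window-based search, implementing the sum-of-squared-differences criteria function (26) and the disparity selection (27); the window size and disparity range are illustrative, and the images are assumed to be rectified grayscale arrays.

import numpy as np

def find_disparity(img_left, img_right, u, v, half=10, d_max=100):
    # reference window of (2*half+1) x (2*half+1) pixels centred on the marker point
    ref = img_left[v - half:v + half + 1, u - half:u + half + 1].astype(np.float64)
    best_d, best_c = 0, np.inf
    for d in range(-d_max, d_max + 1):     # slide the window along the epipolar line (row v)
        lo, hi = u + d - half, u + d + half + 1
        if lo < 0 or hi > img_right.shape[1]:
            continue
        cand = img_right[v - half:v + half + 1, lo:hi].astype(np.float64)
        c = np.sum((ref - cand) ** 2)      # criteria function c(u, v, d), eq (26)
        if c < best_c:
            best_c, best_d = c, d
    return best_d                          # disparity minimizing the criteria function, eq (27)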
Tests were conducted on a modular Robix manipulator. The robotic structural system Robix RCS-6 combines light industrial properties with the ease of use of educational robots. It is a modular system that allows a manipulator configuration formed by six rotational joints. The RCS-6 is primarily intended for use by schools and universities, and it can be a productive and useful tool. The joint drives are DC motors. To control the Robix manipulator, external access to the control functions of the Rascal Control Software is possible through a DLL (dynamic link library) from any programming language. The robot has a repeatability of 5 mm.
For the purpose of calibration, a system with two cameras set parallel on all axes was used. A fixed stereo camera system was used for image recording. The cameras used for the implementation of
the stereo camera system are off-the-shelf Logitech C120 cameras with adjustable focus, set to record at 1280x1024 resolution (Kosic, et al., 2010). The stereo system baseline, i.e. the distance between the cameras (optical axes), is 13 cm.
The algorithm was tested with the markers (Fig. 11) placed on the end-effector of the modular Robix manipulator. Fig. 12 and Fig. 15 present images from the left and right cameras, respectively. Fig. 16 shows a graphical representation of the criteria function as the disparity changes along the epipolar line, from its minimum to its maximum value. It is obvious, as shown in Fig. 16, that a reliable method of determining the corresponding points is obtained by using the marker and the selected criteria function. The selected criteria function has a pronounced global minimum.

Fig. 15. Result of marker detection in the image from the right camera of the stereo system using the area-based correlation algorithm

Fig. 16. Graphical representation of criteria function

For successful finding of corresponding points, the choice of the window size (h x w pixels) is crucial. In the classical correspondence problem, if the window size is too small, the probability of a large number of candidates for correspondence increases, which increases the probability of a wrong selection of corresponding points. On the other hand, if the window size is too large, errors may appear because a constant value of the disparity is assumed within the window. Therefore, there is no single recommendation for the best window size. In special cases even an adaptive window size is suggested, but such algorithms are generally very complex, computationally demanding and not widely accepted in practice.
In accordance with the previous demonstration, the window size will depend on the size of the marker when it is necessary to determine the marker correspondence in the two images. The marker is an area with nearly two constant intensities (colors). Assumptions about the effect of the window size (relative to the marker size) on the reliability of the correspondence procedure have been analysed and tested in (Maric & Djalic, 2011). The physical marker dimensions are 5x5 mm, which corresponds to a size of 21x21 pixels. The window size has been varied from 5x5 to 37x37 pixels. The value of the criteria function is divided by the number of pixels belonging to the window; in this way the criteria function represents the average inconsistency per pixel of the two windows. The diagram of the change of the minimum value of the criteria function with the change of the window size is illustrated in Fig. 17.

Fig. 17. Change of the minimum value of the criteria function with the window size
The illustration confirms that the best results are achieved by adopting a window size close to the marker size.
Parallel manipulators are emerging in industry. The main property of these manipulators is that their end-effectors are connected to the base by several kinematic chains, rather than by one as in standard serial manipulators. This allows parallel manipulators to bear higher loads, at higher speed and often with higher repeatability. However, the large number of links and passive joints often limits their performance in terms of accuracy. A kinematic calibration is thus needed. Even though the kinematic model of a parallel manipulator

is different from that of a serial one, the calibration methods and procedures presented above for serial manipulators can be used for parallel manipulators (Renaud, et al. 2006).
6. Improving flexibility of industrial robots
To respond to rapid changes in product design, manufacturers need a more flexible fabrication system. To increase the flexibility of a production system, the first step is improving the flexibility of the machine-serving robots.
To increase the flexibility of industrial robots (with conventional fixed-anatomy manipulators), the handling system is equipped with a tool change system. Today’s industry mainly uses industrial robots with automatic tool change, which increases a robot’s productivity and flexibility. However, conventional fixed-anatomy manipulators, even when equipped with an automatic tool change system, do not satisfy the requirements of adapting such robots to variable tasks and environments.
In recent years, modular reconfigurable manipulators have been developed to fulfil the requirements of flexible production systems. Such a manipulator is composed of interchangeable link and joint modules of various sizes and shapes. By reconfiguring the modules, different manipulators can be created to meet a variety of task requirements using standard mechanical and electrical interfaces. Serial and parallel modular reconfigurable manipulators are under development. New modular reconfigurable manipulators can be easily reassembled into a variety of configurations and different geometries (Bi, et al. 2003; Chen, et al. 2003; Yim, et al. 2003).
Every reconfiguration of the manipulator’s anatomy causes a change in the geometry of its kinematic chain. It is necessary to establish the model’s form and the exact parameter values. This is realized by the automatic identification method described by the presented algorithm.
To achieve a high level of flexibility in complex production systems, the manipulator’s flexibility is not enough (especially with cooperative work and a changing environment). Flexibility of the other parts of the flexible production cells is needed too.
During the course of manufacturing processes it is necessary to fix, locate and position the work piece or product. This is referred to as fixturing. For a production system to be fully flexible, all of its components have to be flexible, including the fixtures. Reconfigurable fixtures have the ability to be changed (reconfigured) to suit different parts and products. A reconfigurable fixture sets the product interface point to the correct position by the use of an external measuring device. With the external measuring device it is possible to position the key features of the product to be constrained and to build the fixture top-down instead of bottom-up. Several reconfigurable fixtures have been developed (Jonsson & Ossbahr, 2010).
Different approaches to repositioning a fixture have been tested. It can be done manually, by actuators, or by using robots.
The external measuring system adds cost. An NC (Numerical Control) machine can be used for measurement, but it is a time-consuming process, and the cycle time of the manufacturing process has to allow this type of operation.

For a more automated reconfiguration it is recommended to use robots for repositioning and a computer vision system to measure the position of the pick-up interface that will hold the part. Furthermore, using a robot and computer vision already present in manufacturing is economically the best solution, since it does not constitute an extra cost. The proposed algorithm supports accurate and effective task execution designed according to the principles of full flexibility. During the execution of the main program for the management of the flexible production cell, the accuracy of the executed movements is monitored based on the marker’s position on the tip of the tool(s) and on the fixture. In the case of a small geometry change, the parameters of the proper model are automatically recalibrated in real time. For details, see the explanation in (Maric & Potkonjak, 1999).
Machining setup verification is widely used before starting the actual machining operation. It is particularly time consuming in the case of highly flexible manufacturing systems. The paper (Tian, et al. 2010) presents a computer vision system to quickly verify the similarity between the actual setup and its digital model. This enables the integration of CAD (Computer-Aided Design) and CAM (Computer-Aided Manufacturing), and a higher flexibility of the manufacturing system.
7. Conclusion
In this chapter an algorithm for the automatic identification of the kinematic model of a manipulator’s geometry, in order to increase its accuracy and flexibility, has been presented. A marker and a stereo system with parallel optical axes are used for the measurement of the 3D position of the tool’s tip and/or the fixtures of work pieces. To achieve complete automation, accuracy improvement and reliability in the evaluation of the parameter estimates, a combination of well-known image processing algorithms (SIFT, Canny and area-based correlation) is proposed. The illustrations given in the text confirm the agreement between the conducted analysis, the expected features of the algorithm and the results of the experiments. The algorithm has been analyzed in the laboratory, so additional verification in an industrial environment is necessary. Hence, it is necessary to continue with the analysis of the level of the algorithm’s invariance in adverse exploitation conditions. This primarily refers to a larger object density in the workspace (occlusion and collision), poor lighting and extreme marker rotation. Furthermore, it is necessary to conduct an analysis of reliability and accuracy, after which one could also determine the orientation of an industrial manipulator’s end-effector using the proposed procedure.
8. References
Albada, G. Lagerberg, J. & Visser, A. (1994). Eye in Hand Robot Calibration, Int. J. Industrial
Robot, Vol. 21 No. 6, pp. 14-17.
Bay, S. (1993). Autonomous parameter identification by optimal learning control, IEEE
Control Systems Magazine, Vol. 13, No. 3, pp. 56-61.
Bennett, D. & Hollerbach, J. (1991). Autonomous calibration of single-loop closed kinematic
chains formed by manipulators with passive endpoint constraints, IEEE Trans. on
Robotics and Automation, Vol. 7, No. 5, pp. 597–606.
Bi, Z. Gruver, W. & Zhang, W. (2003). Adaptability of Reconfigurable Robotic Systems, Proc.
of the IEEE International Conference on Robotics and Automation, Vol. 2, pp. 2317-2322.

Chen, W. Yang, G. Ho, E. & Chen, I. (2003). Iterative-Motion Control of Modular
Reconfigurable Manipulators, Proc. of the IEEE/RSJ Int. Conf, on Intelligent Robots and
Systems, pp. 1620-1625.
Driels, M. (1994). Automated partial pose measurement system for manipulator calibration
experiments, IEEE Trans. on Robotics and Automation, Vol. 10, No. 4, pp. 430–440.
Elatta, A. Gen, L. Zhi, F. Daoyuan, Y. & Fei, L. (2004). An Overview of Robot Calibration,
Information Technology Journal, Vol. 3, No. 1,pp. 74-78, ISSN 1682-6027.
Jackson, E. Lin, Z. & Eddy, D. (1995). A global formulation of robot manipulator kinematic
calibration based on statistical considerations, Proc. of IEEE Conf. on Systems Man
and Cybernetics, pp. 3328-3333.
Jonsson, M. & Ossbahr, G. (2010). Aspects of reconfigurable and flexible fixtures, in
Production Engineering Research, Springer, pp. 333-339.
Kang, D. Ha, J. & Jeong, M. (2008) Detection of Calibration Patterns for Camera Calibration
with Irregular Lighting and Complicated Backgrounds, Int. J. of Control, Automation,
and Systems, Vol. 6, No. 5, pp. 746-754.
Khalil, W. & Dombre, E. (2004). Modeling, Identification and Control of Robots, Kogan Page Science, ISBN-10: 190399666X.
Khalil, W. Garcia, G. & Delagarde J. (1995). Calibration of geometrical parameters of robots
without external sensors, Proc. IEEE Int. Conf. On Robotics and Automation, Vol. 3,
pp. 3039-3044.
Khalil, W. Gautier, M & Enguehard, Ch. (1991) Identifiable parameters and optimum
configurations for robots calibration, Robotica, Vol. 9, pp. 63-70.
Lowe, D. (2004). Distinctive Image Features from Scale-Invariant Keypoints, International
Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110.
Maric P. & Potkonjak V. (1999). Geometrical Parameters Estimation for Industrial
Manipulators Using Two-step Estimation Schemes, J. of Intelligent and Robotic
Systems, Vol. 24, pp. 89-97.
Maric, P & Djalic, V. (2011). Choice of Window Size in Calibrating the Geometry of
Manipulators Based on the Regions Correlation, Electronics, Vol. 15, No. 1, pp. 45-53.
Matthew, B. & Lowe, D. (2002). Invariant Features from Interest Point Groups, Proc. of British
Machine Vision Conference, pp. 656-665.
Meng, V. & Zhuang, H. (2007). Autonomous robot calibration using vision technology, Int. J.
Robotics and Computer-Integrated Manufacturing, No. 23, pp. 436–446.
Mikolajczyk, K. & Schmid, C. (2004). Scale & Affine Invariant Interest Point Detectors,
International Journal of Computer Vision, Vol. 60, No.1, pp. 63–86.
Motta, J. & McMaster, R. (2002). Experimental Validation of a 3-D Vision-Based
Measurement System Applied to Robot Calibration, J. of the Braz. Soc. Mechanical
Sciences Copyright, Vol. 24, pp. 220-225.
Motta, J. Carvalho, G. & McMaster, R. (2001). Robot calibration using a 3D vision-based
measurement system with a single camera, Int. J. Robotics and Computer Integrated
Manufacturing, Vol. 17, No. 6, pp. 487-497.
Perez, U. Cho, S. & Asfour, S. (2009). Volumetric Calibration of Stereo Camera in Visual Servo Based Robot Control, International Journal of Advanced Robotic Systems, Vol. 6, No. 1, ISSN 1729-8806, pp. 35-42.

Renaud, P. Andreff, N. Lavest, J. & Dhome, M. (2006). Simplifying the Kinematic Calibration
of Parallel Mechanisms Using Vision-Based Metrology, IEEE Trans. on Robotics and
Automation, Vol. 22, No.1, pp. 12-22.
Renders, J. Rossignol, E. Besquetand M. & Hanus, R. (1991). Kinematic calibration and
geometrical parameter identification for robot, IEEE Trans. on Robotics and
Automation, Vol. 7, No. 6, pp. 721–732.
Sonka, M. Hlavac V. & Boyle, R. (2008). Image Processing, Analysis, and Machine Vision,
Thomson, ISBN-10: 049508252X.
Sun, Y. & Hollerbach J. (2008), Active Robot Calibration Algorithm, Proc. of ICRA, pp. 1276-
128.
Tian, X. Zhang, H. Yamazaki, K. & Hansel, A. (2010). A study on three-dimensional vision
system for machining setup verification, Int. J. Robotics and Computer-Integrated
Manufacturing, No. 26, pp. 46–55.
Torreão, J. (Ed.). (2011). Advances in Stereo Vision, InTech, ISBN 978-953-307-837-3, Rijeka,
Croatia.
Tsai, R. & Lenz, R. (1989). A New Technique for Fully Autonomous and Efficient 3D
Robotics Hand/Eye Calibration, IEEE Trans. on Robotics and Automation, Vol. 5, No.
3, pp. 345-358.
Tsai, R. (1987). Versatile Camera Calibration Technique for High-Accuracy 3D Machine
Vision Metrology Using Off-the-shelf TV Cameras and Lenses, IEEE J. Robotics and
Automation, Vol. RA-3, No. 4, pp. 323-344.
Vincze, M. Prenninger, J & Gander, H. (1994). A laser tracking system to measure position
and orientation of robot end effectors under motion, Int. J. Robotics Research, Vol.13,
No. 4, pp. 305-314.
Weng, J. Cohen, P. & Herniou, M. (1992). Camera Calibration with Distortion Models and
Accuracy Evaluation, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.
14, No. 10, pp. 965-981.
Yim, M. Roufas, K. Duff, D. Zhang, Y. & Homans, S. (2003). Modular Reconfigurable Robots in Space Applications, Journal Autonomous Robots, Kluwer Academic Publishers Hingham, Vol. 4, No. 2-3, pp. 225-237.
Zhuang, H. & Roth, Z. (1994). On Vision-Based Robot Calibration, Proc. of SOUTHCON 94,
pp. 104-109.
Zhuang, Z. (2008). A Flexible New Technique for Camera Calibration, Technical Report MSR-
TR-98-71.
Part 4
Telecommunication

8
A Framework for VoIP Testability
and Functionality Extension with
Interactive Content Delivery
Janez Stergar, Janez Klanjšek and Sibila Vadlja
University of Maribor, Faculty of Electrical Engineering
and Computer Science
Slovenia
1. Introduction
Telephony as we know it changed when VoIP emerged in 2004. In the third quarter of 2000 the second generation of IP-enabled phones came to the market with full QoS capability. Performance was quadrupled and the basic voice functionality was extended with the addition of a large screen with HTTP/XML-driven capability. That was a major market breakthrough, extending VoIP phones with the possibilities of interactive content delivery.
Classic telephony systems are being rapidly replaced by IP Telephony (IPT) in corporate and home environments. IPT in particular has gained wide acceptance in industry, offering new ways to exchange information with rich media communication capabilities. Therefore stationary telephony is migrating into the internet. Voice over packet-switched networks can significantly reduce the per-minute cost, resulting in reduced long distance costs. Therefore many dial-around calling schemes already rely on VoIP backbones to transfer voice. There is even more potential in extending VoIP with interactive content using applications tailored for the end user.
Successful deployment of any new technology solution requires thorough understanding of
the function of various components involved and the interaction among them. The
architects and engineers who are tasked with implementing the IPT solution must ensure
that the proposed architecture meets all the requirements and is also scalable in the future
(Kaza & Asadullah, 2005).
Therefore an IP Telephony framework will be introduced with the goal of demonstrating typical limitations and providing an IP Telephony delivery platform for interactive content applications. The VoIP framework can deliver IP Telephony services and internet access at the same time, based on one core device. We implemented VoIP technology with content delivery support using a common unified communication device in combination with typical networking devices such as a router and a switch. The platform is capable of carrying voice, data and multimedia traffic with QoS management. The system consists of diverse IP phones with dissimilar capabilities and a PoE entry access switch (delivering power for the

IP phones). Also, a router with an IP telephony services operating system is used as a gateway to simulate the WAN network environment. The framework is designed for IP phone application development and testing. Therefore a discussion is included in which critical implementation parameters in a real-time environment are evaluated and tested (jitter, frame dropping, priority queuing, etc.).
The extra value of IP telephony is the support for applications extending the voice capability of IP phones with interactive content delivery: the so-called IP phone services. These enable the presentation of multimedia content from a server located locally or anywhere in the Internet cloud. Applications are delivered from a server and rendered on IP phones using the HTTP protocol. The deployment of a typical application with regard to the XML capabilities will be discussed and the workflow of an application realization presented. The limits for image streaming will also be evaluated.
2. IP Telephony components
When deploying the IP Telephony solutions of almost any prospering IPT solutions
provider on the market the key areas to emphasize are (Kaza & Asadullah, 2005):
- Network Infrastructure,
- Call Processing,
- Call Manager Directory Services,
- IP Telephony endpoints,
- Call Admission Control,
- Legacy Fax Messages,
- Media Resources and
- Applications.
2.1 Network infrastructure
Network infrastructure plays a key role in building multiservice networks; e.g. Cisco AVVID (Architecture for Voice, Video and Integrated Data) is the foundation of converged enterprise communication networks. The integration of data and voice traffic puts strong requirements on packet loss, delay and jitter (the variable delay of VoIP packets). LAN/WAN components with support for QoS mechanisms are indispensable when designing IPT networks, as is fast convergence in case of network failures, to avoid destructive delay, jitter and frame dropping.
Voice traffic in addition to the existing data traffic increases the required bandwidth; in the LAN this is not a critical issue because of the availability of high-speed LAN switching technologies. However, when transporting the voice traffic across WANs, one has to ensure that adequate bandwidth is available to support the additional bandwidth required to transport voice calls. If the WAN links do not have adequate bandwidth, their bandwidth has to be increased to support the additional voice traffic. After the bandwidth is assured, QoS mechanisms have to be properly configured on the operating systems of LAN/WAN networking devices to prioritize voice traffic with adequate bandwidth allocation.

2.2 Call processing
The core of an IP Telephony solution is usually a software call manager. For Cisco devices this
software component is called Cisco CallManager. The specially designed IOS software with
embedded CallManager handles all the call-processing requests received from various clients
in the IP Telephony network. For the Cisco IP Telephony AVVID solution, the CallManager software runs on a compatible router or on the Microsoft Windows Server operating system. The Call Manager is installed on the Cisco Media Convergence Server (MCS) for medium to large-scale networks, but can also be operated from a router (CallManager Express) or from a specific device for smaller unified networks, e.g. the Cisco UC 500. The selection of the hardware platforms
depends on the size of the network in which IP Telephony is going to be deployed, including
its high-availability and performance requirements (typically 300-7500 devices per dedicated
server for medium to large-scale systems). For large-scale systems, clustering of servers is inevitable: Call Manager servers are grouped to form clusters to support more devices (IP
phones, gateways, etc.). For the current Cisco CallManager version, a Call Manager cluster can
have up to eight Call Manager servers running the call manager service.
2.3 Call Manager directory services
Call Manager stores system and device configurations in a Microsoft SQL database. The
application scripts and the following information are stored in a Lightweight Directory Access Protocol (LDAP) compliant directory, the so-called DC Directory (DCD):
- User authentication and authorization
- Extension Mobility profiles
- Personal Assistant profiles
- Internationalization information
- Personal Address Book (PAB)
- Spoken name
- Fast dial
- Call Forward All information
The DCD process replicates the information among the members of the cluster. This process
is similar to Microsoft SQL replication.
2.4 IP Telephony endpoints
In an IPT network, endpoints are the devices that accept or initiate a VoIP session. Typical endpoints are:
- IP phones,
- soft phones, e.g. the Cisco IP Communicator,
- wireless IP phones,
- voice gateways, which connect the IPT network to the PSTN or a PBX,
- Survivable Remote Site Telephony (SRST), which provides fallback support for IP phones connected behind a router running a suitable operating system software version that supports SRST, and
- in a specific Cisco environment the so-called Call Manager Express (CME), which delivers key system functionality for small and midsize branch offices using Cisco stationary, wireless and software IP phones.
2.5 Call admission control
In VoIP networks, Call Admission Control (CAC) does the bandwidth management. CAC
ensures that enough bandwidth is available before granting permission to a gateway for

placing the call across the IP WAN. When deploying IPT solutions with multiple locations,
there are two choices for implementing the CAC: Call Manager locations-based CAC and
Gatekeeper CAC. Call Manager locations-based CAC is one mechanism to limit the calls
sent across an IP WAN in a single Call Manager cluster deployment, whereas the
Gatekeeper CAC provides call admission and call routing between the Call Manager
clusters in distributed call processing deployments.
2.6 Legacy FAX messages
There is still a large portion of long-distance minutes based on legacy fax traffic. One of the
most important functionalities in the transition to converged networks is therefore support
for fax communications. As network implementations increasingly provide for e-mail
attachments and web-downloadable documents, fax communication nonetheless is still a
significant method of immediate document delivery worldwide. Three methods to transmit
legacy fax traffic across the IP network are common: the Pass-Through mode, where the gateways do not distinguish a fax call from a voice call; the Cisco proprietary Fax Relay mode, where the gateways terminate the T.30 fax signalling; and the plain old T.38 Fax Relay mode.
2.7 Media resources
The function of media resource devices is to mix the multiple streams into a single output
stream, converting the data stream from one compression type to another, and so forth. The
media resources can be hardware or software. The limitation of software media resources is
that they can't combine the streams that use different compression techniques. Hardware
media resources have the same features as software media resources with an additional
advantage of mixing the streams that use different compression types. Characteristic media
resources are conferencing, transcoding, and MoH (Music on Hold) which provides music
or announcements when the users are put on hold.
2.8 Applications
There is a wide range of applications that can be deployed in an IPT network. These
applications are optional, and their deployment adds more features and capabilities to the
overall IPT network. Design and deployment of the applications, such as Customer
Response Solution and IP Phone services is a very important topic for new converged IPT
networks. Cisco offers many proprietary services e.g. IVR, IPCC Express, Cisco Unity, Cisco
Emergency Responder, Cisco Conference Connection and so on. In the following we will try
to emphasize the non-proprietary solutions and therefore focus on IP Phone Services.
3. IP Telephony deployment architectures
By using the Call Manager software it is possible to bypass the plain old PBX and replace it
with IP Telephony over a next generation converged network. The Call Manager application
software provides call-control functionality and, when used in conjunction with IP
hardware/software phones, can provide PBX functionality in a distributed and scalable
manner. The deployment solution models of Cisco IPT can be categorized into one of the
following architectures (Kaza & Asadullah, 2005):

- Single-site deployment
- Centralized call processing with remote branches
- Distributed call-processing deployment
- Clustering over the IP WAN
Selection of the deployment model depends on implementation requirements, such as the
size of the network, features, and availability of the WAN bandwidth.

Fig. 1. Single - Site IPT Model
3.1 Single-site model
In this deployment model, Call Manager applications such as voice mail, IP-IVR,
AutoAttendant (AA), Transcoding, and conferencing resources are located at the same
physical location (Fig. 1).
All the IP phones are located within this single site. The PSTN is used to route the off-net calls.
3.2 Centralized call processing model
In this deployment model all the call processing is done at the central site. This is suitable
for implementations in which the majority of the workforce is concentrated at a single site
and small numbers of employees work at the remote branches (Fig. 2).

Fig. 2. Centralized Call-Processing Model

At each remote branch, SRST (Survivable Remote Site Telephony) routers ensure that call
processing is preserved in case of WAN link failure. The voice traffic travels via the IP WAN
and falls back to the PSTN if not enough bandwidth is available across the WAN link, by
using the Automated Alternate Routing (AAR) feature available in the Call Manager
software application.
This deployment model is cost effective and provides many benefits, such as a unified dial plan, less administrative overhead, and potential savings on communication costs, as calls from the remote branches use the IP WAN as the first choice. The only limitation is that the remote sites will have limited features available in the case of a WAN failure.
3.3 Distributed call processing model
In the distributed call-processing deployment architecture, Call Manager software and
applications are located at each site. Device weights and dial plan weight calculations
determine the number of IP phones supported at each site.
In the figure (Fig. 3) a distributed call-processing model is depicted in which headquarters
and branch Y IP phones are served by separate Call Manager clusters and branch X is served
by the Cisco CallManager Express (CME) feature that is enabled on the router. The CME solution is suitable for a small branch.
3.4 Large scale architecture – Clustering over the IP WAN
The Cisco IPT solution allows organizations to build disaster recovery sites by separating
the single Call Manager cluster across the WAN. Call Manager servers in a cluster update
the configuration information via the Microsoft SQL replication process. To ensure
successful SQL replication and propagation of other critical information in real time, the
round-trip time (RTT) between any Call Manager servers in the cluster should not exceed 40
ms. Many other requirements have to be satisfied before selecting this deployment model.

Fig. 3. Distributed Call Processing Deployment

When using the clustering over the IP WAN deployment model, voice gateways, media
resources, and voice mail have to be deployed locally at each site. Essential services such as
DHCP, DNS, and TFTP that are critical for the functioning of IP phones and other IPT
endpoints also require local implementation. This configuration avoids dependency on a
single site for crucial resources. Clustering over the WAN can support two types of
deployments:
- Local failover deployment model where each site contains a primary Call Manager
subscriber and at least one backup subscriber. All the servers are part of the same Call
Manager cluster.
- Remote failover deployment model where each site contains at least one primary Call
Manager subscriber and might or might not have a backup subscriber. Branch X and
branch Y as depicted on the previous models have only primary subscribers, and the
backup subscriber is not located in each site.
4. The framework for VoIP testability and functionality extension
In this chapter we will introduce the educationally tailored IP Telephony platform based on
the Centralized Call Processing Model (Fig. 4). The IPT platform is intended as a test
framework for QoS parameter evaluation, configuration and critical examination of VoIP
network parameters, as well as a delivery platform for multimedia applications on the
Cisco IP Phones.

Fig. 4. The VoIP test framework
The introduced VoIP framework represents a development environment for IP phone
applications using XML to add the extra value with additional interactive content on phones
with touch screen display option. The architecture model is applicable in small and mid-
sized business environments. It supports up to 100 IP phones and includes all the
functionalities that small businesses need: voicemail, auto-attendant, e-mail integration, and
call attainability functions. Voice mail provides a voice messaging system for cases when the
called person is not available; the auto-attendant provides a voice-narrated guidance system
and a greeting if required. The call attainability functions support the user call-seeking
service (Cisco Systems, 2011).
The core of the system is a unified communication (UC) appliance and a VoIP services
enabled router with a customized operating system (VoIP software extensions with CME).
The UC appliance and router support the convergence of voice communication, data
communication, mobile phone support and video (multimedia) support. Additionally the
UC appliance offers a wireless module supporting connection of wireless IP phones. It runs
special software for VoIP control, the so-called CallManager. The Call Manager processes
and routes the incoming calls, comparable to a PBX system; it is actually an IP-PBX system.
The UC appliance functions as a router and a switch for all connected devices, e.g. IP
phones, computers. The Call manager software offers centralized/distributed control of the
calls and routes the calls to the intended users (Cisco Systems, 2011). In the presented
platform a switch is used to connect and power IP phones in combination with a router. The
installed Call manager software works as an IP-PBX and simulates two separate VoIP
networks connected through a WAN via the H.323 trunk. The H.323 trunk encapsulates the
calls appropriately. Our platform is actually built of two separate local VoIP networks. The
ports used are all FastEthernet 100 Mbps ports.
The IP phones represent the end devices of the presented network platform. These are entry-level
to high-end IP phones intended for home and business environments. Some models have a
HiRes color display and support up to eight call lines which can be configured with
different numbers and speed-dial functions. Soft keys are traditionally supported; they are
programmable, and their functionality changes based on the configuration. Templates can be
used to apply the same configuration to multiple phones. All the configurations are
centralized. The Call manager software is used for central point phone management with
complete control over the IP phones in the system. The IP phones load the configuration
settings using the Call manager software integrated trivial file transfer protocol (TFTP)
server (Cisco Press, 2006).
UC management software similar to that on the router is implemented on the UC appliance. It
has a graphical user interface and supports the control and creation of the users' IP telephony
system. Users can be incorporated into the voicemail system; the creation of a voice-message
mailbox and e-mail notification of messages are also supported for every individual user
mailbox. Thus users can access voice messages from anywhere (Au et al., 2005, Cisco Press,
2011c). The system is configured on the network devices via the console line in text mode,
with the exception of the entry-level access switch, which is managed through a GUI. The
Call Manager software takes care of the operation of the VoIP network, while the Unity
Express software takes care of the users connected to the system.
4.1 VoIP communication and protocols
All main VoIP protocols can be used and analyzed in our system. Data-examination access
to the forwarded connections is implemented with port mappings. The network is
flexible so it can support various Session initiation protocol (SIP) IP phones from different
vendors as well as soft phones that run on computers. It can offer a SIP trunk or a H.323
trunk for WAN connection to a telephony service provider (Cisco Press, 2007; Hatting, S. et
al., 2010; H.323 Implementation, 2011). Students can observe the operation of SIP, H.323 and
other, proprietary protocols (e.g. the SKINNY protocol) in the VoIP network with the
Wireshark network analyzer (Wireshark, 2011). For voice transfer and direct communication
between two users the Real-time Transport Protocol (RTP) is used.
IP phones and attached computers use Virtual local area networks (VLAN) and Dynamic
host configuration protocol (DHCP). VLANs are used to provide basic security and Quality
of service (QoS). Actually data traffic is separated from voice traffic applying two VLANs,
thus voice traffic does not interfere with data traffic. Interference of the two kinds of traffic
could cause degradation of the voice communication service. By basic security
implementation we mean that no computer in the network can see the IP phones at OSI
Layer 2 and compromise their identity. DHCP is used to automatically configure network
parameters such as the IP address, network mask, and DNS address for both the IP phones
and the computers connected to the network. That reduces the time necessary for adding
new users and the administrators' work - there is no need to enter device configurations manually.
Every configuration for any device in the network can be done from a single point. That is
one of the tasks performed by the Call Manager software which runs on UC appliance (Deel,
D. & Nelson, M., 2004; Cisco Resources, 2011).
There is a trunk connection between the UC appliance and the router. A trunk is used to
carry multiple calls or data transfers over a single Ethernet link. The educational platform
supports SIP and H.323 trunks. We used a H.323 trunk. Using this kind of connection the
UC appliance and router act as two dial peers. The right parameters have to be configured
in the dial peer list to correctly route the incoming and outgoing calls. The dial peer list
presents similar information to the Call Manager software devices as a routing table does to
a forwarding router. A dial plan is also implemented: it defines the translation of local
phone numbers into global phone numbers for an outgoing call and vice versa for an
incoming call (similar to network address translation); for example, a local extension such as
101 would be mapped to its global directory number when a call leaves the system. The
system can also be designed so that the UC appliance has only one global telephone
number, which is used for registration at the telephony service provider. The UC appliance
then routes calls appropriately inside the local IP telephony network
(H.323 Implementation).
5. IP Phone services application development
Applications on stationary IP phones are an added value to IP telephony, especially in the
business environment. A phone actually transforms into a tool displaying business
information, multimedia applications or entertainment applications that serve the user's
needs. The IP network enables that functionality. Applications, also called IP phone services,
are interactive services which rely on the IP phone's keyboard or touch screen capability.
First we have to introduce how the IP phone actually invokes a service. The phone is drawn
into a service by virtue of a URL attached to a button on the phone, the so-called services
button. If the phone is equipped with a touch screen, the button functionalities are taken
over by hot-spot fields on the screen of the IP phone. The button assignment is done in one
of the Cisco CallManager Administration screens. The services button by default is assigned
a URL that points to a Web page on the CallManager server (GetServicesMenu.asp). It
simply presents the user with a menu of services that have been configured for that
particular phone. Each of the different menu items is connected to another URL.
It is important to remember that the Services Menu usually seen on the local CallManager
system is only one of many possible ways to invoke services. The Cisco IP Phones can have
a URL attached to the directories button and the messages button. Most system
administrators will confine the use of those buttons to specialized services.
5.1 Basic concepts
Here we introduce the building blocks for the development of applications on IP
phones. The integrated HTTP client on the IP phone enables the capability to deliver content
on the display of an IP phone. The content is gathered from a Web server with the HTTP
protocol. It has to be emphasized that applications are not part of a call service but actually are
data applications separated from VoIP communication. All they have in common with VoIP is
the shared IP network infrastructure (Fig. 4). Applications are displayed on the IP phone on
user demand. They are loaded from the Web servers on which they are resident. On the Web
server applications are developed, executed and changed. The interface for communication
between the Web server and the phone is the TCP/IP stack. Applications use TCP for reliable
transfer, which is used by HTTP and by the proprietary SKINNY protocol. The SKINNY
protocol runs on top of the TCP stack and is intended for signaling and control of the IP phone
by its central unit on the CME (Call Manager Express). The connection with the
central unit (CME, IP-PBX) is necessary because this is how the IP phone acquires its IP
address. The logical address is necessary for the connectivity between the IP phone and the
server. The SKINNY protocol uses the IP phone firmware for data link layer communication
with the central unit. The firmware is also responsible for the initiation of the HTTP request
calling the application on the server. Basically the data travels between the IP phone, IP PBX
(which serves as a proxy) and the Web server (Deel, D. & Nelson, M., 2004).
Let us examine the data flow in more detail: the firmware on the IP phone initiates an HTTP
request to the server (HTTP GET message). The request includes a URL (Uniform Resource
Locator) containing the address of the server and a file, script or program ID resident on that
server. Data that is requested is embedded in an XML object and sent to the Web server. The
Web server responds and also embeds data in XML objects. Then the data is sent back to the
IP phone with the HTTP protocol. The IP phone understands and is capable of parsing only
supported XML objects which are intended for displaying the dynamic or static content
(Deel, D. & Nelson, M., 2004, Cisco Developer Community – Resources, 2011).
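As a minimal sketch of the server side of this exchange (our illustration, not taken from the
cited sources), the following PHP script answers the phone's HTTP GET with a
CiscoIPPhoneText object; the file name, title and text are arbitrary placeholders:

<?php
// Minimal IP phone service: the phone firmware sends an HTTP GET to this URL
// and expects one of the supported XML objects in the response body.
header("Content-type: text/xml");              // tell the phone to parse XML, not plain text
$text = "Hello from the Web server";           // dynamic content would be computed here
echo "<CiscoIPPhoneText>\n";
echo "  <Title>Demo service</Title>\n";
echo "  <Prompt>Static text example</Prompt>\n";
echo "  <Text>" . htmlspecialchars($text) . "</Text>\n";
echo "</CiscoIPPhoneText>\n";
?>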
5.1.1 Choosing a Web server and the programming language
There is an open choice of Web servers and Web programming languages. The IP phone
understands only XML, and the applications are processed on the server; this is why we
need to select a Web server and a compatible programming language. If we decide on
Microsoft IIS, then the optimal programming languages to use are C# and ASP. If we decide
on a Java server, the Java language is the most suitable. If we decide on Apache, then
scripting languages, e.g. PHP, Perl or JavaScript, are the advised programming
tools (Deel, D. & Nelson, M., 2004).

For the introduced test-bed we decided to use the Apache Web server with the PHP
scripting language. PHP enables a programmer to accomplish much work with a small
amount of code, making the writing of applications faster. In addition, with the implementation
of a MySQL database, one has the capability of saving data from the IP phones into a database or
displaying data from the database on the IP phones (Deel, D. & Nelson, M., 2004, Gilmore,
W. J., 2010). That enables a far broader spectrum of applications to be designed. Apache is
the most popular Web server and it is also open source. It represents approximately 60% of
all the active Web servers implemented on the internet (Apache Usage Statistics, 2011).
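To indicate how the database is typically used from such a script, the next sketch stores a
value submitted by an IP phone into MySQL; the database name, table, credentials and the
query parameter are illustrative assumptions, not part of our configuration:

<?php
// Store a value submitted from an IP phone (e.g. from an input object) into MySQL.
$pdo = new PDO('mysql:host=localhost;dbname=ipphone', 'user', 'password');   // assumed credentials
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$answer = isset($_GET['answer']) ? $_GET['answer'] : '';   // value sent by the phone as a query parameter
$stmt = $pdo->prepare('INSERT INTO poll_answers (answer, created_at) VALUES (?, NOW())');
$stmt->execute(array($answer));
// Acknowledge the input on the phone display.
header("Content-type: text/xml");
echo "<CiscoIPPhoneText><Title>Poll</Title><Text>Answer stored</Text></CiscoIPPhoneText>";
?>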
5.1.2 Web server usage options
Applications can also communicate with distributed Web services on the internet from the
local Web server. A Web server can request content from the server on the internet. That
server responds with the requested data. The Web server then processes the received
information and sends it in XML form to the IP phones. Examples of such applications are
stock market information, weather information and even Google maps (Cisco Developer
Community – Resources, 2011).
The Web server is the core unit of applications development but also involves many
distributed processes. These processes can be database queries, other server queries or even
a connection to other users and multimedia devices – the so called backend processing on
the server. Only the simplest applications communicate solely with the development Web
server (Deel, D. & Nelson, M., 2004).
5.2 The IP phone and application server interaction
There are three major topics to emphasize in communications between the server and the IP
phone: The HTTP Protocol, the customized XML language and the applications on the IP
PBX.
5.2.1 The HTTP protocol
IP phones use an HTTP client, which communicates with the HTTP server. Nevertheless, the
IP phone does not operate the same way as a Web browser, because it is not capable of
processing demanding, complex Web pages built with HTML; this is also the reason why it
does not understand the HTML language. The IP phone is limited by its lack of memory
and processor resources, which is common to all embedded systems. The IP phone also has
an embedded HTTP server for sending information about the configuration, firmware,
name and status of the device. For more advanced functions it uses CGI, which enables an
external program to change the configuration on the IP phone. Both the client and the server
embedded in the IP phone use HTTP version 1.1 for communication with other entities in
the network (Cisco Systems, 2011a).
The communication between the IP phone and the server is accomplished with HTTP
messages. HTTP requests are sent by the phone. The request includes a header, a method, a
request body, and status messages about the capabilities of the phone. The HTTP response is
sent back to the phone with similar content. The IP phone supports only a few of the huge
variety of HTTP headers, because it is not capable of processing them all; it
would be too difficult to implement that variety in the embedded system. The IP phone uses
solely the HTTP GET method. The request carries a URI, which is a path to the folder, file or
script on the server that returns static or dynamic content, as
depicted in Fig. 5.

Fig. 5. HTTP message exchange between the server and IP phone
We also have to mention the headers that the IP phone can read:
- Content-type: MIME type of data, so the phone knows what to process and parse. This
has to be set to Text/Xml so the phone starts to parse XML objects and correctly
displays them.
- Refresh: Page refresh in seconds. Enables the phone to refresh the application, manually
or automatically.
- Expires: Expiration of the requested URL. With this header one determines how long
certain content is available. The IP phone also has a history stack, which can store up to ten
pages; with this header the history can also be erased.
- Location: Intended for redirection.
- Set-Cookie: Saved information about the session. When the phone uses the application
again, the Web server can recognize that particular IP phone.
- Accept: Informs the Web server about the capabilities of the IP phone, its language and
charset.
One can define or read header parameters in the Web programming language code (Deel, D.
& Nelson, M., 2004).
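For illustration, a minimal sketch of how some of these headers can be set from a PHP script
follows (the values shown are examples only):

<?php
header("Content-type: text/xml");   // mandatory, otherwise the phone treats the response as plain text
header("Refresh: 5");               // ask the phone to re-request the page every 5 seconds
header("Expires: -1");              // mark the content as already expired (see the Expires header above)
?>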

5.2.2 The IP phone customized XML language
The XML language is used to display the content on the IP phones. It is a hierarchical
language: it has one main element and many sub-elements with different attributes. The XML
objects are predefined and proprietary to the IP phone. Inside the XML object there is dynamic content
which is processed on the server and sent to the phone via the IP network. The Web server
does all the operational work related to the application. The content of the application is
packed into XML objects and sent to the phone, which parses and displays the content.
One can only use predefined proprietary XML objects, tags and attributes. Those predefined
elements are the only elements the IP phone can parse. They are defined in the XML
schema, which is integrated with the phone through its firmware. This schema is defined by
the vendor (Cisco) and cannot be changed by the user. The schema is changed and updated
regularly by the vendor: with phone firmware updates, new functionalities are added and
new ways of validating the XML become possible. The rules and a dictionary of the behaviour
and display of the objects are defined in the XML schema. Therefore, an overall knowledge of
XML is not necessary for the development of applications on Cisco IP phones; only knowledge
of the rules and of the Cisco custom XML objects is needed. XML objects are an envelope for the
content of the applications (produced by the Web-based programming language).
The usage of objects is limited by the phone model: some models do not
accept all the available objects; they have a limited scope of functionality (Cisco Systems,
2011a). The server must have the “text/xml” MIME type enabled, otherwise the IP
phone will display just plain text. URLs and URIs are used to navigate the files on the
Web server.
The IP phone also has soft keys, with which we normally navigate through the application.
They use proprietary XML tags to describe their functionality. With properly designed soft
keys we enhance the user experience: we can set the order, the function and the text they show.
Every Cisco XML object also has its own default soft keys defined, which can be arbitrarily
altered. An internal URI can be used on a soft key; such URIs invoke various functions already
built into the phone, for example Dial or Transfer Call. An external URI is actually a URL on a
Web server. Soft keys are defined inside the Cisco XML objects.
All Cisco XML objects, elements, tags and attributes, their definition and usage, are
described in the references (Deel, D. & Nelson, 2004; Cisco Systems, 2011a). It is not our
intention to describe them explicitly, as that would exceed the scope of this chapter.
5.2.3 Enabling applications on the IP PBX
To make the application available to the IP phones we have to configure the IP PBX
correctly. The IP-PBX serves as a proxy between the IP phone and the server and also sends all
parameters to the phone; the phone cannot be configured directly. The IP-PBX is managed with
the proprietary Call Manager software package. All one has to do is determine the path to
the Web server where the application resides: the URL services parameter has to be changed in
the Call Manager configuration. The Call Manager then sends this parameter to the IP phone,
which then knows where to find the developed applications. The IP-PBX is also important
because the IP phones and their users are registered to it; by registering, they get an IP address
and consequently a connection to the IP network. With the determination of the URL one is not
limited to the local network (the introduced network schema) but can choose any Web
server on the internet that hosts appropriate applications. The only condition is that the IP-
PBX is connected to the internet. That means that one can share the locally resident
applications with any Cisco IP-PBX and Cisco IP phones on the internet.
Only a single URL can be written into the URL services parameter, so the phone would have
access to just that particular application. To solve this problem we created a menu with the
Cisco XML IPPhoneMenu object, in which we defined many URLs that point to different
applications in different locations; the user can therefore choose between different
applications (UC500.com, 2011; Cisco Systems, 2011), as sketched below.
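A possible shape of such a menu object (with placeholder names and URLs; the server
address follows the example used later in this chapter) is:

<CiscoIPPhoneMenu>
  <Title>Services</Title>
  <Prompt>Select an application</Prompt>
  <MenuItem>
    <Name>Video surveillance</Name>
    <URL>http://192.168.10.11/cam1.php</URL>
  </MenuItem>
  <MenuItem>
    <Name>Opinion poll</Name>
    <URL>http://192.168.10.11/poll.php</URL>
  </MenuItem>
</CiscoIPPhoneMenu>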
5.3 The service architecture
As already mentioned, a Cisco IP phone service does not run locally on the IP phone,
although it could seem so. The phone is actually reduced to an I/O device, being used to render
screens of data and retrieve input from the user. The main processing usually takes place on
a Web server where the applications reside; such a server can be located locally or anywhere
on the Internet. Most IP Phone services are centralized on a Web server, but generally
involve many distributed processes. The “locally positioned” Web server receives the request
from the IP phone and generally does some local parsing.
Nevertheless it relies on additional components, programs, and processes to do the back-
end work (such as database access, scripting, security, and interfacing with scripts and CGI).
Only the simplest services do all their processing on just one server.
5.4 Developing a phone service
To develop IP phone services for a Cisco IP Telephony environment, one needs access to the
proprietary CallManager application on the suitable networking device (usually a Cisco
router or a unified communication device). With XML over HTTP, one can provide targeted
functionality designed specifically for the device, while still keeping with the desire to
maintain but expand the core phone functionality. XML provides the horsepower through
Cisco IP Phone objects and tags to push the content. Additionally XML is user-readable,
machine-readable, and low-cost, plus it expands the use and value of the phone without
diminishing the phone functionality. Instead, it enhances the phone to the extent that it can
be considered much more than a legacy phone. The key to the concept is XML, a user- and
machine-readable way of structuring and encapsulating data; this simply provides the data
to the phone to use with its existing menu structure.
An application can send simple text to the IP phone, but when developing real services for the
IP phone system, the odds are that XML objects have to be created that the phone understands.
The extra flexibility and power we get with these objects make them almost mandatory. XML
enables the design engineer to provide data to the native functionality of the device. With
tailored sophisticated XML objects including the CiscoIPPhoneGraphicMenu and
CiscoIPPhoneImage enhanced services in IP phones have been enabled.
The main purpose of the introduced VoIP framework (Fig. 4) is the development of IP
phone applications which add extra value and usability. A user does not need a desktop
computer to browse the internet or read local intranet information. Even stock markets and
other dynamically changing data can be checked from an IP phone. Various applications can be
implemented with the use of XML, PHP and JavaScript and can then be tested for
responsiveness and usability. All applications are served from an http server which is
directly or indirectly connected to the UC appliance. One has to configure the IP address
and server folder location on the UC appliance and the applications are ready to be accessed
on IP phones. There are various solutions for the server implementation, most frequently
Microsoft IIS or Apache, but other free PHP servers can be used as well.
As already mentioned, services on IP Phones are applications that run on a dedicated Web
server rather than on the IP Phones, and they can be very useful in a home environment,
branch offices, faculties, etc. The range of possible application domains is very large, from
straightforward applications, e.g. unit conversion, messaging systems, opinion polls, etc., to
more entertaining applications such as quizzes, games, bulletin boards and photo albums.
More serious applications for smoke detection or other alarms can be deployed as well, as can
support and control of other applications.
IP Phones have an HTTP-compliant client that is used for the services and directories
functionality. The HTTP client enables the phone to use a simple standard mechanism for
retrieving data and providing output to and from standard Web servers. Therefore the
method for providing content to the phones is straightforward. All IP Phones services used
in our platform use the client functionality for every request; the client is used to request
URLs and provides the HTTP GET functionality.
The main framework we have used for designing such applications is based on use of XML
i.e. IP Phone XML objects and tags, scripting language PHP and SQL for data management.
The proprietary IP Phone XML objects can be:
- ProprietaryIPPhoneMenu,
- ProprietaryIPPhoneText,
- ProprietaryIPPhoneInput,
- ProprietaryIPPhoneImage,
- ProprietaryIPPhoneError,
- ProprietaryIPPhoneResponse,
- ProprietaryIPPhoneDirectory,
- ProprietaryIPPhoneGraphicMenu,
- ProprietaryIPPhoneIconMenu,
- ProprietaryIPPhoneExecute, etc.
The listed proprietary objects provide functionality to soft keys. When pressed, the
soft key invokes the associated action, with the exceptions of ProprietaryIPPhoneResponse and
ProprietaryIPPhoneExecute (UC500.com, 2011). We have to bear in mind what each of these
objects is capable of and what kind of functionality it can provide, e.g. how many characters
a single ANSI string may contain, how many instances a menu can have, or the maximum
number of pixels in a bitmap. We decided to use PHP (Still, 2005; Holzner, 2007), which is
necessary for more advanced application development. Most dynamic applications need some
data management system to deal with different data; for that we used an open-source database
management system (Gilmore, 2010).
Under the services button on the IP Phone we can install more than one application at the
same time using menus.

6. Designing practical applications
When designing applications, special care must be taken to adapt the application to the right
phone model. This can be done with decision structures in the application code. The phone
firmware can also be helpful for adapting the content to the physical capabilities of the IP
phone. For demonstration purposes we will present a multimedia application of
improvised video surveillance.

Fig. 6. The test environment for video surveillance application
6.1 The video surveillance prototype
The video surveillance is improvised because the IP phone is not capable of streaming video
directly to its display. We have to take snapshots from the Web camera and send them to
the IP phone. Some models of IP phones have video-call functionality built in, but not for
application development purposes (Cisco Systems, 2011b). Many distributed processes are
involved in the operation of the application; these include the IP phone, the IP-PBX, the Web
server, the database and an IP/Web camera (Fig. 6).
For the application we used a Cisco IP phone with color display, which has a 298x168 pix
space for displaying the picture; this is the biggest display area available on Cisco IP phone
models. We have to consider that the phone only displays pictures in PNG format and that the
picture can only have 12 bits of colour depth, which means we have to resize and correctly
alter the picture to display it properly. Because video cannot be streamed to the phone, we
have to take picture snapshots at certain intervals. For that purpose we need a dedicated
program that is able to take a snapshot from the IP/Web camera, resize it and
send it on request to the IP phone. We used two programs for that purpose. The picture can
then be automatically or manually refreshed locally on the phone.

For acquiring snapshot pictures from the Web camera we used an open source program
Fwink. The settings of the program enabled us to take the picture at certain time intervals.
The picture is saved locally on the disk and is overwritten with every new snapshot. The
format of the picture snapshot is JPEG (Lundie, 2011). Then the Web server comes into play:
it converts the picture and sends it to the phone. The server is an Apache server with PHP and
MySQL (Apache Server, 2011). A PHP script does the work of converting the pictures to PNG
format and correctly altering the picture so that it can be displayed on the phone (display capabilities).
Then it sends the picture in XML objects to the IP phone. The MySQL database adds
functionality for the users with the option of saving certain pictures for a later review
(Gilmore, W. J., 2010, UC500.com, 2011).
The main goal of the server is to convert the picture to the expected format, for which the PHP
language is used. PHP needs an extension for a successful conversion; the extension adds new
classes that enhance the capabilities for writing applications. Therefore we used the
ImageMagick software suite. The package enables the creation, editing and conversion of
bitmap pictures; it can read, write and convert more than 100 different picture formats (Still, 2005).
In PHP, ImageMagick is used as an extension through the ImageMagick API (the Imagick class).
We included the extension for PHP and also installed the ImageMagick software suite (Powers,
2008; Holzner, 2007). When selecting the extension, we have to be careful to use the correct DLL
library version for the installed PHP version. With the appropriate implementation, the conversion
of the picture can be done with a few lines of code. That is the positive side of the PHP language:
it gives us the opportunity to write applications quickly and do a lot of processing with a few
lines of code, so PHP offers great leverage (Deel, D. & Nelson, M., 2004). The
actual operation of the application is depicted in Fig. 7.
In the following we describe a code example to give an insight into how applications can be
written. A script can be designed to automatically refresh the pictures on the phone. At the top
of the script we added a header for the refresh.
header("Refresh:3");
First we need to read the content from the folder where Fwink program saves the acquired
pictures from the Web/IP camera.
$image = file_get_contents($campath);

Fig. 7. The operation of the video surveillance application
Then we use the ImageMagick extension. First we create an object, and then we save the
picture in that object. Then we resize and convert the picture.

$im = new Imagick();                                       // create an ImageMagick (Imagick) object
$im->readImageBlob($image);                                // load the JPEG snapshot data read above
$im->resizeImage(296, 166, Imagick::FILTER_LANCZOS, 1);    // resize to fit the phone display area
$im->setImageFormat("png");                                // convert to the PNG format the phone expects
After that the picture is saved back to the disk into a server folder enabling the IP phone to
have access to the acquired picture.
$fp = fopen($imagepath, 'w');
fwrite($fp, $im->getImageBlob());   // write the converted PNG data; the Imagick object itself cannot be written directly
fclose($fp);
Now we have converted and saved the picture on the Web server; next we need to send it
back to the phone. Proprietary XML language objects now come into play, as the phone has
to display the image correctly. We used the CiscoIPPhoneImageFile XML object for that
purpose; this object is used for displaying PNG pictures on IP phones with a colour display.
It is necessary to include a header that informs the phone that the content is in XML
format. Only if that header is included in the script will the phone correctly parse the XML
objects and display the picture.
header("Content-type: text/xml");
We use a “Title” tag for the name of the picture that will be displayed at the top of the
screen, and “Prompt” for displaying information about the picture. With the “LocationX” and
“LocationY” tags we determine the position of the picture on the phone display. With the
“URL” tag we specify where on the Web server the picture to be loaded by the IP phone is
located. Inside our CiscoIPPhoneImageFile XML object we also define soft keys. As already
mentioned, soft keys are created for user interaction with the application: updates, closing the
application, etc. (UC500.com, 2011).
<CiscoIPPhoneImageFile>
<Title>LDIS-G215</Title>
<Prompt>Laboratorij LDIS</Prompt>
<LocationX>0</LocationX>
<LocationY>0</LocationY>
<URL>http://192.168.10.11/image.png</URL>
<SoftKeyItem>
<Name>UPDATE</Name>
<URL>http://192.168.10.11/cam1.php</URL>
<Position>2</Position>
</SoftKeyItem>
<SoftKeyItem>
<Name>BACK</Name>
<URL>SoftKey:Exit</URL>
<Position>5</Position>
</SoftKeyItem>
</CiscoIPPhoneImageFile>
6.2 Application testing
Some tests were executed to determine the capabilities of the introduced improvised video
surveillance application. Multiple parameters had to be considered – the speed of the Web
camera snapshots and the speed of application execution. Additionally we had to consider
the speed of the PHP script – it executes the conversion of the image and also reads/writes
from/to the file. The last but most important factor was the IP phone processing time: we had to
evaluate the time of picture processing and XML object parsing. The IP phone is an
embedded system and is therefore limited in memory and processor power (Deel,
D. & Nelson, M., 2004; Cisco Systems, 2011b). That also means the IP phone cannot refresh a
new picture as fast as a Web browser on a PC would. We had to consider all the limitations
to determine the optimal time values for refreshing the script and had to evaluate the
minimum time interval between the sampled snapshots.
For the test reference we used a 10-second interval between picture snapshots with a 5-second
refresh of the script. This is how we were able to display every picture acquired. With this
reference we also evaluated the time limits of the script execution and picture refresh. In the
following table (Table 1) the results of the performed test, acquired with the packet sniffer
software, are presented (Wireshark, 2011).

Resolution   Average time for loading the picture [s]   Average time of script execution [s]
640x480      2,83                                       0,12
320x240      2,40                                       0,11
Table 1. Average application loading time
The average time of picture loading covers all the delays that contribute to the system. With
the lower resolution the loading time is approximately half a second shorter (0,43 s). From
the column Average time of script execution we see that the conversion itself on the server
adds only a small portion of the time needed to display the picture. It was evident that the
speed of application execution is limited mainly by the IP phone's capability of image
processing.
There is also some unexpected delay in the system. In our examination we concluded that
this is contributed by the TCP protocol used by the upper-layer HTTP. TCP offers
reliable transfer, and consequently for every request from the IP phone a session is
established with the usual three-way handshake. Within the session, acknowledgments
are sent for the segments of the picture exchanged between the Web server and the IP phone.
Additionally, the session has to be closed after the communication ends. Two separate
sessions are established during the transfer of one image from the Web server – the first one
for the request of script execution and the second one for actually transferring a single
picture to the IP phone. That procedure contributes approximately a 0,7 s delay for every
single refresh on the IP phone display.
After all the tests in the introduced test environment we conclude that one can use a
minimum time interval of 5 seconds between the snapshots if every picture has to be
displayed on the phone. In a production environment careful testing is required. Because
the pictures need approximately 3,5 s to load (the average time for loading the picture plus
the TCP delay), a time higher than 3,5 s must be chosen for the refresh rate; that
corresponds to 320x240 resolution pictures. For the higher 640x480 resolution we have
chosen 4 seconds because of the slightly increased load time (0,4 s). If we had chosen a time
lower than 3,5 s, not every snapshot would be displayed; some would be skipped. A new
refresh would then happen before the previous one had finished, and therefore the application
may occasionally crash, because the TCP sessions take too long to complete. Consequently the
application times out (MaFiRa, 2011). The best scenario conducted in our test is shown in the
following table (Table 2).

Resolution   Optimized sampling time [s]   PHP script optimized refresh time [s]
640x480      5                             3,5
320x240      5                             4
Table 2. Optimal time interval settings
If required we can still use lower than optimal time intervals between the snapshots. In that
case not all the sampled pictures will be displayed. Nevertheless more precise movement
detection is possible.
7. Conclusion
Our application works with all types of Web/IP cameras. With the help of an open source
program and the Apache Web server we changed a Web camera into an IP camera.
Our application can be improved in many ways. Because there are many different types of
IP phones available, with different capabilities, we have to adapt the application to the
receiving phone model; for example, some models do not have a colour display. So we can
write a decision structure to choose a different conversion for a different phone type. We
implement that with the help of the headers the IP phone sends when initiating a session with
the Web server.
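A minimal sketch of such a decision structure is given below; the request header name and
the model strings are assumptions made for illustration (the User-Agent header could be
evaluated in the same way), and the dimensions are example values:

<?php
// Choose conversion parameters from the identification the phone sends with its request.
$model = isset($_SERVER['HTTP_X_CISCOIPPHONEMODELNAME'])
       ? $_SERVER['HTTP_X_CISCOIPPHONEMODELNAME'] : '';    // assumed header name
if (strpos($model, '7970') !== false) {
    // colour display: full-size PNG sent as CiscoIPPhoneImageFile
    $width = 298; $height = 168; $object = 'CiscoIPPhoneImageFile';
} else {
    // assume a model without a colour display: smaller bitmap sent as CiscoIPPhoneImage
    $width = 133; $height = 65;  $object = 'CiscoIPPhoneImage';
}
?>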
Pictures can also be saved into a database, either by the user or automatically. We can also
make use of the movement detection feature some IP cameras offer so that, for example, an
image only appears on the phone when motion in front of the camera is detected; then only
the changes, with appropriate time tags, are saved with the image to the database.
In the presented application we had to evaluate different values of the snapshot acquisition
interval and the refresh rate to correctly display every picture acquired. With the PHP script
we could synchronize those parameters, which now operate individually.
With that we could achieve shorter intervals for displaying the snapshot images on the IP
phone.
We presented an example of a multimedia application that can be written for the IP phone.
We tried to demonstrate that an IP phone application involves many distributed processes.
The application is intended as a demo and, together with the Web server, IP-PBX and other
equipment in our local IP & VoIP network, forms a basis for an educational approach to
application design. It enables numerous possibilities in application development and proof of
concept for prototype development in business environments.

8. References
Au, D.; Cho, B.; Haridas, R.; Hattingh, C.; Koulagi, R.; Tasker, M. & Xia, L., (2005). Cisco IP
Communications Express: CallManager Express with Cisco Unity Express (1. edition),
Cisco Press, ISBN 1-58705-180-X, Indianapolis, IN 46240 USA
Cisco Systems, (2006). (n.d.). Cisco Unified IP Phone 7970 Series for Cisco Unified CallManager
4.2(3), Document ID: OL-10776-01, 2006. September 2011, Available from:
http://www.cisco.com/en/US/products/hw/phones/ps379/products_user_guid
e_list.html
Cisco Systems, (2011c). (n.d.). Configure and Manage the CUE system auto Attendant,
Document ID: 63986, September 2011, Available from:
http://www.cisco.com/en/US/products/sw/voicesw/ps5520/products_configur
ation_example09186a00803f82eb.shtml
Cisco Systems, (2011). Cisco Unified Communications Manager Express System Administrator
guide, Cisco Press, San Jose, CA 95134-1706 USA, 2011.
Cisco Systems, (2011a). (n.d.). IP phone services –Resources, In: Cisco Developer Community –
Resources, June 2011, Available from:
http://developer.cisco.com/web/ipps/resources
Cisco Systems, (2011b). (n.d.). IP phone services – Forum, In: Cisco Developer Community –
Forums, June 2011, Available from:
http://developer.cisco.com/web/ipps/forums
Deel, D.; Nelson, M. & Smith, A., (2004). Developing Cisco IP Phone Services, ISBN: 978-
1587050602, Cisco Press, Indianapolis, IN 46240
Gilmore, W. J., (2010). Beginning PHP and MySQL: From Novice to Professional, 4th Edition,
ISBN: 978-1430231141, Springer, New York, NY 10013
Hatting, S., Sladden, D., & Swapan, Z.A., (2010). SIP Trunking, Migrating from TDM to IP for
Business Communications, Cisco Press, ISBN 1-58705-944-4, Indianapolis, IN 46240
USA
Holzner, S., (2007). PHP: The Complete Reference, McGraw Hill, ISBN: 978-0071508544, New
York, NY 10121-2298
Kaza, R.; & Asadullah S., (2005). Cisco IP Telephony – Planning, Designing, Operation and
Optimization, Cisco Press, ISBN 1-58705-157-5, Indianapolis, IN 46240 USA
Lundie, C. (2011), Free webcam software, In: Fwink, July 2011, Available from:
http://www.lundie.ca/fwink/
MaFiRaWiki TCP/IP, (n.d.). July 2011, Available from: http://wiki.fmf.uni-lj.si/
wiki/TCP/IP
Powers, S., (2008). Painting the Web. O'Reilly Media, Inc., ISBN: 978-0596515096, Sebastopol,
CA 95472
Still, M., (2005). The Definitive Guide to ImageMagick, 1. Edition, APress, ISBN: 978-
1590595909, Berkeley, CA 94710
The Apache Software Foundation. (n.d.). Apache Usage Statistics, July 2011, Available from:
http://trends.builtwith.com/Web-Server/Apache
The Apache Software Foundation. (n.d.). The Apache HTTP Server project, July 2011,
Available from: http://httpd.apache.org/

UC500.com. (n.d.). Custom XML IP phone services, March 2011, Available from:
http://uc500.com/en/custom-xml-ip-phone-services-uc500-and-cisco-unified-call-
manager-express
Wireshark (2011). (n.d.). Go Deep, July 2011, Available from: http://www.wireshark.org/
Part 5
Physics

9
Application of Radiosity Simulation
Methods for Lighting Researches
Ruzena Kralikova and Katarina Kevicka
Technical University of Košice, Faculty of Mechanical Engineering,
Department of Environmental Studies,
Slovakia
1. Introduction
The lighting of workplaces places the following requirements on the light-technical solution:
- sufficient horizontal and vertical lighting value for a particular type of the work
performed,
- appropriate distribution of brightness in the area,
- suppressing the creation of glare and protecting against it,
- satisfactory psychological impact of the colour of the light and colour of the
administration premises,
- appropriate colour change in the environment,
- stable lighting,
- reasonable uniformity,
- suitable orientation of the impact of light on the desktop. (Smola, 2003)
In compliance with all the quantitative and qualitative parameters of illumination, we must
design a lighting system based on the principles of maximum performance. By selecting a
new generation of lamps, i.e. long life and high efficiency ones, we can economise on
electricity. Lighting systems with streamlined operation, regulation and management of
lighting may also significantly contribute to energy savings (Sillion & Puech, 1994).
2. Methodological procedure of light-technical design
The project of a lighting system is a complex and laborious task that requires not only
technical knowledge, but also knowledge of architecture, production, and the physiology of
vision. The role of the designer is not only to select the type of solution; the task is often
complex and may have a research character, leading to the development and manufacture
of the lighting system, its testing and analysis, and finding the optimum lighting conditions of
the workplace and of the area as a whole.
To develop a quality project of the lighting system, we should have at hand the construction,
technological and sanitary-technical drawings of the object to be lighted, and we should also
be familiar with the technology or the purpose of the premises. In addition to meeting the
quantitative and qualitative parameters of the workplace, the lighting of the area and of its
surroundings should ensure easily observable and fault-free functioning of the lighting
system, comfortable handling of the luminaires, and lighting efficiency. (Smola et al., 2005) The design
of the lighting system is divided into light-technical, electric, and budget sections. The light-
technical part of the interior lighting consists basically of two main parts: technical reports
and the drawing section.
In addition to the documents belonging to the base set of the project documentation, it is
also necessary to produce drawings of the various elements of installation illumination,
drawings of complete assembly nodes, drawings of connections and typical control
components and drawings needed for the implementation of the proposed lighting.
The technical report includes:
- description of the illuminating area,
- demands on visual activity, and thus the determination of the category and work class,
- lighting values,
- qualitative lighting indicators (brightness distribution, direction of light, glare, lighting
durability, colour of light and colour rendering, etc.),
- draft operation and maintenance of the lighting system, choice of lamps, etc.,
- computational methods employed and specific calculations of lighting,
- colour adjustment of the immediate surroundings,
- addressing of auxiliary, security, replacement and emergency lighting,
- proposal for economic recovery.
The drawing section contains:
- floor plans and sections of the lighting installation,
- prescribed lighting values at certain points and prescribed values of the quality parameters,
- electrical distribution, wiring and control of the lighting systems,
- deployment of the luminaires, their specifications and the type of the light sources,
- isoline diagrams and the marking of the control points at which glare was assessed.
3. Modelling of light-technical parameters
In the past, there existed three basic types of light-technical models (Budak et al., 2006):
- calculation (without taking into account the actual dimensions, by means of tables),
- accurate (models at the 1:1 scale),
- mock-ups that generate a display similar to visual perception of the lighting system
designed.
Currently, a different approach is applied in the light-technical modeling, which is based on
computer visualization of the spatial scenes of the lighting system designed. With computer
visualization, whose goal is photo-realistic imagining, the propagation of light in space is
often described in detail and simulated. Modern visualization programs can reproduce
brightness, colour and surface structures of complex three-dimensional spaces in a quite
realistic way, since the calculations include inter-reflection of light between various surfaces
in space and quite a number of optical effects arising in daylight, in artificial or joint
lighting. Simulation methods are based on classical optical, thermodynamic, or light-
technical models of the spread of radiation (Tilinger & Madár, 2008).
3.1 Simulation methods
There exist two basic methods employed in computer simulations of the light environment,
namely the Monte Carlo method, which applies the technique of tracing light rays (ray
tracing is the name used for following the rays; the term "ray casting" is also used when a
ray of light is sent out from the light source), and the radiation method (also called
radiosity). From a physical point of view, both methods are similar; the
difference lies in algorithmization.
3.1.1 The Monte Carlo simulation method and the calculation of direct and indirect
lighting
Initially, only specular reflections of light were considered; probability calculations were
subsequently applied to the other components of illumination. The
stochastic (probability) method of light calculation, often referred to as the Monte Carlo
method, is conveniently applied in furnished rooms with surfaces that have different optical
properties. In general, this method is one of the operational methods of research used for the
simulation of technical, economic, and social situations (Rybár, et al. 2001). There exists a
number of variants of this method.
In general, these methods employ a large number of randomly cast light rays or energy
bearing particles. Their movement in the area is subject to physical laws and is monitored. A
completely accurate calculation can only be made if the path of each photon can be
followed, which, of course, is impractical for a number of reasons. However, if a sufficient
number of rays (particles), e.g. 50 million, is accidentally sent out, the calculation of the
lighting capacity will also correspond to high demands for accuracy. If the propagation of
light is monitored from the source to the environment, one usually talks about the method
of monitoring the particles.
In terms of computer graphics, tracing rays from the light source to the observer's eye or
camera lens is onerous: a large number of rays are "lost" before they reach the observer's eye
(Smola et al., 2005). Therefore, a frequently used method is to trace the rays along the path
from the observer to the light source.
In this way, the algorithms take into account the particles that are mostly involved in the
lighting of the scene as seen by observers. In this case, lighting of a place is proportionately
dependent on the number of light particles which hit it, and on the density of luminous flux
carried by each of these particles.
In the method of back tracing the rays, a virtual ray of light is cast in the direction of the
observer through each of the imagining points on the display screen (pixels), and its
intersection is tested along with all the objects in that space. Rays are cast in the direction of
the light source to determine whether a visible place is overshadowed by an object. If the
object surface is shiny, it mirrors the reflection of the primary ray. If the surface is
transparent, rays are created, representing light reflection and refraction according to optical
properties of the transparent material. If the surface is non-transparent, rays are generated
(often more than 100) mimicking the light reflection from the surface concerned.
In the case that the location of the intersection of the primary ray with a certain object in
space is illuminated by any of the light sources (or a mirror reflection of a certain material),
its lighting or brightness is calculated. The term of direct lighting is employed in computer
graphics for this lighting in contrast to the overall lighting containing the contribution of the
reflected light, which in this field of science is called global lighting.
The nearest intersection is determined for each secondary ray, and the process is repeated
until the ray leaves the space, or until the amount of light (or brightness) represented by the
imaginary ray falls below the selected value. In some of the algorithms, the ray is monitored
until it is returned in the eye of the virtual observer, or only a specified number of
reflections is considered. In this way, the geometry of the space is modelled simultaneously
with its synthetic (colour) imagining. Maps of direct and overall lighting are stored in the
computer memory, which are further processed to achieve a smooth transition of shadows,
in order to describe optical phenomena, among other things. In principle, the ray tracing
technique solves the following integral equation (1) for the energy balance of each surface
element in space (Rybár, et al. 2001).
\[
L_r(\theta_r,\varphi_r) = L_e(\theta_r,\varphi_r)
+ \iint L_i(\theta_i,\varphi_i)\,\rho_{bd}(\theta_i,\varphi_i,\theta_r,\varphi_r)\,
\cos\theta_i \sin\theta_i \, d\theta_i \, d\varphi_i
\qquad (1)
\]
where:
θ - polar angle measured from the surface normal,
φ - azimuthal angle around the surface normal,
L_e(θ_r, φ_r) - the surface's own radiation (the surface as a primary source of radiation) [W·sr⁻¹·m⁻²],
L_r(θ_r, φ_r) - the total radiation [W·sr⁻¹·m⁻²],
L_i(θ_i, φ_i) - incident radiation [W·sr⁻¹·m⁻²],
ρ_bd(θ_i, φ_i, θ_r, φ_r) - bidirectional function of the reflectivity distribution [sr⁻¹].
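A useful special case, added here for orientation (it follows from standard radiometry and is
not part of the cited derivation): for an ideal diffuse (Lambertian) surface the reflectivity
distribution function is constant, ρ_bd = ρ/π, and equation (1) reduces to

\[
L_r = L_e + \frac{\rho}{\pi}\iint L_i(\theta_i,\varphi_i)\,\cos\theta_i \sin\theta_i \, d\theta_i \, d\varphi_i
= L_e + \frac{\rho}{\pi}\,E ,
\]

where E is the irradiance of the surface. This is precisely the assumption under which the
radiation (radiosity) method of the next section operates.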
3.1.2 Radiation methods and radiation equation
Although the ray tracing algorithm produces perfect results in modeling mirror
reflectivity and non-dispersive refractive transparency, the algorithm has a shortcoming;
specifically, it does not take into account the physical laws behind some important visual
effects, for example the colouring of a shadow by light reflected from another
object. This is due to the fact that ray tracing only follows a finite number of rays emanating
from the observer's eye. The radiation method attempts to remove this shortcoming.
(Baum & Winget, 1990).
The radiation method may be seen as a certain generalization of the method of monitoring
the ray. This method assumes that all the surfaces are ideal diffuse primary or secondary
light sources or a combination of the given types of sources. The advantage of this method
in terms of visualization and algorithm development is that the surfaces lighting is
calculated independently from the direction of view on the simulated scene (Sillion &
Puech, 1989).

The radiation method is based on the principles of the spread of light energy and the energy
balance. Unlike conventional rendering algorithms, this method first determines all the
mutual light interactions in space from various independent perspectives. Then one or more
perspectives are calculated by defining a visible surface and interpolation shading.
In the algorithm of shading, the light sources have always been considered independently
from the surfaces that are lighted. In contrast to the above, the radiation method allows any
surface to emit light, i.e., all the light sources are modeled naturally as active surfaces.
Consider the environment divided into a finite number n of discrete surfaces
(patches), each of which has a finite size and emits and reflects light evenly
across its surface. The scene then consists of surfaces acting both as light sources and
reflective surfaces creating a closed system. If we consider each of the surfaces as an opaque
Lambertian diffuse emitter and reflector, then the following equation applies for the surface
due to energy conservation (2):

\[
B_i = E_i + \rho_i \sum_{1 \le j \le n} B_j F_{j-i}\,\frac{A_j}{A_i}
\qquad (2)
\]
where:
B_i, B_j - radiosities (intensities of radiation) of surfaces i and j, measured in units of energy per unit of surface [W·m⁻²],
E_i - power of light emitted from surface i, with the same dimension as the radiosity,
ρ_i - the reflection coefficient (reflectivity) of surface i, dimensionless,
F_{j-i} - dimensionless configuration factor (form factor), which specifies the fraction of the energy leaving surface j that arrives at surface i, taking into account the shape and relative orientation of both surfaces, as well as the presence of any surfaces that could create an obstacle. The configuration factor takes its values from the interval <0,1>; for fully obstructed surfaces it takes the value 0,
A_i, A_j - areas of surfaces i and j.
Equation (2) shows that the energy leaving a unit area of a surface is the sum of the light
emitted and the light reflected. The reflected light is calculated as the product of the
reflection coefficient and the sum of the incident light. Conversely, the incident light is the
sum of the light leaving all the other surfaces, reduced to the fraction that reaches a unit
area of the receiving surface. B_j F_{j-i} is the amount of light leaving a unit area of surface
A_j that is incident on the whole of surface A_i. It is therefore necessary to multiply the
equation by the ratio A_j/A_i to determine the light leaving the entire surface A_j and
incident on the entire surface A_i (Cohen & Greenberg, 1985). A simple relationship
between the configuration factors is valid in a diffuse medium:

\[
A_i F_{i-j} = A_j F_{j-i} \qquad (3)
\]
By simplifying equation (2) using equation (3) we obtain the equation:

\[
B_i = E_i + \rho_i \sum_{1 \le j \le n} B_j F_{i-j} \qquad (4)
\]

By subsequent treatment we get the equation in the form:

\[
B_i - \rho_i \sum_{1 \le j \le n} B_j F_{i-j} = E_i \qquad (5)
\]
Interaction of light between the surfaces may be expressed in the matrix form (Sillion &
Puech, 1989):

\[
\begin{bmatrix}
1-\rho_1 F_{1-1} & -\rho_1 F_{1-2} & \cdots & -\rho_1 F_{1-n} \\
-\rho_2 F_{2-1} & 1-\rho_2 F_{2-2} & \cdots & -\rho_2 F_{2-n} \\
\vdots & \vdots & \ddots & \vdots \\
-\rho_n F_{n-1} & -\rho_n F_{n-2} & \cdots & 1-\rho_n F_{n-n}
\end{bmatrix}
\begin{bmatrix} B_1 \\ B_2 \\ \vdots \\ B_n \end{bmatrix}
=
\begin{bmatrix} E_1 \\ E_2 \\ \vdots \\ E_n \end{bmatrix}
\qquad (6)
\]
Note that the contribution of a part of a surface to its own reflected energy (the surface may be
hollow or concave) must be taken into account. Thus, in general, each term on the diagonal need
not be equal to 1. Equation (6) must be solved for each group of wavelengths of light
in the model, since ρ_i and E_i depend on the wavelength. Form factors are independent of
wavelength and are solely a function of geometry; therefore, they need not be recalculated if
the surface reflectivity or illumination changes. Equation (6) may be solved by employing the
Gauss-Seidel method, obtaining the radiosity of each surface. For radiosity methods to
become practical, it was necessary to start calculating form factors for occluded surfaces as well.
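As a minimal illustration (not taken from the chapter), the linear system (6) can be solved iteratively in the spirit of the Gauss-Seidel method mentioned above. The sketch below assumes a small scene whose form-factor matrix F, reflectivities ρ and emissions E are already known; the patch data are hypothetical:

```python
import numpy as np

def solve_radiosity(E, rho, F, iterations=50):
    """Gauss-Seidel-style iteration for the radiosity system (6):
    B_i - rho_i * sum_j F_{i-j} * B_j = E_i.

    E   -- emitted radiosities, shape (n,)
    rho -- reflectivities, shape (n,)
    F   -- form-factor matrix, F[i, j] = F_{i-j}, shape (n, n)
    """
    B = np.array(E, dtype=float)            # start from the emission term
    for _ in range(iterations):
        for i in range(len(B)):
            # the gathered light uses the most recently updated values of B
            B[i] = E[i] + rho[i] * np.dot(F[i], B)
    return B

# Two facing patches, each seeing half of the other's hemisphere (hypothetical numbers):
E = [10.0, 0.0]
rho = [0.5, 0.8]
F = np.array([[0.0, 0.5],
              [0.5, 0.0]])
print(solve_radiosity(E, rho, F))
```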
Cohen and Greenberg proposed the following method for distributing the patch radiosities to
the vertices. If a vertex lies in the interior of a surface, it is assigned the average radiosity of
the patches that share it. If a vertex lies on an edge, the nearest interior vertex v is found; the
radiosity of the edge vertex, averaged with B_v, should equal the average radiosity of the
patches sharing that edge vertex.
Consider the patches in Fig. 1. The radiosity of the interior vertex e is B_e = (B_1 + B_2 + B_3 + B_4)/4.
The radiosity of the edge vertex b is calculated by finding the nearest interior vertex e and
noting that b is shared by patches 1 and 2; the calculation therefore uses (B_b + B_e)/2 = (B_1 + B_2)/2,
whose solution is B_b = B_1 + B_2 − B_e. The interior vertex closest to the corner vertex a is also e,
and a belongs only to patch 1; therefore, since (B_a + B_e)/2 = B_1, we get B_a = 2B_1 − B_e.
The radiosities of the other vertices are calculated similarly (Cohen et al., 1985).

Fig. 1. Elementary surface area of the object
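A small sketch of this vertex-averaging rule, following the reading of the passage above and using the patch labels of Fig. 1, might look as follows (a hypothetical helper, not from the chapter):

```python
def vertex_radiosities(B1, B2, B3, B4):
    """Vertex radiosities around the interior vertex e of Fig. 1.

    B1..B4 -- patch radiosities of the four patches sharing vertex e.
    Returns (B_e, B_b, B_a): the interior vertex, the edge vertex shared by
    patches 1 and 2, and the corner vertex belonging only to patch 1.
    """
    B_e = (B1 + B2 + B3 + B4) / 4.0   # interior vertex: average of its patches
    B_b = B1 + B2 - B_e               # edge vertex: (B_b + B_e)/2 = (B1 + B2)/2
    B_a = 2.0 * B1 - B_e              # corner vertex: (B_a + B_e)/2 = B1
    return B_e, B_b, B_a

print(vertex_radiosities(40.0, 30.0, 20.0, 10.0))   # (25.0, 45.0, 55.0)
```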

The first radiosity method was introduced by Goral (Goral et al., 1984), who used contour
integrals to calculate the form factors for environments of convex surfaces without
occlusion.
In the picture (see Figure 2) the effect of "color bleeding", caused by diffuse reflection
between adjacent surfaces, is visible in the rendered image of the model. A diffuse surface is
tinted by the colors reflected from other diffuse surfaces. For radiosity methods to become
practical, form factors also had to be calculated for occluded surfaces.


Fig. 2. a – original cube with six sides (front side not shown) that we want to model, b – image
rendered with 49 patches using constant shading, c – image rendered with 49 patches using
interpolation shading
3.2 Form-factor calculation
To find the form factor, we must find the fractional contribution that a single patch makes
upon another patch. This term is purely geometric, related only to the size, orientation,
distance, and visibility between the two patches. The basic geometry for the form factor
calculation is shown in Fig. 3.

Fig. 3. Form-factor geometry

Fig. 4. Projected area onto the hemisphere
If we look at Fig. 4, we see that the area A is related to the projected area A_p by
A_p = A·cos θ, and the contribution of the projected area A_p is related to the solid angle by (7):

\[
\omega = \frac{A_p}{r^2} \qquad (7)
\]
The expression relating the contribution from one infinitesimal area to another is:

\[
F_{dA_i - dA_j} = \frac{\cos\varphi_i \, \cos\varphi_j}{\pi r^2}\, dA_j \qquad (8)
\]
The contribution from the infinitesimal area to the finite area is found by integrating over
the receiving area:

\[
F_{dA_i - A_j} = \int_{A_j} \frac{\cos\varphi_i \, \cos\varphi_j}{\pi r^2}\, dA_j \qquad (9)
\]
And from a finite patch to another finite patch, we take the area average of the previous
equation:

\[
F_{A_i - A_j} = \frac{1}{A_i} \int_{A_i}\int_{A_j} \frac{\cos\varphi_i \, \cos\varphi_j}{\pi r^2}\, dA_j \, dA_i \qquad (10)
\]
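As a rough, hypothetical illustration of equation (10), the double integral can be estimated by Monte Carlo sampling of point pairs on the two patches. The sketch below assumes the classical test case of two directly opposed, parallel unit squares at distance d, for which the form factor is known to be about 0.2 at d = 1:

```python
import math, random

def form_factor_mc(d=1.0, n_samples=200_000):
    """Monte Carlo estimate of equation (10) for two parallel, directly
    opposed unit squares separated by distance d (a hypothetical test case).
    Both normals point at each other, so cos(phi) = d / r for both patches."""
    area_j = 1.0
    acc = 0.0
    for _ in range(n_samples):
        xi, yi = random.random(), random.random()   # point on patch i (z = 0)
        xj, yj = random.random(), random.random()   # point on patch j (z = d)
        r2 = (xi - xj) ** 2 + (yi - yj) ** 2 + d ** 2
        cos_i = cos_j = d / math.sqrt(r2)
        acc += cos_i * cos_j / (math.pi * r2)
    return area_j * acc / n_samples

print(form_factor_mc())   # roughly 0.20 for unit squares at unit distance
```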
There are several different methods for evaluating this integral. The contour integral form is
found by transforming the double area integral using Stokes' theorem (Goral et al., 1984):

\[
F_{A_i - A_j} = \frac{1}{A_i} \oint\oint \big[\ln(r)\, dx_i\, dx_j + \ln(r)\, dy_i\, dy_j + \ln(r)\, dz_i\, dz_j\big] \qquad (11)
\]
where r is the distance between the differential elements of the two contours. One limitation
of this algorithm is that it does not take into account the visibility between one patch and
another; another limitation is that it is extremely expensive computationally. Baum and
Winget (Baum & Winget, 1990) also use an analytical approach to find form factors. They
integrate the outer integral

numerically, while integrating the inner integral analytically by converting it into a contour
integral. They then calculate the contour integral by piecewise summation, from Fig. 5.

\[
F_{dA_j - A_i} = \frac{1}{2\pi} \sum_{g \in G_i} N_j \cdot \Gamma_g \qquad (12)
\]
where:
G_i – the set of edges of surface i,
N_j – the surface normal of the differential surface j,
Γ_g – a vector with magnitude equal to the angle gamma illustrated in Fig. 3 and direction given by the cross product of the vectors R_g and R_{g+1}.
Baum and Winget use this approach when evaluation by the more efficient hemi-cube
method, described in the following sub-section, is geometrically inappropriate for
form-factor evaluation. They also incorporate an extra term to account for the visibility
between surfaces (Baum & Winget, 1990).

Fig. 5. Geometry for contour integral
3.3 Hemi-cube evaluation
The hemi-cube approach for evaluating form factors was introduced by Cohen (Cohen et
al., 1985). It is motivated by examining the geometry of Nusselt's Analog shown in Fig. 6.
First, a patch is projected onto the hemisphere surrounding the receiving patch. This
projection accounts for the cosine of the angle at the projected patch as well as for one 1/r
term. The projected patch is then projected onto the base of the hemisphere, accounting for
another 1/r and cosine term. The resulting area at the base, relative to the area of the base
circle, gives the form factor. Many patches project onto the same area on the hemisphere,
and several of these lend themselves to calculation in a more straightforward fashion; see
Fig. 7.
Instead of projecting the environment onto a hemisphere, a cube is placed with the receiving
patch at its center (see Fig. 8) and each face of the cube is divided into a set number of
pixels. The contribution of each pixel on the cube's surface to the form-factor value can be
precalculated, since it depends only on the pixel location and orientation. The environment
is then projected onto the faces of the cube (one full top face and four half faces). Object
visibility can be determined by using simple z-buffer techniques.

Fig. 6. Nusselt's Analog

Fig. 7. A, B, C, D & E all have the same form factor

Fig. 8. The hemi-cube projection
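A short sketch of these precalculated per-pixel ("delta") form factors is given below. It uses the standard geometric formulas for a hemi-cube of half-width 1 (one top face and four half side faces); the resolution and the final check that the deltas sum to approximately 1 are illustrative assumptions, not taken from the chapter:

```python
import math

def hemicube_delta_form_factors(res=128):
    """Per-pixel ('delta') form factors for a hemi-cube of half-width 1:
    one full top face at z = 1 and four half side faces of height 1.
    The sum over all pixels of all faces should be close to 1."""
    dA = (2.0 / res) ** 2                       # pixel area, identical on all faces
    top, side = [], []
    for a in range(res):
        for b in range(res):
            x = -1.0 + (a + 0.5) * 2.0 / res
            y = -1.0 + (b + 0.5) * 2.0 / res
            # top-face pixel at (x, y, 1)
            top.append(dA / (math.pi * (x * x + y * y + 1.0) ** 2))
    for a in range(res):
        for b in range(res // 2):
            x = -1.0 + (a + 0.5) * 2.0 / res
            z = (b + 0.5) * 2.0 / res           # height above the patch plane
            # side-face pixel at (x, 1, z)
            side.append(z * dA / (math.pi * (x * x + z * z + 1.0) ** 2))
    return top, side

top, side = hemicube_delta_form_factors()
print(sum(top) + 4 * sum(side))                 # close to 1.0
```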
4. Outputs from the proposal of lighting system
Currently, computer graphics software products exist that enable a comprehensive design
and calculation of the parameters of lighting systems, reflecting the light effects that arise in
artificial lighting and daylighting.
These computer programs are able to calculate and visualize daylight and lighting scenes,
plan the colour and intensity of the lights, position emergency lighting on the project with
the legally required number of luminaires, and prepare photorealistic visualizations of the
lighting design. Furniture, surfaces and luminaires can be placed by simply dragging and
dropping elements from the provided libraries. For better realism, the programs can use
different textures and furniture models, together with an integrated ray tracing or radiosity
module. As a consequence, several light-technical programs with different purposes and
uses have appeared on the market. In principle, these computer programs can be divided
into two basic groups (Smola et al., 2005):
- calculation programs – the result consists of lighting parameters (Dialux, Relux, Europic,
Calculux, WinLuxus, Wils, etc.),
- visualization programs – the result consists of visualizations, i.e. figures of the lighting (3D
Studio, Catia, AutoCAD, LightScape, Corel Photo-Paint, etc.).
These programs are designed for the calculation of light-technical parameters and for the
presentation of projects, and usually offer the following program modules for lighting design:
- interior lighting, utilisation factor method,
- interior lighting, point by point calculation,
- interior lighting, direct glare (UGR calculation),
- interior lighting, glare by reflection on visual display terminals,
- exterior lighting, area lighting,
- exterior lighting, street lighting.
For the purposes of this contribution, the possibilities of simulation outputs are illustrated
with DIALux 4.7 (Fig. 9) and Relux (Fig. 10). These simulation programs offer various
options for the selected lighting system and for the presentation of results: graphs of values,
isoline maps, light maps with a colour scale, false colour rendering, summary tables of
illuminance or luminance, a three-dimensional model of illuminance or luminance,
economic evaluation of the lighting project in terms of energy consumption, visualization of
sunshine, and so on (Krupa, 2005).
Figure 11 shows isoline maps, which display equal values of illuminance measured in the
vicinity of a luminaire. With DIALux the user has the option to display the 3D rendering
in a false colour rendering presentation (Fig. 12). The presentation of illuminance and
luminance with freely scalable value ranges and definable colour gradients is now
available.




Fig. 9. Display of Dialux







Fig. 10. Display of Relux


Fig. 11. Isoline map (Dialux)


Fig. 12. Layout of false colour rendering (Dialux)
Fig. 13 shows a 3D light distribution curve (LDC). This function is useful for checking the
correct placement of luminaires with an asymmetrical distribution. Figure 14 shows the
layout of the false colour rendering and a detail of the light distribution curve.

Fig. 13. Detail of light distribution curve (LDC) (Dialux)


Fig. 14. False colour rendering and LDC (Dialux)

Relux offers a ray tracing calculation (Fig. 15) which is based on a version of Radiance that
has been revised by Relux. This verified method, which has been validated worldwide, is
noted for its accurate calculation results.


Fig. 15. Raytracing calculation in Relux
5. Conclusion
In terms of the quantity of information, a person registers 80% to 95% of all information
visually at work. The primary role in creating the work environment is to ensure optimal
conditions of vision and a safe working environment. Visibility must therefore be seen as a
precondition for the performance of high-quality, safe and reliable work operations, and it is
necessary to pay close attention to this issue. When dealing with light-technical projects, the
visualization of lighting parameters is a useful tool, using programmes that realistically
display the lighting parameters.
Despite the numerous possibilities that current software tools offer, in some cases there is a
difference between the modelled and the actual light-technical parameters. One of the reasons
affecting the result of the computer output may be an inadequate definition of certain inputs
(the colour shades and quality of the room's surfaces, the lighting effects of the scattering
characteristics of light sources, etc.). However, these differences do not affect the overall
relevance of the computer outputs and may be virtually eliminated by qualified estimation.
The current development of computer technology has also influenced the area of
light-technical design. The wide range of computer software that is available enables the
efficient generation of lighting projects and the visualization of photometric and lighting
parameters. Calculations of these parameters are becoming commonplace, without the
laborious and lengthy hand calculations of the past. These programs, based on the point and
flux methods, use several simulation methods to model light-technical parameters
(described in this paper). They also allow the creation of models of various scenes, by which
it is possible to model different variants of each situation and subsequently evaluate them.
New trends of research in this area will continue to focus on the impact of the colour
appearance of light on humans, with emphasis on biologically effective lighting systems
that positively influence human well-being. In connection with the application of
information technology in lighting research, developments in the areas of photorealism,
interactivity and dynamic visualization of illuminance are expected, allowing the variability
of light-technical parameters to be monitored depending on the current environment and
time.
6. Acknowledgment
The chapter was prepared at the Department of Environmentalistics of the Faculty of
Mechanical Engineering, Technical University of Košice, Slovak Republic.
This contribution was elaborated within the project KEGA No. 3/7426/09 "Physical factors
of the environment – valuation and assessment" and KEGA No. 3/7422/09 "Creating of
research conditions for preparation of modern university text book 'Ecodesign in
Mechanical Engineering'".
7. References
Baum, D. R. & Winget, J. M. (1990). Real Time Radiosity through Parallel Processing and
Hardware Acceleration. Computer Graphics, Vol. 24, No. 2, pp. 67-75.
Budak, V. P.; Makarov, D. N. & Smirnov, P. A. (2006). Přehled a porovnání počítačových
programů pro navrhování osvětlovacích soustav, SVETLO, Vol. 1, No. 1/2006,
pp. 50-54, ISSN 1212-0812, Czech Republic.
Cohen, M. F. & Greenberg, D. P. (1985). The hemi-cube: A radiosity solution for complex
environments. Computer Graphics, Vol. 19, No. 3, pp. 31-40.
Goral, C. M.; Torrance, K. E.; Greenberg, D. P. & Battaile, B. (1984). Modeling the interaction of
light between diffuse surfaces. Computer Graphics, Vol. 18, No. 3, pp. 213-222.
Krupa, M. (2005). Methods of lightening for workplaces, In: Novus Scientia, pp. 215-220,
ISBN 80-8073-354-6, Košice, Slovakia.
Rybár, P. et al. (2001). Denní osvětlení a oslunění budov, ERA group spol. s r.o.,
ISBN 80-86517-33-0, Brno, Czech Republic.
Sillion, F. & Puech, C. (1989). A General Two-Pass Method Integrating Specular and Diffuse
Reflection. Computer Graphics, Vol. 23, No. 3, p. 338, Boston.
Smola, A. (2003). Osvetlenie priemyselných hál, In: AT&P Journal, Vol. 3, No. 3/2003,
ISSN 1336-5010, Bratislava, Slovakia.
Smola, A.; Gašparovský, D. & Krasňan, F. (2005). Navrhovanie vonkajšieho a vnútorného
osvetlenia v nadväznosti na technické normy a právne predpisy, SAP, ISBN 80-89104-71-1,
Bratislava, Slovakia.
Tilinger, Á. & Madár, G. (2008). Spectral Radiosity Rendering Application for Lighting
Researches, Acta Polytechnica Hungarica, Vol. 5, No. 3, pp. 141-145, ISSN 1785-8860.
Part 6
Dental Medical Technologies

10
Combined-Correlated Methods Applied to the
Analysis of Dental Prostheses Materials Quality
Diana Laura Cotoros and Mihaela Ioana Baritz
Transilvania University of Brasov
Romania
1. Introduction
Over time, there has been a multitude of research and solutions concerning dental prosthetics,
which was in its turn subjected to a major revolution with the emergence of a new procedure,
namely oral implantology. Today millions of dental implants are used, oral implantology
providing the possibility of additional pillars that may be inserted wherever needed. Thus, a
wide range of edentations that not long ago could benefit only from mobilizable or mobile
solutions can be approached today with fixed prosthetic restorations. From the point of view
of a simple classification, the prosthetic parts can be attached exclusively to implants (purely
implant-supported) or can be mixed (tooth-and-implant-supported).
Prosthetic works on implants may replace anything from a single tooth to an entire arcade.
They can be made of various materials: metal-acrylate or metal-ceramics. Metal-ceramic
works (with porcelain antagonists) are preferred today for the rigidity of their structure.
Acrylic works present the benefit of shock damping, but they are not resistant enough. The
abutment applied on the implant represents the trans-mucous component, and the implant
package is covered in order to rebuild the aspect of a natural tooth. The hexagonal shape of
the implant's end prevents rotation of the abutment, and the contact surface between the
abutment and the implant is particularly important in the process of occlusion with the
upper arcade. The dentist may reshape some parts of the abutments used in implantology
today, while the edges of the micro-prostheses may be placed subgingivally without
complications following.
Also, on an implant support the so-called prosthesis upon implants can be created, which is
a mobilizable prosthetic device. On a small number of implants, prostheses with special
aggregation systems can be manufactured, systems that confer much higher stiffness and
support to the prosthesis on implants than to the usual prosthesis.
Artificial teeth are usually included in one of the following situations: a single maxillary
mobile prosthesis, with noble alloys, acrylic or diacrylic resins as antagonists, in order to
prevent their accelerated wear; an alveolar ridge; worn or periodontally affected antagonists;
or the case when there is a single dental prosthesis or a metal bridge on the antagonist arcade.
Whatever the position of the teeth in the prosthetic area, artificial teeth are defined by
characteristics related to colour, shape, dimension and occlusal form. For the frontal teeth,
moreover, the order of importance is colour, shape, height and width. From the material
point of view, the artificial teeth used consist of PMMA co-polymerized with cross-linking
(reticular) agents, and usually these are given an increased resistance to cracking by using a
greater amount of cross-linking agent. In the contact zone with the prosthetic base a lower
concentration of cross-linking agent is used than in the incisal and occlusal areas, in order to
facilitate the chemical bond to the polymers of the prosthetic base. To provide the most
physiognomic aspect, artificial teeth use a large range of pigments, and to increase their
resistance the teeth are treated with inorganic micro-particles.
For long-term successful performance of all dental implant types the following general
factors should be considered: biomaterials, biomechanics, dental evaluation, medical
evaluation, surgical requirement, healing processes, prosthodontics, laboratory fabrication,
post insertion maintenance. All practitioners involved in patient care should be
knowledgeable regarding these factors and their interrelationships.
Standards of dental practice would suggest the following general contraindications for the
above three categories of dental implants: debilitating or uncontrolled disease, pregnancy,
lack of adequate training of practitioner, conditions, diseases or treatment that severely
compromise healing, e.g., including radiation therapy, poor patient motivation, psychiatric
disorders that interfere with patient understanding and compliance with necessary
procedures, unrealistic patient expectations, unattainable prosthodontic reconstruction,
inability of patient to manage oral hygiene, patient hypersensitivity to specific components
of the implant.
Teeth used in treatment with telescopic prostheses should generally be covered with
porcelain or noble-metal crowns and require extensive preparation. With the help of
implants, abutments can be created upon which different structures connected to the skeletal
or fixed prosthesis are applied. Nowadays, fixed implant prosthetics is dominated by screw
fixing, but prosthetic works can also be cemented.
In dentistry there were used non-metallic materials to manufacture dental prostheses even
since ancient time. Nowadays, three groups of non-metallic materials are used:
 organic (different plastics);
 inorganic (ceramics);
 composites (organic + inorganic).
It is well known that plastics are non-metallic compounds produced in a synthetic way.
These are generally made of organic parts that can be modeled (in the plastic phase) in
various shapes and then harden creating rigid bodies. As ideal properties of a non-metallic
material used for prostheses manufacturing, we may enumerate the following: they should
have the color and shade of the tissue they are replacing, possess transparency or be
translucent, properties that allow its esthetic reproduction; to avoid the color and
transparency change after manufacturing or within the oral environment; not shrink or
expand, nor to distort during processing or afterwards in the oral environment; to present
elasticity and abrasion resistance; to be waterproof for the fluids in the oral cavity
preventing the occurrence of an unpleasant halitosis or taste disorders; food and other
materials should not adhere to the processed surface, once introduced in the oral cavity,
allowing the same hygiene procedure as the oral dental tissues; to present a small density
and high thermal conductivity; lamination temperature should be much higher than the
temperatures of all liquids and foods introduced in the oral cavity.

Presently, the properties of the polymers used in manufacturing prostheses are enhanced
along three directions: by radio-opacity; by increasing impact resistance and respectively
by increasing rigidity.
Radio-opacity can be obtained by introducing organic components of bromine that
determine the plasticity increase, water absorption and respectively a decrease of material
rigidity. By means of additive phase separation (organic component based on bromine)
during the paste phase, we are able to obtain a transition temperature around 110 C and a
rigidity of 2,0 GPa preserving at the same time the esthetic properties and reaching a high
radio-opacity degree.
Increase of shock resistance can be obtained by homogenization during the paste stage of
two or three different polymers. For increasing the rigidity and shock resistance, we know
from dedicated literature that some types of fibers were experimented (glass, alumina,
carbon, Kevlar) used to reinforce PMMA or Bis-GMA resins.
Of the properties of acrylic thermo-polymerizable resins, the most important are the
following: structure (from the structural point of view, methyl polymethacrylate consists of
linear polymerized macro-molecular chains); porosity (air inclusions of small or larger
dimensions may appear in the resin's structure and can be determined microscopically);
small spherical inclusions inside the PMMA (these may appear as a result of too-fast heating
of the acrylate paste and a temperature increase over 100 °C, so that boiling and monomer
evaporation cause bubbles to form inside the PMMA); small, countless inclusions of various
shapes distributed over the entire thickness of the acrylate (this type of porosity is due to
insufficient compression of the acrylate paste); large inclusions of various shapes distributed
over the entire thickness of the acrylate (caused by a lack of homogenization of the acrylic
paste, uneven distribution of the monomer or too high a variation of the polymer molecular
mass); water absorption (evaluated by the weight increase of an acrylate sample per unit of
resin surface, immersed in water at 37 °C for 24 hours and then well dried); solubility
(evaluated by determining the weight loss per unit of resin surface, immersed in water for
24 hours and well dried); volume variations (during the polymerization process the
following physical phenomena take place successively: thermal expansion, polymerization
contraction and finally thermal contraction); thermal expansion (due to the temperature
difference between the environment and the 60 °C temperature of the water used for
polymerization); polymerization contraction (generated by the methyl polymethacrylate,
which shows a 21% volume decrease during polymerization); thermal contraction (occurs
during the pattern cooling phase and is limited by the adherence of the PMMA to the pattern
margins). The most important of the thermal properties is the thermal expansion coefficient,
evaluated at 81 × 10⁻⁶/deg; the thermal conductivity of PMMA is low, the thermal
conductivity coefficient being 4,5 × 10⁻⁴ cal·cm⁻¹·s⁻¹·deg⁻¹.
As far as the mechanical properties of the acrylic resins are concerned, the most important are
the following: hardness (Knoop hardness is 20 times lower than that of dentine (65) or
enamel (300)); bending resistance (compression resistance is approx. 75 MPa; traction
resistance is approx. 52,5 MPa; abrasion resistance is low, which is a major inconvenience for
these resins).

The best-known chemical properties are: corrosion (PMMA presents a high chemical inertia,
being very stable in the oral cavity; still, an unfavorable evolution in time is possible, so that
the initially translucent resin becomes opaque and yellow, and due to micro-cracks occurring
in time the mechanical resistance is also lowered); biological properties (ceramics consist
of metallic and non-metallic components – oxides, nitrides, silicates).
Introducing ceramics in dentistry, as it is or as lead material on metallic support is due first
to their outstanding esthetic qualities as well as to the fact that it is an inert material, very
well tolerated by tissues. From chemical point of view, ceramics is a complex silicate. The
basic raw materials in its composition are: feldspar, quartz and kaolin. Beside these basic
components, ceramics also contain a large range of ingredients only in pure state, because of
the multiple requirements related to color, resistance, fragility, insolubility, translucence.
2. Performing the mastication process
Mastication is the process of food fragmentation, salivation and food bulk formation, the
bulk being lubricated and prepared for deglutition. The mandible and the upper maxillary
take part in the mastication process by means of the dental arcades, jaws, lips and tongue. By
contracting the
oro-facial musculature the food particles are maintained on the teeth occlusal surfaces while
the tongue separates the large particles from the small ones, brings the large ones back to the
grinding areas and creates the food bulk. Salivary mucin is the binder that helps shaping the
food bulk and is the necessary lubricant both in the mastication process and the deglutition
one. [1]
The active factor of mastication is the mandible, driven by the mastication muscles through
complex motions. The complexity of the mastication motions is explained by the anatomical
shape of the temporo-mandibular joint and the various possibilities of combined action of
the masticating muscles.
During mastication, significant forces are developed (of the order of 15-20 kgf), representing
a real danger for the soft tissues and also for the hard tissues participating in this process.[1]
Maintaining the integrity of the tissues involved in mastication is accomplished by
structural and mechanical factors and by a very accurate coordination of the mastication
motions assured through the nervous system, based upon considerable sensorial
information.
The change of any of the morphological or functional components perturbs the mastication
process at a certain extent, according to the importance of the affected component. [1].
The main motions of the mandible (opening-closing, anterior-posterior, lateral right-left),
related to the three planes in space are harmoniously integrated in the mastication function,
according to an individual pattern characteristic for each individual.
Due to the fact that mastication is one of the functions that are difficult to investigate, it is
necessary to use adequate research methods.
According to some researchers (Gillings, 1967), one of the ways of recording the mandible
kinematics uses electronic transducers, based on photo-sensitive elements (photocells)
sensitized by one or more bright spots connected to the inferior incisors. The results appear

synchronously with the recording, as time functions, on the amplifying-recording device
paper.[1]
The device and the conceived recording method, performed and applied by Prepeliceanu
and his collaborators (1970 – 1971) allows the simultaneous recording of the three directions
of mandible motions performed during the mastication process (opening-closing, anterior-
posterior, lateral right-left), aspects observed in fig.1.[1]

Fig. 1. Schematics of mandible motions recording [1]
From the extensive research performed by a team of specialists it was established that the
motions performed by the mandible during the mastication process are integrated in the
mastication cycle, developed on a vertical-oblique trajectory, with a transversal lateral
component that is also oblique, accompanied by gliding at the occlusion level. The existence
of some friction motions in the occlusion process, performed especially in the final phase of
the mastication process, was also found. Additionally, we observed lateral gliding motions
as well as combinations of lateral and thrust motions, accompanied by friction between the
cuspidian slopes, due to the functional requirements of shearing and grinding food,
especially the most consistent and fibrous types. Though these motions have low amplitudes,
of the order of millimeters, they differ from one individual to another, due to the structure of
the mastication system of the analyzed subject.
Another series of researchers assessed the mastication system as a system with a very
complex neuro-muscular activity, based upon conditioned reflexes, and considered that the
development of this action cannot be regarded as a chain of reflexes independent of the
influence of the occlusal guide. This is confirmed also by the fact that during most of the
mastication process direct dental contacts take place, and in this way the influence of the
occlusal guide in guiding the mandible mastication motions cannot be ignored.
In the mastication process, besides the mechanical action of the dental arcades, an important
part is played by the saliva. Saliva is the secretion product of three pairs of large glands
located within the thickness of the oral cavity walls – the parotid, submaxillary and
sublingual glands – and of numerous small glands situated in the mucosa covering this
cavity. It is classically accepted that saliva consists of 99,4% water and 0,6% solid substances,
of which 0,2% is inorganic and 0,4% organic. The composition of saliva varies over a very
large range, according to the glands and their output, from one subject to another or even
for the same subject at different moments in time.
3. Biocompatibility issues of implants and dental prosthesis
Biomaterials class is different from the other classes of materials due to the biocompatibility
criterion, which is defined as the biomaterials property that after their implantation in a
living organism, they do not trigger adverse reactions and are accepted by the surrounding
tissues. So, the biomaterial should not present toxicity or should not produce inflammatory
reactions when introduced in the human body as an implant. According to a more general
and officially approved definition (Williams, 1987), a material with an optimal
biocompatibility is the one that do not determine any tissue adverse reaction. Also, the
implanted material is expected to withstand any physiological strain without showing any
substantial dimensional change, shape alteration or any other catastrophic event. The
implants should resist to any degradation or corrosive attack of the physiological or
nutritional fluids. Their constituent materials must be resistant to oppose any force applied
to them during their designed life cycle. Biocompatibility of an implant depends upon
several factors like: patients’ general health state, age, tissue permeability, immunologic
factors and implant characteristics (material roughness and porosity, chemical reactions,
corrosion properties, toxicity).
Of great importance for human tissues is the development of electrochemical corrosion
processes in blood serum at a temperature of 37 °C. When a material is introduced inside the
body, two aspects should be considered. One is the influence
of the physiological environment that may change the material nature and properties. The
other is the effect of the implant material and of each of its degradation products upon the
physiological fluids and tissues. We must highlight the fact that the chemical action of the
physiological fluids does not involve just some chemical reactions of ionic exchange or
oxidation-reduction reactions with the consisting molecules of a given biomaterial, but
above all these the interaction of an impressive number of food substances, still unknown,
that operate at the level of complex substances and are able to selectively attract specific
ions, creating a physical-chemical unbalance state inside the material. Thus the material may
sustain various chemical or physical degradations.
In order to determine the biocompatibility of the materials used in dental prosthetics and
implantology, a questionnaire was conceived and filled in by a sample of human subjects
with prosthetic works made of the same type of acrylic material. Based on the
questionnaire's answers and using a module of software developed on fuzzy logic, we
carried out the analysis of the materials used in manufacturing dental prosthetic works. The
result of the analysis is a graphic presenting, in percentages, the biocompatibility of the
analyzed dental material for the selected sample of subjects.
The first stage of the biocompatibility analysis by fuzzy logic consisted in introducing the
initial data in a main window (Fig. 2), considering the two most important causes leading to
material incompatibility: the nourishment and the health of the studied human subjects. The
graphics were made on a percentage scale of 1 to 100.


Fig. 2.
To each of the two variables may correspond from 2 to n concepts. Thus, for the
“nourishment” variable we considered as valid the following concepts: “soft food”, “hard
food”, “acid food” and “sweets”. For the second variable “health” we established the
concepts: “bad”, “average” and “good”. The graphics corresponding to the two variables are
shown in fig.3 and fig.4.

Fig. 3.

Fig. 4.

Fuzzy analysis continued with the second stage, introducing final data, where we
established a single variable as being biocompatibility (“bio”) with the following concepts:
“null”, “partial” and “total”. Fuzzy type analysis assumes the compilation of initial data
and of the final ones based upon the definition of certain rules that are presented in a
separate window. Prior to starting the fuzzy analysis process we checked on software
basis all the introduced data and rules to avoid the errors during analysis.

Fig. 5.
The last stage, concerning the analysis results, was performed using a software simulator
that calculated, based upon the numerical values and the established rules, the percentage
biocompatibility of the studied material. The obtained results reveal that the biocompatibility
level remains at low values due to the health state and nourishment style of the investigated
sample of subjects.
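Since the chapter does not list the actual membership functions or rules used in the software, the following sketch only illustrates, with hypothetical triangular sets and rules, how such a fuzzy evaluation of biocompatibility from "health" and "nourishment" inputs could be organized:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def biocompatibility(health, aggressiveness):
    """Hypothetical sketch of the fuzzy evaluation described in the text.
    Inputs are on a 0-100 scale; output is a biocompatibility percentage."""
    # input concepts (hypothetical membership functions)
    h_bad, h_avg, h_good = tri(health, 0, 0, 50), tri(health, 25, 50, 75), tri(health, 50, 100, 100)
    n_mild, n_aggr = tri(aggressiveness, 0, 0, 60), tri(aggressiveness, 40, 100, 100)
    # hypothetical rules: rule strength = min of the antecedents
    null_b    = min(h_bad, n_aggr)
    partial_b = max(min(h_avg, n_aggr), min(h_bad, n_mild))
    total_b   = max(min(h_good, n_mild), min(h_avg, n_mild))
    # defuzzify by a weighted average of the output set centres (0, 50, 100 %)
    weights = null_b + partial_b + total_b
    if weights == 0:
        return 0.0
    return (0 * null_b + 50 * partial_b + 100 * total_b) / weights

print(biocompatibility(health=70, aggressiveness=30))
```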
4. Polymerization process of restoration material samples – Microscopic
analyses
The experimental setup used for the microscopic analysis of the polymerized dental
material samples consists of a Keyence VHX-600 digital microscope, with objective
magnification between 500x and 5000x, an object field of 0,25 mm² and software suitable for
assessment studies and measurements of surface quality, roughness and 3D representations.
The samples used were manufactured under the same conditions and assessed according to
the same procedures.

Fig. 6. Keyence VHX-600 digital microscope (first two pictures) and mechanical testing
system for dental samples

Most of the restoration materials should withstand forces during manufacturing or
mastication, so the mechanical properties are important in understanding and predicting the
material behavior under load. Because a single mechanical property cannot represent a
quality measure, the application of the involved principles in a range of mechanical
properties is essential, especially considering the human factor implication.
One of the most important applications in dentistry is the study of the forces applied to teeth
and dental restorations. The maximum forces recorded by strain gauges and telemetry
devices reach 250 to 3500 N. The forces developed in the dental occlusion of an adult subject
decrease from the molar area towards the incisors, reaching values of 400 to 800 N on the
first and second molars.
Of the same importance for the study of the forces developed in natural tooth occlusion is the
determination of stresses and strains in restoration-type works, such as insertions, fixed
connections, and partial and total prostheses. One of the first investigations of the occlusion
forces showed that the average biting force in patients with a replacement of the first molar is
about 250 N on the restored side and 300 N on the opposite side, in comparison with the
average biting forces for permanent teeth, which reach 665 N for molars and 220 N for
incisors. In a different study, the forces measured for patients with partial prostheses were
from 67 to 235 N. Generally, the biting force in women is about 90 N smaller than that
applied by a man.
These studies indicate that the mastication force on the first molar with a fixed connection is
approx. 40% of the force exerted by the patients with natural teeth.
Recent measurements performed with the help of strain gauge devices are much more
accurate than those performed with earlier equipment, but in general the conclusions are the
same. These measurements concluded that the distribution of forces between the first
premolar, the second premolar and the first molar in a complete dentition can be established
as approx. 15%, 30% and 55% of the normal force.
From the point of view of the polymerization process, an important aspect is represented by
the polymerization time, which is a parameter affecting the mechanical characteristics of the
prosthesis teeth, dental restorations or implants. Polymerization time for the composite
diacrylic polymerizable resins cannot be measured based on viscosity changes. Approximately
75% of the process takes place in the first 10 minutes, the reaction continues slowly for 24h.
The sub-polymerized layer at the surface has an internal conversion ratio of approx. 25%.
By comparing some materials used for the construction of artificial teeth, we may notice that
dental acrylate (having the following characteristics: compressive strength of 84 MPa, elastic
modulus of 1700 MPa and elasticity limit of 55 MPa) is used in dental technique offices in
80% of situations, unlike the ceramic materials, which are present in only 20% of situations.
The Duropont composite material (having the characteristics: compressive strength of
90 MPa, elastic modulus of 1600 MPa and elasticity limit of 45 MPa) presents a hardness
highly superior to the presently used acrylates. Unlike these, the Duropont composite
polymerizes under an external pressure of 6 atm and, even if it does not reach the cromasit
hardness, its favorable price makes it the most used material for dental prosthetic works.
During the performed tests we manufactured some working samples with the same size and
volume.

The first working samples were made of TE-ECONOM material and were polymerized for
various time intervals (5 min, 6 min, 7 min and 9 min respectively), monitoring the
photo-polymerization process in order to avoid other environmental influences.

Fig. 7a. Sample 1 (TE-ECONOM) structure, photo-polymerization time of 5 min (500x digital
microscope)

Fig. 7b. Roughness profile variation in the area marked for sample 1


Fig. 8a. Sample 2 (TE-ECONOM) structure, photo-polymerization time of 6 min (500x digital
microscope)

Fig. 8b. Roughness profile variation in the area marked for sample 2 (TE-ECONOM)

Fig. 9a. Sample 3 (TE-ECONOM) structure, photo-polymerization time of 7 min (500x
digital microscope)



Fig. 9b. Sample 3 (TE-ECONOM) structure, photo-polymerization time of 7 min (500x
digital microscope) and analyzed by MountainMap software




Fig. 10. Roughness profile variation in the area marked for sample 3 (TE-ECONOM)




Fig. 11a. Sample 4 (TE-ECONOM) structure, photo-polymerization time of 9 min (thickness
4mm) (500x digital microscope)


Fig. 11b. Sample 4 structure, photo-polymerization time of 9 min (thickness 4mm) (500x
digital microscope) and analyzed by MountainMap software


Fig. 11c. Roughness profile variation in the area marked for sample 4 (TE-ECONOM).
For the second set of samples the material used was VALUX-PLUS, and the chosen time
intervals were the same – 5, 6, 7 and 9 minutes.


Fig. 12a. Sample 1 VALUX-PLUS structure, photo-polymerization time of 5 min (thickness
4mm) (500x digital microscope)


Fig. 12b. Sample 1 VALUX-PLUS structure– photo-polymerization time of 5 min (500x
digital microscope) and analyzed by MountainMap software


Fig. 12c. Roughness profile variation in the area marked for sample 1 VALUX-PLUS


Fig. 13a. Sample 2 structure VALUX PLUS- photo-polymerization time of 6 min (500x digital
microscope)


Fig. 13b. Sample 2 structure VALUX PLUS- photo-polymerization time of 6 min (500x digital
microscope) and analyzed by MountainMap software


Fig. 13c. Roughness profile variation in the area marked for sample 2 VALUX PLUS


Fig. 14a. Sample 4 structure VALUX PLUS – photo-polymerization time of 9 min (thickness
4mm)


Fig. 14b. Sample 4 structure VALUX PLUS – photo-polymerization time of 9 min (thickness
4mm) and analyzed by MountainMap software

Fig. 14c. Roughness profile variation in the area marked for sample 4 VALUX PLUS
The third set of samples was made of CONCISE 3M self-photo-polymerizing composite,
which was subjected to the same photo-polymerization procedure (5, 6, 7 and 9 minutes).

Fig. 15a. Sample 1 structure CONCISE 3M – photo-polymerization time 5 minutes


Fig. 15b. Sample 1 structure CONCISE 3M – photo-polymerization time 5 minute and
analyzed by MountainMap software

Fig. 15c. Roughness profile variation in the area marked for sample CONCISE 3M
From the performed measurements presented above we may observe the following:
 According to the materials polymerization degree we notice changes in their aspect
depending on the photo-polymerization time interval;
 For Valux plus material we observed an incomplete polymerization due to the white
spots upon the material surface, while for all the TE-ECONOM samples, the photo-
polymerization was uniform, there were no white spots on the material surface;
 The two materials surfaces are very different, as for valux plus the surface does not
appear entirely homogeneous, while for TE-ECONOM, the surface is much more
homogeneous and uniform;

 The tested materials withstand very well the applied forces considering that: these
analyzed materials resisted up to a force of 2300 N, equivalent to a stress of 117 MPa, for
Valux Plus, and 2500 N, equivalent to a stress of 127 MPa, for TE-ECONOM (a short
force-to-stress conversion sketch follows this list);
 Tests also performed on Duropont composite materials showed that they are able to
withstand, according to the load type (centric or eccentric), forces of 1600 N, equivalent
to a stress of 48,9 MPa, for centric compression, and 1000 N, equivalent to a stress of
82,77 MPa, for eccentric compression. All these results are determined considering that
the bite force of a human being may reach the maximum value of 270 N;
 We also noticed, based on the surface profile analysis, that the photo-polymerization
process yielding the best surface quality must take place over a 6 min time interval for
the TE-ECONOM material and over 9 minutes for the Valux Plus material. As far as
CONCISE 3M is concerned, regardless of the photo-polymerization time, the surface
presents an extremely variable profile, which requires prior processing (Fig. 16).
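The force-to-stress figures quoted above imply a loaded cross-section of roughly 19-20 mm², which is not stated explicitly in the text; the short sketch below only back-checks the reported values under that assumption (1 MPa = 1 N/mm²):

```python
def stress_mpa(force_n, area_mm2):
    """Convert an applied force [N] to an average normal stress [MPa],
    since 1 MPa = 1 N/mm^2."""
    return force_n / area_mm2

# Assumed cross-section of about 19.7 mm^2 (an inference, not given in the text):
area = 19.7
print(stress_mpa(2300, area))   # ~116.8 MPa -> reported 117 MPa (Valux Plus)
print(stress_mpa(2500, area))   # ~126.9 MPa -> reported 127 MPa (TE-ECONOM)
```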





Fig. 16. Diagrams of roughness values

5. Microscopic analysis of the effects of edible substances on the structure,
quality and aspect of dental prosthetic element surfaces
In order to test the behavior of prosthetic elements in aggressive environments, the prosthesis
teeth presented in Fig. 17 were thoroughly cleaned and placed in washed and dried
containers without any trace of impurities. The glass containers, each carefully labeled, were
filled with the following substances, shown in Fig. 18:
- water and sugar, concentration 50%; water and salt, concentration 50%; coke at room
temperature; cold instant coffee, concentration 1:1; vinegar; oil; alcohol (concentration
45%); grapefruit juice; orange juice; hot tea.


Fig. 17. Prosthesis teeth to be experimentally analyzed



Fig. 18. Recipients with substances used to test prosthesis teeth
As preponderant substances in human nourishment we established a number of 10 edibles
that affect more or less the biocompatibility of restoration materials used in dental
technique. These are:

- Drinkable water with sugar, concentration 50%. Water is a colorless, transparent, odorless
and relatively tasteless liquid, having an average content of mineral substances (calcium
carbonates, magnesium, sulfate salts). Sugar is some kind of carbohydrate mostly used
being sucrose, a crystalline white solid. It is used to sweeten or improve taste of
beverages or foods.
- Salted water, concentration 50 %. Kitchen salt is a solid, ionic, crystalline substance that
contributes to the increase of intracellular osmotic pressure and blood pressure due to
sodium ions and represents a basic preservative and spice in nourishment.
- Coke is a soft drink made with decocainized coca leaves. The name comes from two of the
ingredients: coca leaves and kola nuts. The distinctive "cola" flavor comes mainly
from the mixture of sugar, orange oil, lemon oil and vanillin, the rest of the ingredients
having only minor contributions.
- Instant soluble coffee, highly concentrated. Instant soluble coffee is a black colored beverage
containing caffeine, obtained of roasted coffee beans, ground and chemically processed
containing PP vitamin (nicotinic acid or niacin). Coffee beans are the fruits of some
plants from Rubiaceae family with two important varieties like Coffea arabica and Coffea
canephora, first having superior quality beans. Coffee quality is also influenced by the
place of cultivation, storage and the way the coffee beans are roasted.
- Vinegar (acetic acid) is an organic chemical compound that appears as a colorless liquid
with a characteristic pungent odor that can be mixed in any proportion with water.
Melting and boiling temperatures are 16,7 °C and respectively 118,2 °C. It is processed
by acetic fermentation of alcohol diluted solutions, dried distillation of wood or
oxidizing acetic aldehyde. Vinegar contains acetic acid in a 3–9% concentration.
- Oil is a fat liquid of vegetal, animal, mineral or synthetic origin, insoluble in water and
lighter than water, used in nourishment and also industry, etc.
- Tzuica (concentration 45 %) is a Romanian traditional beverage obtained by plums
fermentation and distillation.
- Grapefruit juice. Grapefruit is a citric fruit, big, round, yellow or rosy colored (Citrus
paradisi), with juicy and bitter pulp, appreciated for the enzymes rich content
stimulating digestion; it is obtained by pomelo and various types of oranges hybrids.
- Orange juice. Orange is a citric, round fruit, orange colored, with juicy and sweet-sour
taste, appreciated for the rich content in active substances (hesperidins, pectin), acids
(ascorbic acid, citric), alkaloids (betadine), sugars (fructose, galactose), vitamins (B2, B1,
B6 and C), minerals (iron, calcium, magnesium, phosphorus, potassium, sodium, zinc.
- Green tea with tangerines extract. This is a type of tea obtained from Camelia sinensis leaves.
Due to the rich content in theine and caffeine it is an excellent antioxidant, diuretic,
cerebral stimulator, stimulator of fat burning process and anticancer factor protection.
Mandarin extract is rich in A and C vitamins, pectin, beta carotene and esters.
The teeth were kept in the substances considered as aggressive environments for 7 days, at
constant temperature and without contact with solar rays; they were then extracted and
microscopically examined.
Images of the prosthesis teeth were recorded before the experiments, writing down the day
and hour when they were introduced into the aggressive environment, and pictures were
also taken after the experiments (Fig. 19). The study was then continued on the digital
microscope in order to draw conclusions concerning the experimental results.


Fig. 19. Prosthetic elements after experiment, isolated and labeled.
In order to establish a methodology for analyzing the behavior of prostheses made of acrylic
material, with respect to their use by the human factor and to some surface tests, we selected
a Keyence VHX-600 digital microscope to visualize the structural changes at the level of the
active or support surfaces, and a universal fatigue-testing machine to determine the eccentric
compression force.
The images acquired with the digital microscope video camera were stored in a database in
order to be processed using specialized software (Adobe Photoshop), so as to observe as
many characteristics of the analyzed prosthesis surfaces as possible.
These characteristics refer to the quality of the material surfaces, dimensions or color, in
order to emphasize possible deformations, deposits or excavations, the existence of scratches,
and the contact at the combined metal-acrylate or porcelain surfaces.
The acquisition methodology of the recordings consists of the following stages:
- we set the prosthesis teeth after the experiments on the microscope plate and captured a
wide range of images to analyze, this way creating the microscopic images database;
- we analyzed then the prosthesis surfaces by help of a software dedicated to the digital
microscope and processed the images to obtain other characteristics.
The stage of image acquisition and processing consisted of capturing images step by step
(using fine depth) of the analyzed surface, reconstruction of their composition and saving the
resulted 2D image. For each sample we captured 2-3 2D images (on various areas) and 2
images in 3D according to the analyzed surface.
The first analyzed sample was the one introduced in the mixture of green tea with tangerine
extract. Due to the fact that green tea has a high content of theine, caffeine and vitamin C, we
notice slight traces of corrosion on the analyzed surface.
Corrosion occurs as a chemical reaction between the dental material and the aggressive
environment. Analyzing the surface we can see that, as a result of the corrosion, the material
lost its shine in the affected areas and changes in color were observed. The analysis was
performed on 3 zones of the sample surface, as shown in Figs. 20-25.


Fig. 20. Zone I in depth Fig. 21. Zone I – 3D

Fig. 22. Zone II in depth Fig. 23. Zone II – 3D

Fig. 24. Zone III in depth Fig. 25. Zone III – 3D
In all three analyzed areas upon the tooth surface we notice, due to the pigment in the
tangerines extract, a series of deposits (stains) with reddish aspect. This is due to the
adherence of the aggressive liquid upon the tooth surface.
The sample kept in orange juice presents on most of the analyzed surface several deposits
with oily character given by pectin and esters quantity (essential oils) which are components
of the orange extract. The liquid adhered to the tooth surface creating locally a sticky film. In
fig. 26 and 27 we may observe the aspect of the sample surface.


Fig. 26. Fig. 27.
For the sample kept in soluble coffee the effects are clearly visible. Due to the high
concentration of the soluble coffee (1 teaspoonful to 1 teaspoonful of water) and its strongly
acidic character, we notice in Figs. 28 to 32 that the aggressive liquid adhered to the tooth
surface, leaving coffee traces in the form of granules. This happened due to van der Waals
forces and hydrogen bonds between the tooth surface and the aggressive environment. We
can also notice changes in the color of the dental material on the surfaces where the coffee
adhered.

Fig. 28. Zone I in depth Fig. 29. Zone I 3D
Figure 30 shows the soluble coffee granule, intact, adhering on the surface layer of the
dental material.
Fig. 30. Zone I 3D - light


Fig. 31. Zone II in depth Fig. 32. Zone II 3D -light
Unlike the previous sample, where the effects of the instant coffee are clearly visible as deposits on the material surface, the sample kept in coke presents some corrosion traces on the dental surface, causing a change in the surface structure through loss of shine. We captured images from 2 areas of the analyzed surface, which were analyzed in depth as shown in Fig. 33 - Fig. 36.

Fig. 33. Zone I in depth Fig. 34. Zone I 3D

Fig. 35. Zone II in depth Fig. 36. Zone II 3D
For the sample kept in acetic acid (vinegar) the corrosion effects are clearly visible. Thus, in the first captured images, Fig. 37 and 38, we may notice that material adhered to the tooth surface, creating local deposits. In Fig. 39 and 40, 2D and 3D images captured in depth, we

found local corrosion in the plane of the dental material surface, due to the acid character of the vinegar.





Fig. 37. Zone I normal


Fig. 38. Zone II normal
The in-depth analysis also reveals local stains and loss of shine.





Fig. 39. Zone III in depth


Fig. 40. Zone III 3D
Unlike the previous samples, where the aggressive liquids created stains or erosions of the studied material, for the sample kept in salted water (NaCl, 50%) we could notice deposits in the shape of parallelepiped crystals. Following the image analysis, we could see the NaCl granules that crystallized on contact with air and adhered, due to van der Waals forces, to the surface of the studied dental material. Van der Waals forces act between all sufficiently close molecules with stable electronic shells, without sharing or transferring electrons between these particles. The behavior of the dental material in the salty solution of 50% concentration is shown in Fig. 41 - Fig. 44.


Fig. 41. Zone I in depth Fig. 42. Zone I 3D

Fig. 43. Zone II in depth Fig. 44. Zone II 3D
In the case of the samples kept in water with sugar (50% concentration), we notice that on contact with air the sugar crystallized in the shape of white prismatic granules, creating white deposits on the tooth surface. These deposits emerged due to a certain component of refined sugar: the additive E220 (sulfur dioxide). We also captured 4 images (2D and 3D) from two areas of the surface, presented in Fig. 45 - Fig. 48.



Fig. 45. Zone I in depth Fig. 46. Zone I 3D


Fig. 47. Zone II in depth Fig. 48. Zone II 3D
For the samples kept in tzuica (a plum brandy) of 45% concentration, we captured three images (two in 2D and one in 3D). Following the image analysis, we may notice some deposits on the tooth surface and local chromatic changes due to the alcohol. This is shown in Fig. 49 - Fig. 51, presenting the behavior of the dental material subjected to the action of the alcoholic aggressive environment.

Fig. 49. Zone I normal Fig. 50. Zone II in depth
Fig. 51. Zone II 3D
Again, following the image analysis, we found that the sample kept in sunflower oil resisted best the action of the aggressive environment. Thus, the oil does not have damaging

effects on the material used in prosthetics; it does, however, create some oily deposits on the tooth surface, due to esters. Fig. 52 and 53 best reveal this aspect.




Fig. 52. Zone I in depth

Fig. 53. Zone I 3D
6. Conclusion
Analyzing the benefits of composite materials based on resins, used as dental materials, we may note the following:
- they do not contain Hg;
- due to suitable edge adjustment and a volume that is constant in time, they do not allow deposits in the contact area between the two materials (root and tooth);
- they are biocompatible with the human organism;
- very hard materials with high mechanical resistance are obtained, with a life cycle of at least 20 years;
- since the hardness of these materials is below that of dental enamel, they do not scratch the antagonist teeth during mastication;
- the hardening reaction of these materials used in the dental office for root canals takes place in a few minutes, which is very comfortable for the patient;
- the reticulation reaction of the polymerizable materials may take place without any chemical reaction with a reticular agent, simply by exposure to a UV radiation lamp, so there is no toxicity for humans.
Among the disadvantages of using composite materials as dental materials we may list the following:
- when the hardening agent is not entirely consumed in the polymerization reaction, it may become toxic to the human body, triggering local inflammation;
- composite materials may sustain mechanical damage due to forces occurring during mastication or due to significant temperature changes;
- if used in visible areas, they may present the fluorescence phenomenon under certain types of light radiation.
7. Acknowledgment
This research is part of Grant PNII-IDEI 744 with CNCSIS Romania, and the investigations were carried out with equipment from the Research Project "CAPACITATI" in the Mechatronics Research Department of Transilvania University of Brasov, Romania.

8. References
Albrektsson, T. & Wennerberg, A. (2004), Oral implant surfaces: part 1—review focusing on
topographic and chemical properties of different surfaces and in vivo responses to them. Int.
J. Prosthodont. 17, 536–543.
Anders Palmquist, Omar M. Omar, Marco Esposito, Jukka Lausmaa and Peter Thomsen
(2010) Titanium oral implants: surface characteristics, interface biology and clinical
outcome, J. R. Soc. Interface 2010 7, S515-S527 doi: 10.1098/rsif.2010.0118.focus;
Baritz M., Cotoros D., Cristea L.,(2010) Analysis of dental implants behavior in mobilizing
prosthesis, The 12th WSEAS International Conference on MATHEMATICAL and
COMPUTATIONAL METHODS in SCIENCE and ENGINEERING (MACMESE
'10), Faro, Portugal, 2010;
Baritz M., Cotoros D., Moraru O., (2007) Virtual and Augmented Reality Used to Simulate the Mechanical Device, Annals of DAAAM & Proceedings of the 18th International DAAAM Symposium, ISBN 3-901509-58-5;
Bratu D., s.a. (1994) Materiale dentare-Materiale utilizate în cabinetul de stomatologie; Editura
Helicon;
Cotoros D. (2010) Analyses by image processing of surface quality of mobile skeletal dental
prosthesis, International Conference on CNC Technologies, Bucharest, Romania,
May 05-07, 2010
Cotoros, DL. et al. (2009) Aspects concerning impact tests on composites for rigid implants,
WORLD CONGRESS ON ENGINEERING, London England, Pages: 1658-1661
Grosu L., s.a. (1983) Biosistemul orofacial, Cluj- Napoca, Ed.Dacia,.
http://www.digitalsurf.fr/en/index.html, accessed Oct. 2010
Ieremia L., Dociu I., (1987) Functia si disfunctia ocluzala, Editura Medicala, Bucuresti,
Romania,;
Lussi A., (2006) Dental Erosion From Diagnosis to Therapy, Copyright 2006 by S. Karger AG
www.karger.com, ISSN 0077–0892 ; ISBN 3–8055–8097–5;
M Navarro, A Michiardi, O Castaño and J.A Planell, (2008), Biomaterials in orthopaedics, J. R.
Soc. Interface 2008 5, 1137-1158, doi: 10.1098/rsif.2008.0151;
Rajeswari Ravichandran, Subramanian Sundarrajan, Jayarama Reddy Venugopal, Shayanti
Mukherjee and Seeram Ramakrishna, Applications of conducting polymers and their
issues in biomedical engineering J. R. Soc. Interface 2010 7, S559-S579 first published
online 7 July 2010; doi: 10.1098/rsif.2010.0120.focus;
Regenio M, et al. (2009) Stress distribution of an internal connection implant prostheses set,
Stomatologija, Baltic Dental and Maxillofacial Journal, 11, 2009,
Regenio Mahfuz Herbstrith Segundo, Hugo Mitsuo Silva Oshima, Isaac Newton Lima da
Silva, Luis Henrique Burnnet Junior, Eduardo Goncalves Mota, Liangrid Lutiani
Silva, (2009), Stress distribution of an internal connection implant prostheses set: A 3D
finite element analysis, Stomatologija, Baltic Dental and Maxillofacial Journal, 2009;
11 (2): 55-59;
Rogozea L. et al. (2009), Ethical Aspects in Bioengineering Research, WSEAS Conference on
Instrumentation, Measurement Circuits and Systems, China.

Stanciu A., Cotoros D., Baritz M., Florescu M. (2008), Simulation of Mechanical Properties for
Fibre Reinforced Composite Materials, Theoretical and experimental aspects of
continuum mechanics, WSEAS Cambridge;
Part 7
Smart Homes

11
Smart Homes as Service Platforms for
New Healthcare and Energy Services
Mikko Pynnönen and Mika Immonen
Lappeenranta University of Technology
Finland
1. Introduction
Industry transformation and convergence create new possibilities, business opportunities and even new industries. Many factors can be identified as reasons for transformations in industry branches at the international level. The change drivers include, for example, the fast growth and development of international trade, the participation of very different countries with various cost levels in international exchange and trade, the quick evolution of international logistics, tremendous changes in information exchange and transmission, and the fragmentation of value chains into value networks. Particularly in small countries, clusters have fragmented and even their parts have been unbundled into pieces located in different countries as part of globalization.
At the national level, fragmentation and unbundling are striking features of these industry transformations. While each company or network at the international level seeks the most favourable structure or position compared to the actors or networks of other countries, national actors or networks seek, besides competitive advantage, also an efficient cost structure compared to competitors via network structures. When considering value networks, attention is often paid only to material and service flows. However, the functioning of value networks also requires capabilities, rules of the game and procedures of action from the different parties of the network, as well as economic viability from the point of view of each partner in the network.
In this research, elderly care, health care, electricity distribution and the intelligent home concept in particular are discussed. At first glance, these form a very different and heterogeneous group of activities. The common factors in these fields are the networks, their build-up and their management. We use the smart home as a combining platform that integrates these networks together.
The concept of the smart home has been analysed in the literature mostly from a technology perspective. The aim of this study was to analyse the smart home concept from a services perspective, as a platform for service integration. The research problem in our study was how services integrate through this kind of service platform. We use Service-Dominant Logic (see e.g. Lusch & Vargo, 2006; Vargo et al., 2008) as the theoretical framework for the study. The research process follows the future-oriented business mapping process (see e.g. Immonen et al., 2010; Pynnönen & Kytölä, 2008), where first the

plausible future business scenarios are formed. Second, the service elements and service models are analysed. Third, these elements and models are combined into service systems by opening up the actors, their relationships and their business models. The main implication is that the regulator should guide technology development to be refocused from the development of specific technologies to integrated platforms, which support the diffusion of both home systems and related service businesses.
We have structured the chapter as follows. First, we review the recent discussion of service-dominant logic, which we use as the theoretical framework for this study. Second, we introduce the emerging smart home business from the service platform point of view; in this section we also introduce the two service models and highlight recent developments in these businesses in the Nordic market and especially Finland. Third, we open up the research findings of the case service models and discuss their integration into the service platform of the smart home. Finally, we discuss the conclusions and implications of this study.
2. Service dominant logic
The core arguments of S-D logic are constituted by several propositions: (1) service is a fundamental basis of exchange, (2) products are distribution mechanisms for service provision, (3) value is delivered through co-creation between the firm, the customer and networks, and (4) intangible capabilities, skills and knowledge are the primary source of competitive advantage (Vargo et al., 2008). Service in this context is understood as a process of doing something for another party in collaboration, by integrating internal capabilities with external ones to co-create value (Vargo & Lusch, 2008).
2.1 Service systems
Focusing attention on service processes unavoidably impacts the competitive basis of a firm. In order to create value in this economy of service systems, firms have to understand the new logic of creating value. In service systems, value creation is more complex than in a product-based economy; this is called the systemic nature of customer value. The systemic nature of customer value means that the value delivered to the customer depends on several different but intertwined service and product functions, and is most likely created by a network of firms (Pynnönen et al., 2011). These systemic functions are often technology platforms that connect separate services together, e.g. internet application stores, smart phones or smart home systems. One of the key arguments of S-D logic is also that physical products act as distribution mechanisms for services (Lusch & Vargo, 2006; Vargo et al., 2008). The role of systemic functions is important, as they are the key to boosting the value of the service system.
Competing through service is much more than including value-added features in products; rather, in this view the competition shows in the customer's willingness to pay for the integrative capabilities of the firm (Lusch et al., 2007).
A service system can be divided into two parts: (1) the service infrastructure and (2) customer
service operations (i.e. the implementation of a service process) (Fließ & Kleinaltenkamp,
2004). The smart home concept we use in this chapter, and the services integrated into this
platform, are good examples of this kind of service system. The infrastructure determines the

firm’s capability to manage operations for required outcomes. The service process and the
supporting and processing resources constitute the service business model, which integrates
external resources into a complete service product (see Figure 1). During service operations,
the customer contributes to production by offering information, rights and physical objects.
Processing and supporting resources are built on the firm’s internal resources and the external
value network (suppliers) of the company (Fließ & Kleinaltenkamp, 2004). The service process
itself is an intangible entity that comprises technology, know-how and intellectual property, and aims at the integration of resources (Tadelis, 2007).

Fig. 1. Service production model (adapted from Fließ and Kleinaltenkamp, 2004; Tadelis, 2007)
The service production model merges activities which may be operated by external actors. Our argument is that designing service models always means searching for appropriate value networks at the same time. S-D logic expects that some prime service integrators, which have the power to steer offerings, are included in the service provision networks. The literature suggests that such integrators should avoid high rates of investment in manufacturing processes in order to retain responsiveness, and that successful actors should have direct links to the marketplace and customers (Lusch et al., 2007). Overall, it is probable that retailers become the pivotal link in the value network, which makes them potential prime integrators in service provision.
2.2 Structure of the public service provision
In the public sector, it is important to consider that the roles of the buyer, client and supplier need to be clearly differentiated. Local authorities have to identify the characteristics of the provided services and to match them with the needs of citizens, who pay for the services directly or through taxation. The key point of action is translating the specific needs into technical specifications to be included in contracts (Ancarani, 2009). Therefore, the development of service provision is a complex, interconnected multi-stakeholder system in which service providers, authorities and clients communicate with each other. The system is illustrated at a general level in Figure 2.


Fig. 2. Roles and interactions of actors in public service provision, adapted from (Ancarani, 2009; Walker et al., 2006; Aschhoff & Sofka, 2009; Edler & Georghiou, 2007)
The two most important elements of the model are the interactions between the end-user and the authority, and between the authority and service providers. Regulation projects the needs of end-users (e.g. consumers), creating signals for monopolies to develop product and service offerings toward society's expectations, which may change the premises of operations. In the future, public monopolies are expected to operate in a more service-oriented manner. Thus, the integration of offerings from multiple service providers becomes a focal operating principle (Vargo et al., 2008; Janssen et al., 2009). Public organizations need to orchestrate their sources of supply in the new operating environment when they operate as the core actor of the service provision network (Vargo et al., 2008). Managing such trends is a topical issue in European countries in multiple spheres of authority. However, mechanisms for the controllable creation of private market offerings are still obscure, which may lead to a significant risk of opportunism.
3. Smart homes as service platforms
A good example of a service platform is the smart home. We use the smart home concept, together with two different service models implemented on that platform, to explain the role of platforms in service networks. The service models used are:

1. The smart energy networks (smart grids)
2. The intelligent medical management concept
At first these services seem to have nothing in common, but as they are both provided to people's homes and need an ICT operating system with data network integration, they start to link together. Before analyzing the service concepts, we define the concept of the smart home and analyze the key driving forces of the emerging smart home business.
The smart home has been seen as a potential solution for cutting the costs of health care and energy in modern societies by increasing the efficiency of services and empowering people to take part in service creation (Chan et al., 2008; Skubic et al., 2009). However, cutting costs is not the only advantage brought about by the technology; it also enhances the comfort and well-being of people in general (Skubic et al., 2009). The main driving factors for the growing interest in smart homes are the rising costs of health care and energy. We use the Finnish market as an example of the recent developments in costs and market development.
The most urgent issue in Finnish health and social care is rapidly rising costs, caused by the aging of citizens and an inflexible service structure. We claim that critical issues in the service structure are the lack of a solid view of the service needs of aging citizens, the lack of a reasonably designed service infrastructure and the missing discussion between specialists in different sectors. In Finland, the number of aging citizens grew from 780 000 in 2001 to 880 000 (13%) in 2007. The growth of the older age segments has been faster than the average growth of the population, which has increased the proportion of the over-65 age segment from 15.2% to 16.5% of the population. In the same period (2001-2007), the expenses of elderly care grew by 35%, from €1 157 million to €1 492 million, even though the growth of demand was 13%; the cost growth thus significantly exceeds the changes in the aging population, the growth of demand and the rate of inflation.
The second issue is the rising expenses of medical care. Medical expenditure in the Finnish health care system in 2007 was nearly €2 billion, of which prescription pharmaceuticals for outpatients amounted to €1.6 billion, over 70% of the total expenditure on medical care (National Institute for Health and Welfare, 2009; Statistics Finland, 2009). The growth in this expenditure has been significant. The medical expenses presented here do not reveal the whole extent of the latent problems, because administration, logistics and other indirect cost categories are not included in the figures. It is notable that a great amount of the growth is concentrated in the prescription drugs of outpatients, who are the potential users of novel technologies. Therefore, health care actors are calling for new solutions for medical care management, creating attractive potential for offerings which improve on present medical care management.
Another issue driving the development of smart homes is the European Union. EU legislation drives the market towards smart metering and smart grid solutions. The aim of the EU is to empower consumers to participate in both saving energy and producing energy. Energy and network providers are also seeking new business opportunities in the emerging smart grid technology. But there is also consumer demand for new electricity-saving technologies and services. Energy prices have been rising all over the world. In Finland the electricity price (EUR/kWh) grew from 0.76 in 2001 to 0.98

in 2007 (29%) (Statistics Finland, 2009). The new services allow, for example, monitoring electricity consumption more closely and help to change consumption habits. More advanced services allow, for example, households to sell extra electricity back to the grid and thus help to balance total energy costs.
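As a rough illustration of the consumption monitoring on which such services build, the short Python sketch below is our own simplified example (the reading format, tariff and feed-in values are assumptions, not an actual metering standard or price); it turns hourly smart meter readings into a consumption summary and a net cost figure that accounts for electricity sold back to the grid.

# Each reading: (hour, kwh_imported_from_grid, kwh_exported_to_grid)
readings = [
    ("08:00", 1.2, 0.0),
    ("09:00", 0.9, 0.0),
    ("10:00", 0.4, 0.6),   # midday: local production exceeds consumption
    ("11:00", 0.3, 0.8),
]

PRICE_EUR_PER_KWH = 0.10        # assumed flat tariff, for illustration only
FEED_IN_EUR_PER_KWH = 0.06      # assumed feed-in compensation

imported = sum(r[1] for r in readings)
exported = sum(r[2] for r in readings)
net_cost = imported * PRICE_EUR_PER_KWH - exported * FEED_IN_EUR_PER_KWH

print(f"Imported: {imported:.1f} kWh, exported: {exported:.1f} kWh")
print(f"Net energy cost for the period: {net_cost:.2f} EUR")
for hour, kwh_in, kwh_out in readings:
    flag = "selling to grid" if kwh_out > kwh_in else "consuming"
    print(f"{hour}: {flag}")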
Regardless of the buzz around smart homes and ubiquitous solutions, no common definition of the business model exists at the moment. Smart homes can be approached from at least two views: the concepts are often defined either as intelligent solutions at home to support daily living, or as solutions whose primary purpose is to provide a comfortable life for residents in a home environment. Furthermore, some authors have provided more specific definitions regarding the features of the smart home concept:
• Any living or working environment that has been carefully constructed to assist people
in carrying out required activities. (Chan et al., 2008)
• Acquires and applies knowledge about the environment and its inhabitants in order to
improve their experience in that environment. (Cook & Das, 2007)
• Built entities in which various products and services interoperate by means of
Information & Communication Technologies (ICT) to constitute a product environment.
(Peine, 2009)
• Uses sensors and other devices and telecommunication features to enhance residents’
safety and monitor their health and overall well-being. (Demiris et al., 2008)
• Monitors the activities of the person within their own living environment along with how they interact with home automation devices; based upon these interactions and their current sequence of activities, the ambient environment can be controlled and adapted to provide an improved living experience for the person. (Nugent et al., 2008)
By definition, the smart home concept should be considered a bundle of technologies, services, and information and service provision resources which constitutes an intricate environment, i.e. a value network of firms with different resources which provides value for their common customer. We approach the topic from the perspective of service-product offerings which improve security at home, prevent loneliness by fostering social contacts, and support home care providers in achieving appropriate performance. A general construction of the studied concept is presented in Figure 3.
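To make the idea of the smart home as a shared service platform concrete, the following minimal Python sketch is our own illustration (the class, topic and service names are hypothetical and not taken from any existing smart home product): a single home gateway routes device events to independently registered services, here an energy reporting service and a medication monitoring service.

from collections import defaultdict

class HomeServicePlatform:
    """Minimal event bus: services subscribe to event topics from home devices."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def register(self, topic, handler):
        # e.g. topic = "meter.reading" or "medication.dispensed"
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

# Two otherwise unrelated services sharing the same platform and data network.
def energy_report(event):
    print(f"Energy service: {event['kwh']} kWh consumed at {event['time']}")

def medication_monitor(event):
    print(f"Care service: dose '{event['drug']}' taken at {event['time']}")

platform = HomeServicePlatform()
platform.register("meter.reading", energy_report)
platform.register("medication.dispensed", medication_monitor)

platform.publish("meter.reading", {"kwh": 1.2, "time": "08:00"})
platform.publish("medication.dispensed", {"drug": "aspirin", "time": "08:05"})

The only point of the sketch is that both service models consume home data through one integration layer rather than through separate, vendor-specific channels.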
So far, the smart home concept has generally been communicated to customers ambiguously. The marketing of smart homes has concentrated on the single functionalities and technical features of solutions, lacking a wider construction that provides benefits for the customer. Information gathering and sharing among the network of organizations involved in the service network will require significant renewal of the supporting infrastructure. Therefore, innovations should focus on the systems that integrate the services into the homes. The transformation of elderly care and energy services, however, requires adopting new capabilities for orchestrating operations in the future, as well as developing a broad home living concept that should be forged through co-operation among firms from various industries.
The services also need integrated marketing.


Fig. 3. Illustration of a general smart home construct (adapted from Chan et al., 2009)
4. Smart home service models opened
The emerging new industry and home-centred thinking enable several new business models and simultaneously challenge the old ways of doing business. Because the smart home as a service platform integrates several businesses together, the total number of different service models can be quite large. We have chosen two service concepts to demonstrate the nature of the services that can be integrated into this platform. We have used a process of future-oriented business mapping (see e.g. Immonen et al., 2010; Kytölä et al., 2011; Pynnönen & Kytölä, 2008), where first the key driving forces of the business are mapped to form plausible scenarios of the developments. The second step is to analyse the service elements and service models that are enabled by these scenarios. The third step is to combine these elements and models into service systems by opening up the actors, their relationships and their business models.
4.1 Smart energy metering
To open up the smart energy services, we conducted a future-oriented study among energy experts (Immonen et al., 2010). The primary aim of the study was to increase and harmonize understanding of the future challenges in the field of energy metering and related services. This study introduces the future-oriented analysis of smart energy metering services that use the smart home platform. The results are based on a group decision

process arranged with energy specialists. The most important drivers were proposed as
follows:
• Climate change and progressive demands for efficient use of energy
• Demand for increased functionality of electricity markets
• Distributed energy production and virtual power plant
• Advanced technologies – support for intelligent customer interfaces
• Increased use of energy and raised unit prices
The second target of the research was to go beyond the state of the art to define the key characteristics of future service systems. The idea was to find out the potential roles of installation and maintenance service providers and to increase understanding of the architectures of competitive service concepts. The collected ideas consist of both larger service systems and single services, but they also characterize the key resources and capabilities of services (see Immonen et al., 2010 for the full list of service ideas). A further analysis of the ideas reveals three groups of services which have a fairly unambiguous relation to the targets set by the regulator, and which will unavoidably have impacts on distribution network companies. The selected categories, on the other hand, create the most significant concerns for electricity distribution companies. The most remarkable service categories are:
• Reporting of energy consumption
• Guidance for consumers of energy
• Consumption control services
The recognised services will also challenge the distribution network companies in the future to develop appropriate models for merging the requisite functions or services into their routines. On the one hand, distribution network companies are capable of developing particular services locally with public support. On the other hand, energy metering services will not belong to the core functions of these companies, so the services would possibly be offered by specialised operators. The most important advantage of the latter option is that the services would be developed to genuinely meet customer needs, without the limitations of local monopolies. In any case, the service concepts will be outlined similarly, regardless of the production structure or the value network actors involved. The form and scope of smart energy metering services depend on three things: first, the decisions of distribution network companies; second, the incentives given by legislation; and third, the development of technical standards. Different forms of service concepts are outlined in the following sections, where the service ideas are analysed in the light of the optional scenarios.
4.1.1 Business scenarios of energy metering concept
The structure of the future business environment in the energy sector depends mainly on political decisions (sanctions, guides, standards, etc.) as well as on technology selection among the network companies. Government policy is an especially important factor, because distribution network companies operate in secured monopoly positions without the threat of substitutes, which leads to low bargaining power for customers and insensitivity to customer needs. Therefore, it is necessary to develop policies that reflect real customer preferences and protect customers against the misuse of monopoly positions.

The state of future service concepts and business models may depend on the actions of domestic and European regulators, and on the appropriate focus of economic support for service development (Strbac, 2008). Basically, two possible scenarios can be outlined for future business environments in the energy sector: (i) the market environment, which is incoherent and does not offer efficient service platforms and standardized technologies; and (ii) the purposefully regulated environment, where standards and system interfaces have been developed to support system integration, customer needs are recognized, service platforms offer wide support for flexible concepts and the regulator supports the creation of new service businesses (Kärkkäinen et al., 2006; Kirjavainen & Seppälä, 2007). The proposed scenarios are:
Scenario 1: Pessimistic view
• Technology and business models stay unconsolidated and business branches are driven by local monopolies.
Scenario 2: Optimistic view
• Advanced technologies, consolidated standards and open business networks become the dominant regime.
Scenario 1 represents a pessimistic forecast for the development of Finnish national and Nordic smart energy metering activities and the related service markets, which can come true if the recognized threats become dominant in the Nordic electricity industry. Metering systems are not harmonized, and the monopolistic behaviour of the distribution companies directs the development of the energy markets. The major reasons behind this development can be found in small distribution network companies which have no incentives to renew their network data systems due to the relatively high investments. At the same time, a lack of standards and uniform national system requirements hinders the development of metering technology and services. This leads to a situation where many parallel systems are utilized and network companies are in a risky lock-in relationship with their suppliers. On the other hand, the incoherence of technologies keeps unit prices at a high level and partially prevents the exchange of metering data between market actors (Kärkkäinen et al., 2006). Thus, future government actions in the Nordic countries have a critical role in developing the flexibility of the electricity markets.
Scenario 2 presents an optimistic view of the future developments in the electricity markets, created by decreasing the influence of the recognized threats and reinforcing the opportunities offered by intelligent metering. The main result of this scenario is a description of the competitive environment, where most of the obstacles to marketplace development and competition are removed. Thus, the following future states have been realized: the regulator has redefined standards, and the national system requirements for smart energy metering have been released, which enables harmonizing the systems and decreases problems at the interfaces. The focus of financial support also has a role in directing the development; renewing processes and utilizing purchased services in the network companies should be supported if metering service markets are to emerge. The harmonized technology platforms decrease the network companies' dependency on suppliers and lower the unit prices of smart energy metering because of the faster development of new solutions and more efficient technology markets. The development creates,

together with the renewal of operations, fertile ground for a growing service business, which is not bound to the local or national level but is an international business, where operators are able to implement generic service platforms.
System integration between smart energy metering and smart home automation systems is an important aspect of this scenario, because it enables a method to control energy consumption and intelligent solutions for energy saving among small consumers. The development gap there is rather wide, and the rules of competition differ radically between ICT and the energy sector, because the energy business is regulated while ICT companies compete in open markets, where end users determine the demands. Therefore, operators of smart home systems are the core resource when system integration is implemented.
4.1.2 Future value networks of smart energy metering
Optional value networks are constructed to achieve the requisite capabilities to perform actions related to specific services. It is expected that the requisite performance level of an actor plays a crucial role in market openings if industry evolution creates capability gaps for incumbent firms.
In practice, distribution network companies aim at long-term asset management strategies that rely on an assumption of stability in the industry. Improvements in the business processes and structures are mostly expected to be incremental. Indeed, the risk of dramatic changes in the industry is usually low because of the monopoly position of distribution network operators. Thus, firms have low dynamic capabilities, because their established positions only allow them to concentrate on incremental improvements. If requirements to radically reduce the service level occur in the industry, it may lead to significant structural changes and the emergence of new business branches. The final implications of energy metering for the industry architecture in energy distribution depend on the features which may be materialized in the service system.

Fig. 4. Expected value chain in Scenario 1
Scenario 1 can be seen as the continuation of the current market situation, and therefore the basic-level requirements for metering and guidance services will not require enormous investments. Therefore, Scenario 1 is not likely to lead the industry toward a reconstruction process, which indicates a strong position for the distribution network companies as solution developers. Distribution network companies are likely to build services that are outlined based on the needs of a local monopoly company. This means less

customer-oriented actions, and probably poor opportunities for external service providers to develop generic business platforms. It can be expected that new services emerge in the installation and maintenance of metering systems, which is current practice in distribution network companies, for instance in network construction.
Scenario 2 (consolidation) includes more radical change, in which both the physical infrastructure (meters, software and communication) and the method of customer service may transform so remarkably that prospects for new service providers in the field are created. The most important drivers of the described change are global service models, the authorities' aim at standardized technology platforms and the complex interconnection between home systems. Such development pressures distribution network companies to redesign their architectures, because the limited market areas of local monopolies tend to lead to financial limitations on investments in developing the requisite services.

Fig. 5. Expected value chain in Scenario 2
The most important architectural changes in the value chain may occur in the operating and maintenance activities of intelligent metering systems. New prospects emerge especially if some dominant platforms for the home system infrastructures are developed. New market potential opens up for actors that collect information from the integrated home system and deliver it for different purposes to multiple actors. In this case, electricity distribution companies may outsource fault alert transmission, energy saving controls (switching), systems control, and metering data gathering operations to specialized service providers. The divergence of metering operations (i.e. installation, data reading, maintenance, and customer support) may have a crucial role in the convergence of home technologies, because an external service provider is likely to develop generic platforms to gain revenues from global markets. Database services for energy metering may provide options if domestic authorities set unambiguous standards for stored information. On the other hand, metering information storage may hold information for which private ownership is not an appropriate approach. Private information sources may limit information sharing for guidance and consultation purposes, which has a significant role in steering overall energy efficiency. Here, supporting service providers, whether public or private, in developing information sharing platforms has a role, because a sustainable change in customer behaviour requires both technology platforms and accurate information to create personal incentives.

4.2 Intelligent medical management concept
There is a widely recognized problem in health care services: how to arrange services in a situation where unit prices are growing along with demand. To open up this problem, we conducted a large future-oriented study among health care specialists (Vanhala et al., 2011; Kytölä et al., 2011; Immonen et al., 2011). The primary aim of the study was to increase and harmonize understanding of the future challenges facing the healthcare sector and especially homecare services. This study introduces the future-oriented analysis of the service system of intelligent medical management, which also uses the smart home platform.
New solutions to these increasing problems have been sought in the innovative use of technologies to assist the elderly to live in their homes and avoid the need for institutionalization. One important part of the whole system is to rethink the pharmaceutical supply chain and home care: allowing the re-organization of the supply chains through regulatory changes and increasing the use of technologies to assist patients in home care.
The key drivers of market development in new health care services can be summarized in the following list (Vanhala et al., 2011; Kytölä et al., 2011; Immonen et al., 2011):
• Growing number of elderly people in western countries
• Advanced technologies – support for intelligent customer interfaces
• Increased demand for homecare services
• Rising unit costs of healthcare services
The problems in this sector cannot be solved by any single solution or service, but in home care services great saving potential can be seen, enabled by automating services. The service areas that this kind of integrated service concept can best serve are maintaining personal health and hygiene, maintaining social contacts and improving safety at home (Vanhala et al., 2011).
4.2.1 Business scenarios of pharmaceutical supply
In developed countries the pharmaceutical supply network is well regulated, and the participants and their roles are well known. This largely defines the environment in which to operate. As the regulation has remained quite unchanged, the roles and business models of the individual participants have stayed mostly the same since their formation: competition has mainly come from inside the value network, and there have not been pressures from outside the network. Some additional services have been formed to support the network, such as the information services that assist patients in their treatment.
The cost side is mainly driven by governments' will to decrease the costs, or at least to restrain their rise. The government also wants to keep control of the medicine supply, especially prescription medicines. On the other side, the customers want better services and the health care personnel strive to improve the monitoring of the patients' treatment. To fulfil the requirements and development needs of all the parties, the whole pharmaceutical supply network faces challenges, and all its participants try to match their offerings (individual business models) in the best possible way to fit the needs and the environment.

Moreover, the current model tries to be one-for-all, without paying any real attention to the customers and their needs. One of the issues with the current model is also that the available resources are put to the wrong use: for example, home care personnel driving around or a pharmacist waiting for the next customer are wasted resources inasmuch as they are not doing what they are trained for. Also, the technologies affecting the health care value network can easily be utilized in other countries as well. These factors create a good starting point for studying the scenario further, and we suspect there are many efficiency improvements to be gained by splitting and reorganizing the current system. The scenario axis in our analysis is regulation: either the current regulation holds and the market will see very few changes, or the regulation is changed to allow a better optimized network structure for pharmaceutical supply and treatment.
Scenario 1: Regulated monopolies
• The health care services are operated by government supported regional monopolies
and the pharmacies stay in control of the medical supply.
Scenario 2: Competition allowing regulation
• Competition is allowed in health care services and medical supply. Advanced technologies, consolidated standards and open business networks will change the business towards one driven more by customer needs.
Drugstores are currently in a central position in the material, information and cash flows of prescription pharmaceuticals. The current legislation limits the delivery of prescription pharmaceuticals to the shop (pharmacy) or to public health care professionals (home care), and also demands that either of the two deliver information to the customer about how to use the prescribed medicines and about their effects.
4.2.2 Future value networks of pharmaceutical delivery
We have mapped the networks in the form of value chains to show the structure of the business in the scenarios. The value network in scenario 1 follows the current situation in the Finnish market. The mapped business model of the intelligent medical management concept is shown in Figures 6 and 7 (see Immonen et al., 2011 for a detailed analysis of the services).

Fig. 6. The value chain of medical delivery in scenario 1
Pharmaceutical delivery in Finland is currently arranged basically through two concepts. The first is the traditional pharmacy and the other is the home care concept, which also relies on the pharmacies. The ultimate difference between these concepts is the logistics of getting the

medicine to the end-customer. In the pharmacy concept the end-user picks up the medicine, and in the home care concept a nurse delivers it to the customer's home. In the scenario 1 development path no radical changes in the industry can be foreseen. The industry seeks the leanest possible way to make profits, which usually means downsizing the services and therefore worse customer value.
Regulation that allowed competition and alternative operators in pharmaceutical delivery would create new business concepts. This is the situation in scenario 2: competition-allowing regulation. The analysis of intelligent medical management concepts (see Vanhala et al., 2011; Kytölä et al., 2011; Immonen et al., 2011) resulted in two plausible service models:
• Medicines through postal services
• Self-service medicine store
The main issue in the new concepts is how the services are executed: by utilizing automated services, enabled by technology, to replace most of the overlapping work of the health care professionals. In the Medicines through postal services concept, the medicines prescribed by the doctor(s) are packed by the dispenser into a personal package which is then delivered by secured mail directly to the customer. The package contains the different medications, whose compatibility is automatically checked before packaging, as well as technologies that link the package and its contents to information databases to guide the usage, including alerts and monitoring. The Self-service medicine store concept relies on similar technologies to assist and monitor the usage, but the package itself has to be picked up from a store or kiosk, where a vending machine recognizes the customer by an electronic ID, fetches the prescriptions from the database and dispenses the medicines to the customer. This can be operated, for example, by the pharmacies.

Fig. 7. The value chain of intelligent medical management
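The following Python sketch is our own simplified illustration of the automated steps described above (the prescription records, interaction list and function names are hypothetical, not part of any actual pharmacy or e-prescription system); it shows how a dispensing service might look up prescriptions by electronic ID and run a basic compatibility check before releasing a package.

# Hypothetical data: prescriptions per customer ID and known drug interactions.
PRESCRIPTIONS = {
    "FI-1234": ["warfarin", "bisoprolol"],
    "FI-5678": ["ibuprofen", "warfarin"],
}
KNOWN_INTERACTIONS = {frozenset({"warfarin", "ibuprofen"})}

def check_compatibility(drugs):
    """Return the list of interacting drug pairs found in the package."""
    conflicts = []
    for i, a in enumerate(drugs):
        for b in drugs[i + 1:]:
            if frozenset({a, b}) in KNOWN_INTERACTIONS:
                conflicts.append((a, b))
    return conflicts

def dispense(customer_id):
    drugs = PRESCRIPTIONS.get(customer_id)
    if not drugs:
        return f"{customer_id}: no valid prescriptions on file"
    conflicts = check_compatibility(drugs)
    if conflicts:
        # In the described concept this would be escalated to a pharmacist.
        return f"{customer_id}: dispensing blocked, interactions found: {conflicts}"
    return f"{customer_id}: package released with {drugs}, usage guidance linked"

print(dispense("FI-1234"))
print(dispense("FI-5678"))

The point is only that the routine checking and record keeping can be automated, while exceptional cases would still be routed to a pharmacist or other professional.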
These new concepts create possibilities for new businesses which utilize information and knowledge storage and sharing through different communication methods. To make the concepts possible at all, there are also opportunities for system providers, which need to build the systems for these purposes, replacing the old ones and connecting them all into a working concept. The most threatened single business in this scenario seems to be the

traditional pharmacy, as many of its current tasks and functions are either shared with or moved to other operators or businesses. One such business could be a medicine information service that provides customers with a 24-hour service concerning the medication they are using. It is still likely that traditional pharmacies will remain in the future as well, but they need to reconsider their role within the network and make the needed adjustments, e.g. improve their efficiency by concentrating on their core functions, to stay in business in the longer term.
5. Conclusion
This chapter opens up the emerging business area of home-centred services. In particular, we focus on smart homes as service platforms, and we use health care and energy services as examples of services that can use that platform. This study contributes mainly to the service management literature (e.g. Vargo & Lusch, 2008).
The smart home framework revealed links between conventionally distant business areas. Therefore, assessing the actor networks of the smart home business was challenging, because the available databases did not support the research. The business networks were studied from the Finnish perspective using public information available on the internet sites of authorities and firms. We also used expert panels to validate the service mapping process. The analysis showed that public organizations dominate the service markets, leaving the private markets very fragmented. The smart home technology market is in an emergent state and no clear key actors exist at present. Current public procurement policy, which directs purchasing toward specific solutions and low price without consideration of spill-over effects into the supplying industries, was suggested as one reason for the fragmentation. The public actors (i.e. health care organizations, energy companies, local and domestic authorities, and regulators) have an important role in driving the consolidation of smart home business networks. Longer-term partnerships with service providers for creating key suppliers, translating user needs into service and product specifications, and appropriate standardization of technologies are the means to increase the competitiveness of the service and solution markets. The key implication is that especially national regulation and funding of technology development should be refocused from the development of specific technologies to integrated platforms, which support the diffusion of both home systems and related service businesses.
Governments are in the position to adjust regulation and subsidies towards the chosen objectives. In most industrialized countries, prescription pharmaceuticals are controlled by governments all the way from manufacturing to consumption and even disposal. This affects the competitive dynamics of the whole value network and may even freeze competition, allowing the overall costs of national health care and energy supply to rise; these costs fall on taxpayers if subsidized, as in many European countries. Therefore, governments should investigate the different possibilities for arranging the electricity and pharmaceutical supply, as well as for subsidizing the use of new technologies to assist and control service provision, in order to keep the increasing costs under control.
Allowing the restructuring of the pharmaceutical supply network would radically change the current dynamics within the network. Each individual participant of the network, and its own business model, would need to adapt to the new situation. The companies within the network need the capabilities to take advantage of the emerging opportunities. These

dynamics open up possibilities for new entrants into the network, which fill the roles required for the whole concept to work. Examples are postal service providers, which possess the capabilities to transform their services to fit the traceability and security requirements, as well as information technology integrators, which can turn their existing capabilities to building the information systems for the assistance and monitoring of treatment. The old participants need the capabilities either to transform their former roles into new ones within the network, e.g. from a wholesaler to a dispenser, or to specialize further, e.g. from a pharmacy to medicine and treatment information services.
The smart home platform allows the management of different services, and energy services are a good example of this. An environment in which service contents are defined on the basis of end-user needs drives radical changes in the energy metering, consumption control, and maintenance and fault situation management businesses. However, the role of the public sector and authorities should be analyzed carefully, because it may have impacts on the functionality of the service market in the long term. This concerns especially metering data storage and other end-user information. Speculatively, a public owner of metering data may enable more open structures in markets in which the private sector utilizes the gathered data for service operations. Otherwise, fragmentation of user information may prevent the successful implementation of optimization, monitoring and guidance services for energy consumption. Indeed, privatized metering data storage may lead to a situation in which a single firm creates strong barriers to competition. In general, closed systems presumably lead to higher prices, low functionality, and low diffusion of energy saving services because of the threat of customer lock-in, which is an important obstacle to market emergence.
The Finnish energy industry has recently launched research programmes on intelligent power grids in which energy metering has its role. In these programmes energy metering is researched from techno-economic perspectives in which the specific needs of energy distribution are pivotal. Linkages between specific metering service structures and general intelligent home concepts are probably not a focal point, because the current research drives monopoly-oriented services. Thus, risks of inappropriate system designs and fragmented information exist from the customers' point of view.
Generally, in this kind of future-oriented studies utilising different group decision and expert panel methods, the results cannot be generalized as such. The processes are repeatable, but the results depend on the context and the respondents, which limits the reliability of the study. To increase the reliability of the results, further research should utilize, for example, pilot platforms and services and study the services in operation.
6. References
Ancarani, A. 2009, "Supplier evaluation in local public services: Application of a model of
value for customer", Journal of Purchasing and Supply Management, vol. 15, no. 1, pp.
33-42.
Aschhoff, B. & Sofka, W. 2009, "Innovation on demand—Can public procurement drive
market success of innovations?", Research Policy, vol. 38, no. 8, pp. 1235-1247.
Chan, M., Estéve, D., Escribe, C. and Campo, E. 2008, A review of smart homes – Present
stage and future challenges. Computer Methods and Programs in Biomedicine, Vol 91
No 1 pp. 55-81.

Cook, D., Das, S. 2007, ” How smart are our environments? An updated look at the state of
the art”, Pervasive and Mobile Computing, Vol 3 No 2 pp. 53-73.
Demiris G., Oliver D.P., Dickey G., Skubic M. & Rantz M. 2008, ”Findings from a
participatory evaluation of a smart home application for older adults” Technology &
Health Care, No 2. pp. 111-118.
Edler, J. & Georghiou, L. 2007, "Public procurement and innovation—Resurrecting the
demand side", Research Policy, vol. 36, no. 7, pp. 949-963.
Fließ, S. & Kleinaltenkamp, M. 2004, "Blueprinting the service company: Managing service
processes efficiently", Journal of Business Research, vol. 57, no. 4, pp. 392-404.
Immonen, M., Pynnönen, M., Partanen, J. and Viljainen, S. 2010, “Mapping future services: a
case on emerging smart energy metering business”, International Journal of Business
Innovation and Research. Vol. 4, No. 5, pp. 491-514.
Immonen, M., Pynnönen, M. & Kytölä, O. 2011, “Strategic management of forest industry
transformation” International Journal of Strategic Change Management, Vol. 3, No 1/2,
pp. 16-31.
Janssen, M., Joha, A. & Zuurmond, A. 2009, "Simulation and animation for adopting shared
services: Evaluating and comparing alternative arrangements", Government
Information Quarterly, vol. 26, no. 1, pp. 15-24.
Kärkkäinen S., Koponen P., Pihala H. 2006, Research Report no. VTT-R-09048; Sähkön
pienkuluttajien etäluettavan mittaroinnin tila ja luomat mahdollisuudet, Technical
Research Centre of Finland, available at
[http://www.vtt.fi/inf/julkaisut/muut/2006/VTT-R-09048-06.pdf] (in Finnish).
Kirjavainen M. and Seppälä A. 2007, Sähkön pienkuluttajien etäluettavan mittaroinnin
päivitetty tila, Ministry of employment and the Economy of Finland, available at
[http://julkaisurekisteri.ktm.fi/ktm_jur/ktmjur.nsf/all/927C86500EBADA69C225
73B700442F0A?opendocument] (in Finnish).
Kytölä, O., Pynnönen, M. and Immonen, M. 2011, “Future Medical Supply – Challenges for
Business Concept Formation”, International Journal of Business Innovation and
Research, 2011, Vol. 5, No. 5. pp. 493-509.
Lusch, R.F. & Vargo, S.L. 2006, “Service-dominant logic: reactions, reflections and
refinements”, Marketing Theory, Vol. 6, No. 3, pp. 281–288.
Lusch, R.F., Vargo, S.L. & O’Brien, M. 2007, "Competing through service: Insights from
service-dominant logic", Journal of Retailing, vol. 83, no. 1, pp. 5-18.
National Institute for Health and Welfare 2009, Web site. Available:
http://www.thl.fi/en_US/web/en [2009, 1/28] .
Nugent C.D., Finlay D.D., Fiorini P., Tsumaki Y., Prassler E. 2008, ”Home Automation as a
Means of Independent Living”, IEEE Transactions on Automation Science and
Engineering, No 1. pp. 1-9.
Peine, Alexander. 2009, ”Understanding the dynamics of technological configurations: A
conceptual framework and the case of Smart homes”, Technological Forecasting and
Social Change. Vol 76 No 3 pp. 396-409.
Pynnönen, M. & Kytölä, O. 2008, “From business concept innovation to a business system: a
case study of a virtual city portal”, International Journal of Business Innovation and
Research, Vol. 2, No 3, pp. 314- 329.
Pynnönen, M., Ritala, P. and Hallikas, J. 2011, “The New Meaning of Customer Value: a
Systemic Perspective”, Journal of Business Strategy, Vol. 32. No.1. pp. 51-57.

New Technologies – Trends, Innovations and Research 258
Skubic M., Alexander G., Popescu M., Rantz M. & Keller, J. 2009, ”A smart home application
to eldercare: Current status and lessons learned” Technology & Health Care, No 3.
pp. 183-201.
Statistics Finland. Web site. Available: http://www.stat.fi/index_en.html [Accessed 2009,
1/28]
Strbac, G. 2008, "Demand side management: Benefits and challenges", Energy Policy, Vol. 36,
No. 12, pp. 4419-4426.
Tadelis, S. 2007, "The Innovative Organization: Creating Value Through Outsourcing",
Californian Management Review, vol. 5, no. 1, pp. 261-277.
Vanhala, A., Immonen, M. and Pynnönen, M. 2011, “Developing an assistive service offering
for aging citizens”, Innovative Marketing, Vol. 7, Issue 2, pp. 71-80.
Vargo, S.L. & Lusch, R.F. 2008, "From goods to service(s): Divergences and convergences of
logics", Industrial Marketing Management, vol. 37, no. 3, pp. 254-259.
Vargo, S.L., Maglio, P.P. & Akaka, M.A. 2008, "On value and value co-creation: A service
systems and service logic perspective", European Management Journal, vol. 26, no. 3,
pp. 145-152.
Walker, H., Knight, L. & Harland, C. 2006, "Outsourced Services and ‘Imbalanced’ Supply
Markets", European Management Journal, vol. 24, no. 1, pp. 95-105.
Part 8
Speech Technologies

Recent Progress in Development of Language
Model for Slovak Large Vocabulary
Continuous Speech Recognition
Jozef Juhár, Ján Staš and Daniel Hládek
Technical University of Košice
Slovakia
1. Introduction
Speech technologies have the potential to simplify human-machine interaction as well as
communication between people. The use of speech technology applications is nowadays growing
continuously. Each speech recognition system, which stands at the heart of every speech
application, is, apart from its algorithmic complexity, strongly language dependent.
Therefore, one of the challenging tasks in the development of a Slovak large vocabulary
continuous speech recognition (LVCSR) system is the creation of an efficient language model (LM).
Developing a language model for Slovak, which belongs to the group of highly inflective
languages, is more laborious than creating an English language model. The first reason is
that the Slovak language is characterized by a relatively free word order in sentences,
which leads to the problem of data sparseness in the text data used for training
language models (LMs). The second reason is the inflection in the language itself: its rich
morphology leads to a vocabulary several times larger than in English. Therefore, the
amount of text data needed to cover the Slovak language statistically well is substantially
higher.
Contemporary modeling of the Slovak language builds on experience with modeling the
related Slavic languages, such as Czech, Polish, Serbo-Croatian or Russian (Nouza
et al., 2010). From a statistical point of view, Slovak is very similar to Czech,
especially in how words are formed into sentences and how sentence semantics is determined. In
contrast, from a linguistic point of view, mainly in the phenomena of inflection and assimilation
of voicing, Slovak is closer to Polish. Therefore, it is appropriate to constrain statistical
language modeling with linguistic knowledge as well.
This chapter describes the results of Slovak language model development for a domain-specific
judicial LVCSR task and for broadcast news transcription. During this process, we have
coped with several problems in text preprocessing, in selecting the basic statistical methods
used in modeling other similar languages, and in adapting the models to the area of application.
Another important part of Slovak language modeling has been the optimization of the
resulting model, which introduced phonetic and linguistic relations between words. These
optimization steps have improved the quality of our LM as well as the recognition
accuracy of the LVCSR system itself.
This chapter is organized as follows. Section 2 introduces the process of gathering and
preprocessing the text corpora used in training LMs. Section 3 describes the process of creating
a vocabulary of the Slovak language. In Section 4, the selection of an appropriate smoothing
technique, a method for adaptation to the given domain, and an optimal pruning algorithm are
presented. Some proposed optimization approaches in modeling the Slovak language are
summarized in Section 5. Section 6 presents the setup of the Slovak LVCSR system used in a
real task of domain-oriented speech recognition. The experimental results are summarized in
Section 7, and Section 8 closes the chapter with a discussion.
2. Text data and preprocessing
Smaller languages of Eastern Europe, such as Slovak, can be considered under-resourced,
because they usually suffer from a lack of audio databases and linguistic resources. The main
prerequisite in creating an effective LM for any language is therefore to collect and
consistently process a large amount of text data entering the LM training process. For this
purpose, we have proposed an automatic system, called webAgent (Hládek & Staš, 2010a), which
retrieves text data from various web pages written in the Slovak language. Moreover, the text
gathering system is able to detect the character encoding of a given web page, to collect links
to other web pages, and to retrieve text data from DOC (MS Word), RTF or PDF documents as well.
Before training LMs, it has been necessary to transform the text data into pronunciation form.
These text preprocessing steps include: (a) word tokenization, (b) text normalization, (c) sentence
segmentation and (d) filtering of grammatically incorrect sentences (Hládek & Staš, 2010b).
The most important preprocessing operation is text normalization, for which the following
rules have been proposed (an illustrative sketch follows the list):
• each sentence is placed on exactly one line;
• all words were mapped to lowercase;
• all numerals (cardinal, ordinal, dates, mathematical items and others) were replaced by
their pronunciation form according to their surrounding context;
• compound words and numerals were split into their separate forms;
• selected frequent abbreviations, acronyms and names of titles were expanded into their
pronunciation form according to their surrounding context;
• numbered and alphabetical indents were transcribed into their pronunciation form;
• in judicial documents, hidden proper nouns and named entities, such as names, surnames,
and names of streets and cities, were detected and replaced according to their surrounding
context using our proposed automatic generator of named entities;
• words with emphasized inter-character spaces were unified;
• all punctuation marks and symbols were replaced by their pronunciation form;
• spelled items were mapped to uppercase for better separation, and their uniform
phonetic transcription was determined;
• hyperlinks and email addresses were excluded from the text corpora.
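As an illustration of rules of this kind, the minimal Python sketch below lowercases tokens and expands a few hypothetical numeral and abbreviation mappings. It is only a toy approximation of the context-dependent normalization described above, and it simply drops punctuation instead of verbalizing it.

```python
import re

# Toy mappings for illustration only; the real system uses context-dependent rules.
NUMERALS = {"1": "jeden", "2": "dva", "3": "tri"}
ABBREVIATIONS = {"atď.": "a tak ďalej", "č.": "číslo"}

def normalize_sentence(sentence):
    """One sentence per line, lowercased, with numerals and selected abbreviations expanded."""
    tokens = []
    for token in sentence.strip().split():
        token = token.lower()
        token = ABBREVIATIONS.get(token, token)     # may expand to several words
        for part in token.split():
            part = NUMERALS.get(part, part)
            part = re.sub(r"[.,;:!?()\"]+", "", part)  # simplified: drop punctuation marks
            if part:
                tokens.append(part)
    return " ".join(tokens)

print(normalize_sentence("Súd č. 2 rozhodol atď."))  # -> "súd číslo dva rozhodol a tak ďalej"
```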
When preprocessing the domain-specific text data from the field of judicature, we also had to
resolve the problem of transcribing a large number of specific abbreviations and numerals
(Staš et al., 2010b). Normalized documents are then stored in a relational database based
on PostgreSQL, along with their titles, the URIs of the web pages, and the names of the sources
where they were published. It should be noted that the database is closely associated with the
text gathering system. When text data are inserted into the database, a duplicity check is
performed. At present, we are dealing with a text corpus of about 1.9 billion tokens in more
than 100 million sentences. The text corpus is divided into several domain-related
sub-corpora (see Table 1).

data set       text corpus         # sentences       # tokens
training       web corpus           54 765 873    946 958 508
               broadcast news       33 804 173    590 274 484
               judicial corpus       9 135 908    258 131 635
held-out       broadcast news        3 455 523     53 046 071
               judicial domain       1 782 333     55 163 941
annotations    broadcast news          124 733        925 912
               judicial domain         319 419      3 197 469
together                           103 387 962  1 907 698 020
Table 1. Statistics of the text corpora
It should be noted that, for filtering out grammatically incorrect words, we have used our
spell-check lexicon, created by merging available Open Source dictionaries such as aspell,
hunspell and ispell (sk-spell, 2010) with lists of proper nouns, geographical items and various
named entities available on the Internet. The size of our lexicon for spell-checking is about 1.25
million unique words (Staš et al., 2011a).
3. Vocabulary
The vocabulary used in language modeling was selected from the collected text
corpora using standard methods based on the most frequent words in the training corpora
and a maximum likelihood approach (Venkataraman & Wang, 2003) for selecting domain-specific
words from the field of judicature. The vocabulary was then extended with a number of names
and surnames, geographical items, names of various institutions and some other named entities
in the Slovak Republic, as can be seen in Table 2.
description                                      # words
348k base vocabulary                             348 255
names         female (inflected forms)             1 060
              male (inflected forms)                 824
surnames      female (inflected forms)            55 774
              male (inflected forms)              82 388
name          geographical items                  22 050
entities      names of institutions                2 331
              legal terms                          2 548
              multiword expressions                3 000
Table 2. Vocabulary
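The frequency-based part of this selection can be illustrated with a minimal sketch; the maximum likelihood selection of domain-specific words (Venkataraman & Wang, 2003) and the manual checking by linguistic experts are not reproduced here.

```python
from collections import Counter

def select_vocabulary(sentences, max_size=348_000):
    """Pick the most frequent word types from the training corpora."""
    counts = Counter()
    for s in sentences:
        counts.update(s.split())
    return [w for w, _ in counts.most_common(max_size)]
```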
We have also proposed an automatic tool for generating inflected word forms of names and
surnames, which were used in modeling of the Slovak language (a) in the on-line dictation
LVCSR system as an independent model of names and surnames, and later (b) in modeling names
and surnames using word classes conditioned by their grammatical category.
We have found that, with the currently available text data, the optimal results in modeling the
Slovak language are achieved with a vocabulary size of about 100-150 thousand words for
domain-specific tasks and about 300-350 thousand words for general-domain speech recognition.
It should be noted that all words in the vocabulary were manually checked and corrected by
linguistic experts.
4. Statistical modeling of the Slovak language
In the following sections, selected methods such as smoothing, adaptation, combination and
pruning are summarized. The most suitable algorithms were later used in training the
reference Slovak language model, as described in Section 6.1.
4.1 Language model
In general, a language model determines the probability of a sequence of words as well as of
individual words, which helps the decoder to find the most probable sequence of words
corresponding to the acoustic information pronounced by the user. Contemporary
language modeling is based on the use of n-grams, which capture the statistical
dependency between n consecutive words.
Formally, the main aim of the n-gram model is to determine the a priori probability P(W) of a
sequence of words W = {w_1 w_2 ... w_n} and to provide the quickest and most exact
estimation of this sequence in the decoding process of an LVCSR system. This probability
can be defined as follows:

P(W) = P(w_1 w_2 \dots w_n) = \prod_{i=1}^{n} P(w_i \mid w_1 w_2 \dots w_{i-1}),    (1)

where P(w_i | w_1 w_2 ... w_{i-1}) is the conditional probability of the word w_i given its history
{w_1 w_2 ... w_{i-1}}. This decomposition allows the LVCSR system to recognize a sequence of
words as it is being pronounced and to determine the probability P(W) for the search strategy
in the decoding process gradually.
The main advantage of using n-gram models in LVCSR lies in the relatively easy computation of
their probability estimates, based on the relative occurrence counts of words or word sequences
in the training data set using the maximum likelihood approach (Jurafsky & Martin, 2009;
Manning & Schütze, 1999).
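As a minimal illustration of this maximum likelihood estimation (a generic sketch, not the SRILM-based implementation used later in the chapter), relative-frequency n-gram estimates can be computed from counts as follows.

```python
from collections import defaultdict

def ml_ngram_probs(sentences, n=3):
    """Maximum likelihood n-gram estimates: P(w | h) = c(h, w) / c(h)."""
    ngram_counts = defaultdict(int)
    history_counts = defaultdict(int)
    for sentence in sentences:
        tokens = ["<s>"] * (n - 1) + sentence.split() + ["</s>"]
        for i in range(n - 1, len(tokens)):
            history = tuple(tokens[i - n + 1:i])
            ngram_counts[history + (tokens[i],)] += 1
            history_counts[history] += 1
    return {ng: c / history_counts[ng[:-1]] for ng, c in ngram_counts.items()}

# Usage: probs[("<s>", "<s>", "okresný")] gives P("okresný" | <s> <s>)
probs = ml_ngram_probs(["okresný súd rozhodol", "súd rozhodol takto"])
```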
4.2 Smoothing
As mentioned earlier, to deal with the problem of data sparseness, re-estimation
methods such as discounting, interpolation or backing-off, collectively called smoothing, are
used in statistical language modeling (Jurafsky & Martin, 2009).
Since a speaker can also pronounce a sentence that does not occur in the training data set,
the probability of such an event would be zero. The problem of zero probabilities, which lead
to errors in recognition, is therefore resolved by smoothing the language model. Smoothing
redistributes part of the probability mass of observed n-grams among n-grams that are not
observed in the training data set. Nowadays, several different smoothing techniques exist,
such as additive Add-One or Add-δ smoothing, the Ristad natural law, Good-Turing estimation,
the Katz back-off model, absolute and linear discounting, the Witten-Bell model
(Manning & Schütze, 1999), or Kneser-Ney smoothing and its modifications, which use n-gram
counts or counts of these counts when computing the discounting constants (Chen & Goodman, 1996).
We observed that, among all smoothing techniques for modeling the Slovak language, the
following algorithms produce the best results:
• the Katz model for smoothing LMs trained on small text corpora (approximately hundreds of MB);
• the modified Kneser-Ney algorithm for smoothing LMs trained on large text corpora
(approximately tens of GB);
• Witten-Bell smoothing for modeling the Slovak language from text corpora with a more
regular sentence structure.
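To make the idea of redistributing probability mass concrete, the following sketch implements interpolated Witten-Bell smoothing for bigrams. It is an illustration only; the chapter's LMs were built with the SRILM toolkit, and the lower-order distribution here is a simple add-one unigram.

```python
from collections import defaultdict

def witten_bell_bigram(sentences):
    """Interpolated Witten-Bell smoothing for bigrams:
       P(w | h) = (c(h, w) + T(h) * P_uni(w)) / (c(h) + T(h)),
       where T(h) is the number of distinct word types seen after history h."""
    bigram = defaultdict(int)
    hist = defaultdict(int)
    word = defaultdict(int)
    followers = defaultdict(set)
    total = 0
    for s in sentences:
        tokens = ["<s>"] + s.split() + ["</s>"]
        for h, w in zip(tokens, tokens[1:]):
            bigram[(h, w)] += 1
            hist[h] += 1
            word[w] += 1
            followers[h].add(w)
            total += 1
    vocab_size = len(word)

    def p_uni(w):
        # add-one smoothed unigram as the lower-order distribution
        return (word[w] + 1) / (total + vocab_size)

    def prob(w, h):
        t = len(followers[h])
        if hist[h] == 0:
            return p_uni(w)
        return (bigram[(h, w)] + t * p_uni(w)) / (hist[h] + t)

    return prob

p = witten_bell_bigram(["krajský súd rozhodol", "súd rozhodol"])
print(p("takto", "rozhodol"))   # nonzero probability even for an unseen bigram
```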
4.3 Adaptation and combination
In the process of enhancing the performance of the LVCSR system, language model
adaptation (LMA) plays an important role in domain-specific speech recognition. The
basic idea of LMA is to use a small amount of domain-specific text data to adjust the LMs, so as
to reduce the impact of language differences between the training and testing text data, and to
set the parameters of independent topic-dependent LMs so that they correspond as closely as
possible to the real conditions of the LVCSR application. LMA takes into account not only the
statistical dependencies between words in the given language, but also the frequency of word
occurrences, the structure of the text data and further additional information that usually comes
from the fields of linguistics and phonology (Staš et al., 2010a).
LMA is usually performed by combining several (different) topic-dependent LMs, where an
adaptation text (held-out data set) is used for adjusting the parameters of these LMs. In recent
years, many different techniques have been designed for adapting and combining LMs,
including maximum a posteriori (MAP) approaches such as count merging and linear, log-linear
or generalized linear interpolation (Gao et al., 2006; Hsu, 2009), and discriminative methods
such as LMA based on minimum discriminative information, boosting and perceptron algorithms, or
the minimum sample risk method (Gao et al., 2006), which stem from the maximum entropy approach.
We have observed that algorithms producing significant results for strongly statistically
dependent languages such as English do not bring notable improvement in
modeling the Slovak language. Based on a detailed analysis of the experimental results of
methods for adapting and combining LMs published in (Staš et al., 2010a), we also concluded
that linear interpolation, or its generalized alternative, is more than sufficient for the Slovak
language, and that the interpolation weights should be adjusted using the expectation-maximization
(EM) algorithm by minimizing perplexity on the held-out data set.
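A minimal sketch of this EM weight estimation is shown below; it assumes the component LMs are already available and that each held-out token has been scored by every component.

```python
def em_interpolation_weights(component_probs, n_iter=20):
    """Estimate linear-interpolation weights that minimize perplexity on held-out data.

    component_probs: list over held-out tokens; each entry is a list with the
    probability assigned to that token by each of the K component LMs.
    """
    k = len(component_probs[0])
    lambdas = [1.0 / k] * k                        # start from uniform weights
    for _ in range(n_iter):
        expected = [0.0] * k
        for probs in component_probs:
            mix = sum(l * p for l, p in zip(lambdas, probs))
            for i in range(k):
                expected[i] += lambdas[i] * probs[i] / mix   # posterior of component i
        lambdas = [e / len(component_probs) for e in expected]
    return lambdas

# e.g. three tokens scored by two domain LMs (web corpus vs. judicial corpus)
weights = em_interpolation_weights([[0.01, 0.04], [0.002, 0.03], [0.05, 0.01]])
```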
4.4 Pruning
Typically, an uncompressed LM of a highly inflective language is comparable in size to the text
data on which it has been trained. To build LMs for real-time applications, it is
necessary to limit the size of the resulting LM. In highly inflective languages, using a
large vocabulary increases the number of n-grams in the LM that occur in the training
set just once or twice and that do not have a big impact on the quality of the LM or the accuracy
of the recognition system. Therefore, these n-grams can be excluded from the LM by pruning.
Several criteria for pruning LMs exist. To create an efficient and compact model of the
Slovak language for use in a real-time LVCSR application, we observed the influence of the
following pruning methods on the quality of the LM: (a) count cutoffs, (b) the weighted difference
method (Seymore & Rosenfeld, 1996), and (c) pruning based on relative entropy (Stolcke, 1998).
We found that relative entropy-based pruning achieved the best results.
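As a rough illustration of criterion (b), the following simplified sketch (not the exact formulation of Seymore & Rosenfeld, 1996) keeps a trigram only when its count-weighted gain in log probability over the lower-order estimate exceeds a threshold.

```python
import math

def prune_weighted_difference(trigram_counts, trigram_prob, backoff_prob, threshold=5.0):
    """Keep a trigram only if count * (log P(w|u,v) - log P(w|v)) exceeds the threshold.

    trigram_prob, backoff_prob: dicts mapping the trigram (u, v, w) to its smoothed
    trigram probability and to the lower-order (bigram) probability, respectively.
    """
    kept = {}
    for ngram, count in trigram_counts.items():
        gain = count * (math.log(trigram_prob[ngram]) - math.log(backoff_prob[ngram]))
        if gain >= threshold:
            kept[ngram] = trigram_prob[ngram]
    return kept
```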
5. Model optimization
Several different techniques and principles have been used and proposed in order to obtain an
efficient model of the Slovak language for off-line and on-line speech recognition. These
so-called optimization techniques, which combine statistical, linguistic and phonetic
principles and practices, increase the quality of the language models, decrease errors in the
LVCSR system, and improve the usability of these models in real conditions of speech recognition
in Slovak. They are described in the following sections.
5.1 Spelling pronunciation
One of the main problems in speech recognition, with a significant influence on the
overall recognition result, is how to implement the best phonetic transcription of the words
contained in the dictionary. The transcription of words from the orthographic to the orthoepic form
also concerns words such as abbreviations or acronyms that are usually spelled character-by-character,
for example IBM, PhD., P. O. Box, etc. It was necessary to unify these events, to define
their transcription into the Slovak phonetic alphabet (Cerňak et al., 2003), and to assign them
all possible pronunciation variants. For the Slovak language, we detected about 620
abbreviations and acronyms (510 alternative pronunciations) in the text corpora mentioned in
Section 2 and manually modified their transcription according to the linguistic rules used in the
Slovak language.
5.2 Modeling of noise events
Spontaneous speech is also characterized by various non-speech sounds or expressions which
are mainly generated by the speaker or the surrounding environment. When analyzing the
hypotheses obtained at the output of our dictation LVCSR system, we encountered a relatively
large number of mistakes at the beginning of speech or after a long pause, in situations where
the speaker paused, coughed, smacked their lips, etc. We therefore decided to explore ways of
modeling these so-called noise events in Slovak language modeling without knowing their
occurrences in the training data set and without falsely increasing the estimates of their
probabilities. Since the locations of the noise events are usually tagged by annotators with
special tags during the transcription or annotation of speech recordings into text, we decided to
include these annotations of speech recordings, with their noise tags, in the process of training
LMs and thus to model the Slovak language using selected noise events as well.
First, we had to map all noise tags contained in the annotations into five groups: (a) short pause,
(b) long pause, (c) filled pause, (d) background noise and (e) speaker noise (Staš et al., 2010b);
these were later included in the dictionary and used in language modeling. It is important to
note that, after recognition, these noise events appear in the output as transparent words.
5.3 Multiwords in Slovak language modeling
As mentioned in the previous section, the most common mistakes in speech recognition
arise at the beginning of speech or after a long pause, and they can also be caused by the
misrecognition of short monosyllabic words consisting of no more than three or four characters.
These words are often merged with the following or preceding word, recognized as noise, or
ignored (Kolorenč et al., 2006). To avoid this problem, it is suitable to model these events
using multiword expressions (MWEs).
It has been shown that MWEs formed by connecting a short (monosyllabic) word with a
long (di-, tri- or polysyllabic) word, which is usually more recognizable, can help to increase
the recognition accuracy of the given short word. Moreover, using MWEs increases the effective
order of the n-gram LM and decreases the number of pronunciation variants depending on the
context of the given word, because in an inflective language some words are pronounced
differently in different contexts.
The extraction of MWEs in the Slovak language was performed according to the following
selection criteria (Staš et al., 2011b):
1. both words forming the MWE and the MWE itself must occur frequently in the text corpus;
2. the MWE is formed by at least one short word, consisting of no more than three characters;
3. the final selection is conditioned by additional linguistic constraints.
For the selection of multiwords, we used standard statistical measures based on the absolute
and relative co-occurrence and the pointwise mutual information (PMI) of these word pairs in
the text corpora, limited by the linguistic constraints. The choice of selection measures was
intentional. Absolute frequency captures the most frequent events in the given language.
Relative frequency in the context of the first word extracts MWEs whose first position is
occupied by such Slovak parts of speech as prepositions, conjunctions or pronouns.
PMI reflects collocations which do not occur frequently in the language but usually carry a
certain meaning.
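A minimal sketch of the PMI measure over adjacent word pairs is given below; it is a generic illustration, and the co-occurrence measures and linguistic constraints of the actual extraction are not reproduced.

```python
import math
from collections import Counter

def pmi_scores(sentences, min_pair_count=5):
    """Pointwise mutual information of adjacent word pairs:
       PMI(x, y) = log2( P(x, y) / (P(x) * P(y)) )."""
    word_counts = Counter()
    pair_counts = Counter()
    n_words = 0
    n_pairs = 0
    for s in sentences:
        tokens = s.split()
        word_counts.update(tokens)
        n_words += len(tokens)
        pairs = list(zip(tokens, tokens[1:]))
        pair_counts.update(pairs)
        n_pairs += len(pairs)
    scores = {}
    for (x, y), c in pair_counts.items():
        if c < min_pair_count:
            continue
        p_xy = c / n_pairs
        p_x = word_counts[x] / n_words
        p_y = word_counts[y] / n_words
        scores[(x, y)] = math.log2(p_xy / (p_x * p_y))
    return scores
```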
The linguistic constraints come from observations of the behaviour of the LVCSR system during
LM testing. It has been found that our LVCSR system often makes mistakes in the
following cases: (a) when there is assimilation of voicing across a word boundary, and (b) when
the first word in an MWE ends with the same letter with which the second word begins.
Using the methodology proposed above for extracting MWEs from the text corpora,
we obtained about 3 000 word pairs (561 pronunciation variants), which were included in the
dictionary with their phonetic transcription and in the process of training Slovak language
models.
5.4 Class-based models
Another problem when using an LVCSR system is the possibility of inserting new words into the
dictionary and the LM. A similar problem also arises in recognizing proper nouns such as
names, surnames, geographical names and other named entities. The recognition of names
and surnames is one of the key properties of the on-line dictation LVCSR system and has a
noticeable influence on its usability in real conditions. There are different suboptimal
solutions by which we can cover a large part of the vocabulary of the given language and
also deal with the problem of inserting new words without overtraining the LM. One
of these solutions is class-based LMs, which are of great importance for certain
problematic tasks because they generalize the context dependency even of words which
have not yet occurred in the training corpora. We decided to use class models for modeling
names and surnames in Slovak, in order to easily extend just this class of words
and thus resolve the problem of inserting new words into the dictionary.
For this purpose, we developed a rule-based morphological tagger for names and surnames,
based on matching patterns from a predefined set of names and surnames
conditioned by their grammatical category. The accuracy of this approach is then
limited only by the number of patterns and selected rules. In this case, the principle based on
the semantic similarity of formal expressions and the syntactic knowledge contained in the
grammatical category of a proper noun is used. Using this approach, we replaced approximately
24 818 unique inflected forms of names and surnames with one tag from a set of 20 morphological
tags, depending on the case of the given proper noun.
It is also important to note that, to increase recognition accuracy, we created
an independent model of names and surnames which can be used in a special dictation mode
of our Slovak dictation LVCSR system as a parallel model to the primary
domain-specific LM from the field of judicature.
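The class-based factorization behind this approach can be sketched as follows; the toy class map and probabilities are invented for illustration and do not correspond to the actual set of 20 morphological tags.

```python
def class_bigram_prob(w, prev_w, word2class, p_word_given_class, p_class_given_class):
    """Class-based bigram: P(w | prev_w) ≈ P(w | class(w)) * P(class(w) | class(prev_w))."""
    c, prev_c = word2class[w], word2class[prev_w]
    return p_word_given_class[(w, c)] * p_class_given_class[(c, prev_c)]

# toy example: one class for surnames in the nominative case
word2class = {"Novák": "SURNAME_NOM", "sudca": "sudca"}
p_word_given_class = {("Novák", "SURNAME_NOM"): 0.001, ("sudca", "sudca"): 1.0}
p_class_given_class = {("SURNAME_NOM", "sudca"): 0.15}
print(class_bigram_prob("Novák", "sudca", word2class, p_word_given_class, p_class_given_class))
```

New names can then be added by extending the word-to-class map and the within-class distribution, without retraining the class sequence statistics.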
5.5 Morphology
Inflection in the Slovak language usually occurs at the boundary between the stem and the ending.
This knowledge can help in modeling unknown words or words with a low occurrence in the training
corpus using morpheme-based models (Byrne et al., 2000; Creutz et al., 2007). By dividing singletons
or low-frequency words in the training corpus into morphemes, it is statistically
possible to cover events which do not occur in the dictionary and the LM. Knowledge of the
morphology of the given language also allows new word forms to be generated, as was the case
with the declension of names and surnames described in Section 3 and Section 5.4.
5.6 Augmenting the statistics of n-grams
Nowadays, research in language modeling is also oriented towards augmenting the statistics
of bigrams or trigrams from resources other than gathering a large amount of text data in the
given language. Statistics of seen or unseen n-grams can be obtained by using:
1. statistics of n-grams contained in freely available academic or national text corpora;
2. web search engines, by harvesting n-gram statistics from the Internet (Creutz et al., 2009; Oger
et al., 2010; Zhu & Rosenfeld, 2001);
3. machine translation systems, for translating n-grams from other (similar) languages.
At the end of this section, it is important to note that contemporary modeling of the Slovak
language uses only the text data (trigrams) obtained from the Slovak National Corpus (SNC)
(Šimková, 2006) for augmenting the n-gram statistics used in training LMs. However,
research and development in the other areas mentioned is ongoing.
6. Speech recognition setup
In the following sections, the setup of our LVCSR system is presented, together with a description
of the proposed methodology for training Slovak LMs, the annotated speech databases used, the
acoustic modeling, and the data for testing LMs. The setup of the LVCSR system was adjusted to
testing LMs oriented towards the judicial domain and broadcast news transcription in the Slovak
language.
6.1 Language modeling
Experiments have been performed with trigram LMs which were created using tools
contained in the SRI Language Modeling (SRILM) Toolkit (Stolcke, 2002) with the vocabulary
described in Section 3. The complete process of building the reference LM of the Slovak
language can be summarized in the following steps (an illustrative command-line sketch
follows the list):
• extraction of trigram count statistics from each of the domain-specific corpora;
• calculation of the counts-of-counts statistics for estimating the Good-Turing discounts needed in
the process of smoothing LMs;
• calculation of the discounting constants used in smoothing LMs with the modified Kneser-Ney
algorithm from the obtained discounts;
• computation of the perplexity of each domain-specific LM for each sentence of the held-out
(development) data set;
• computation of the parameters (interpolation weights) for the individual LMs by minimizing
the perplexity on the held-out data set using the EM algorithm from the obtained PPL files;
• creation of the final domain-adapted LM as a weighted combination of the particular
domain-specific trigram LMs, combined by linear interpolation;
• pruning of the resulting LM using the relative entropy-based algorithm, so that it can be used
in real-time, domain-specific Slovak LVCSR applications.
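The sketch below illustrates how such a pipeline might be driven from Python, assuming a standard SRILM installation; the file names are hypothetical and the exact options used by the authors are not given in the chapter.

```python
import subprocess

# Hypothetical file names; the corpora and vocabulary follow Sections 2 and 3.
corpora = ["web.txt", "broadcast_news.txt", "judicial.txt"]

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) per-domain trigram LMs with modified Kneser-Ney smoothing
for c in corpora:
    run(["ngram-count", "-order", "3", "-text", c, "-vocab", "vocab.txt",
         "-kndiscount", "-interpolate", "-lm", c + ".lm"])

# 2) per-sentence perplexities on the held-out set (input for weight estimation)
for c in corpora:
    with open(c + ".ppl", "w") as out:
        subprocess.run(["ngram", "-order", "3", "-lm", c + ".lm",
                        "-ppl", "heldout.txt", "-debug", "2"], stdout=out, check=True)

# 3) interpolation weights via EM (compute-best-mix ships with SRILM),
#    then a two-component mix as an example, followed by entropy-based pruning
run(["compute-best-mix"] + [c + ".ppl" for c in corpora])
run(["ngram", "-lm", "web.txt.lm", "-mix-lm", "judicial.txt.lm", "-lambda", "0.4",
     "-write-lm", "adapted.lm"])
run(["ngram", "-lm", "adapted.lm", "-prune", "1e-8", "-write-lm", "adapted_pruned.lm"])
```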
6.2 Acoustic modeling
Triphone context-dependent acoustic models based on hidden Markov models (HMMs)
have been used, where each state has been modeled by 32 Gaussian mixtures. The
models have been generated from feature vectors containing 39 mel-frequency cepstral (MFC)
coefficients. They have been trained on two databases of annotated speech recordings.
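For orientation, a 39-dimensional feature vector of this kind (13 static MFCCs plus their first and second derivatives) can be computed as in the following generic sketch using the librosa package; this is not the toolkit actually used for training the acoustic models.

```python
import numpy as np
import librosa

def mfcc_39(wav_path, sr=16000):
    """13 static MFCCs + delta + delta-delta = 39 features per frame."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    delta = librosa.feature.delta(mfcc)
    delta2 = librosa.feature.delta(mfcc, order=2)
    return np.vstack([mfcc, delta, delta2])   # shape: (39, n_frames)
```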
The first, broadcast news, speech database contains about 60 hours of speech, mostly by
professionally trained speakers, recorded from Slovak TV broadcast news from 2007 to 2009.
The database has gender-balanced speakers and contains read, spontaneous and, to a small
extent, telephone speech, with a 48 kHz sampling frequency and 16-bit resolution.
The second, judicial, speech database contains about 120 hours of readings of real court
adjudications, with personal data changed, recorded in studio conditions, and about 130
hours of read phonetically rich sentences, newspaper articles, internet texts and spelled items,
recorded in offices and conference rooms. The database, with a total size of 250 hours, was
recorded from 250 gender-balanced speakers at a 48 kHz sampling frequency and 16-bit resolution.
It was then extended with about 100 hours of spontaneous speech (90% male), recorded from
120 speakers in a council hall at a 44 kHz sampling frequency and 16-bit resolution.
All recordings were later downsampled to 16 kHz for training and testing. The databases
were annotated by a team of trained annotators using the Transcriber annotation tool (Barras et al.,
2001), slightly adapted to our needs, and twice checked and corrected.
For the acoustic modeling of rare triphones, the effective triphone mapping algorithm was used
(Darjaa et al., 2011). According to the authors, this knowledge-based triphone tying, which allows
the synthesis of unseen triphones, outperforms standard tree-based state tying for acoustic
models with 4 000 states and more, whereas for acoustic models with a smaller number of states
the performance is equal.
6.3 Phonetic transcription
Phonetic transcription of selected words contained in the vocabulary was performed using the
data-driven approach to orthoepic transcription in the Slovak language (Cerňak et al., 2003) with
slight modifications. It was trained using the phonetically rich sentences from the
SpeechDat-E and MobilDat-SK Slovak speech databases (Rusko et al., 2006) with a new
sentence-based pronunciation lexicon, and additional sentences with manually annotated
pronunciation from a regional broadcast news speech corpus.
6.4 LVCSR decoder
For decoding, the high-performance LVCSR engine Julius (Lee et al., 2001), with a recognition
algorithm based on a two-pass strategy, has been used. The input data are processed in the first
pass with a left-to-right bigram LM, and the final search with the reverse right-to-left trigram
model is then performed using the result of the first pass to narrow the search space.
6.5 Test data set
The first test data set consisted of 240 minutes of recordings obtained from randomly
selected segments of the broadcast news speech database. These segments were not used in
training the acoustic model and contain 40 656 words in 4 343 sentences.
The second test data set, from the field of judicature, consisted of 315 minutes of
recordings, also obtained from randomly selected segments from each speaker contained in the
second, read (250-hour) speech database. As in the first case, these segments were not
used in training; they contain 41 878 words in 3 426 sentences and phrases. We decided
to use phrases as well in the second test set because, in real conditions, people pause not
only at sentence boundaries but also at phrase boundaries, usually before conjunctions.
6.6 Evaluation
Two standard measures have been used for evaluating the LM: (a) extrinsic evaluation using
the word error rate (WER) and (b) intrinsic evaluation based on perplexity (PPL), both calculated
on a test data set. WER is a standard measure of the performance of the LVCSR system; it is
computed by comparing the reference text read by a speaker against the recognized result and
takes into account insertion, deletion and substitution errors. If the LVCSR system is not
available, perplexity is often used for evaluation. It is defined as the reciprocal of the (geometric)
average probability assigned by the LM to each word in the test set. This measure does not
necessarily evaluate the accuracy of recognition itself, but it usually correlates highly with it.
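Both measures can be illustrated with a short sketch: WER as a word-level Levenshtein distance normalized by the reference length, and PPL from per-word log10 probabilities produced by the LM.

```python
import math

def wer(reference, hypothesis):
    """Word error rate = (substitutions + insertions + deletions) / len(reference),
    computed via Levenshtein distance over words."""
    d = [[0] * (len(hypothesis) + 1) for _ in range(len(reference) + 1)]
    for i in range(len(reference) + 1):
        d[i][0] = i
    for j in range(len(hypothesis) + 1):
        d[0][j] = j
    for i in range(1, len(reference) + 1):
        for j in range(1, len(hypothesis) + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(reference)][len(hypothesis)] / len(reference)

def perplexity(log10_probs):
    """PPL = 10 ** (-(1/N) * sum of per-word log10 probabilities)."""
    return 10 ** (-sum(log10_probs) / len(log10_probs))

print(wer("okresný súd rozhodol takto".split(), "okresný súd rozhodol".split()))  # 0.25
```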
7. Experimental results
The experiments were oriented towards the evaluation of WER and PPL on the test data sets, to
discover the effect of the proposed optimization techniques and principles in Slovak language
modeling on the overall recognition accuracy of the LVCSR system. As mentioned
in Section 6.1, the experiments were performed with trigram LMs created with a
vocabulary of 348 255 unique words or more, as listed in Table 3, and smoothed in every case
with the modified Kneser-Ney algorithm. For the adaptation and combination of LMs trained
independently on the text corpora listed in Table 1, standard linear interpolation was
used, where the interpolation weights were adjusted to the selected domain using the EM algorithm.

language model                          vocabulary    broadcast news           judicial domain
                                        size          PPL (test)   WER [%]     PPL (test)   WER [%]
base (without adaptation)               348 255       401.105      10.78       126.720      7.92
domain adaptation                       348 255       331.478      10.63       100.383      6.96
pronunciation modification              348 468       332.974      10.62       96.1422      6.97
added noise events (1)                  348 473       326.519      10.54       57.2970      6.26
added multiwords (2) + (1)              351 473       336.558      10.52       62.7111      6.22
added classes (3) + (2) + (1)           351 493       302.711      10.50       56.1970      6.05
augmented statistics of n-grams (1)     348 473       308.113      10.31       56.7245      6.27
augmented statistics (2) + (1)          351 473       319.578      10.37       64.2670      6.26
augmented statistics (3) + (2) + (1)    351 493       287.815      10.18       55.5591      6.05
Table 3. Experimental results for off-line testing of the Slovak LVCSR system
The experiments were oriented towards off-line testing of LMs, where the emphasis is on the
best recognition accuracy rather than on the memory requirements of the application, as in
on-line speech recognition, where it is necessary to use one of the LM pruning techniques. In the
case of pruned models, it would be difficult to find an appropriate pruning threshold, to maintain
an equal number of n-grams in the LMs, and to compare the contribution of a given LM to speech
recognition.
To observe the impact of the selected optimization techniques and principles on speech
recognition, the training and testing of LMs were performed in two independent areas: (a) the
broadcast news transcription task and (b) the judicial domain. This step also includes the use
of an appropriate acoustic model and of speech recordings for testing, described in Section 6.2 and
Section 6.5, respectively. The experimental results for both tasks in Slovak LVCSR are described in
the following sections.
7.1 Broadcast news transcription
The broadcast news transcription task is directed towards the general area of speech recognition,
usually the recognition and transcription of continuous spontaneous speech. In modeling the
Slovak language and adapting it to this domain, we achieved the following results. As can be
seen in Table 3, using adaptation to the general area of speech recognition, represented by
randomly selected sentences from the broadcast news text corpora not used in the training process,
we achieved a relative decrease of almost 1.39% in WER and 17.36% in PPL. In the next step,
modifying the rules of phonetic transcription for spelled abbreviations, we observed a moderate
improvement from a subjective rather than an objective point of view. This is also caused by the
undesirable shortening of the history for some abbreviations such as P. O. Box, M. D., etc., which
reduces the predictive ability of the LM. By extending the training data set with the text data
obtained from the annotations of speech recordings, we achieved an additional relative decrease
of 0.75% in WER and about 2% in PPL. Taking into account that the test data from the general
domain contained only a small number of the selected MWEs and of names or surnames, the
contribution of the introduced multiwords and word classes in the LM to speech recognition was
small. Variations were observed only in perplexity, which increased due to the shortening of the
history for MWEs and, on the contrary, decreased thanks to the firmer connections between word
classes. We achieved a significant improvement mainly in the case of augmenting the trigrams
from the SNC database. The relative decrease of about 3% in WER and 5% in PPL results from
the fact that the SNC database contains mostly text data from newspapers or fiction. Overall,
the selected optimization techniques brought a relative reduction of approximately 5.57% in WER
and 28.24% in PPL in the broadcast news transcription task in Slovak LVCSR.
7.2 Speech recognition in judicial domain
This domain was selected as one of the most challenging acoustic and linguistic environments
from the research point of view and, based on market demand, from the development point of
view. Regarding adaptation to the judicial domain, we achieved a significant improvement,
a relative 12.12% in WER and 20.78% in PPL, even though only a small amount of adaptation data
was added. As in the previous case of broadcast news transcription, modifying the
pronunciation of spelled items did not produce any notable variations in WER or
PPL. The impact of the text data from the annotations of speech recordings resulted in a significant
relative decrease of both values, more than 10% in WER and 40% in PPL. This is
caused mainly by the larger amount of text data (more hours) from the annotations of speech
recordings in the judicial domain than in the broadcast news transcription task. Multiwords
brought an improvement in just about 5% of the cases at the beginning of speech or after a
long pause, which did not produce significant changes in the overall result of speech
recognition. Because the test data contained a large number of names and
surnames, we achieved an additional relative decrease of 3% in WER and more than 10% in PPL
in the case of word classes. Augmenting the trigram statistics did not improve the resulting LM,
because the SNC database does not contain any text data from the field of judicature. The
contribution of the mentioned optimization steps to the domain-specific task of Slovak LVCSR
yields an overall relative reduction of approximately 24% in WER and 56% in PPL.
7.3 Discussion
Using the selected methods, principles and approaches in statistical modeling of the Slovak
language and the proposed optimization techniques, we achieved a recognition accuracy of
our LVCSR system of almost 94% in the domain-specific task from the field of judicature and
approximately 90% in the case of broadcast news transcription. The vocabulary used in the
experiments covers about 99% of commonly used words in the Slovak language.
As regards the experimental results, the recognition accuracy could be increased by extending
the word classes with names of cities, streets, institutions and other named entities in their
inflected forms. Regarding memory requirements, it could be more suitable to use only a
class-based approach in Slovak language modeling. However, the absence of an available
morphological tagger for the Slovak language limits the utilization of this approach, although
the first steps in this area have already been taken.
Contemporary research in Slovak language modeling is also oriented towards different areas such
as vocabulary selection in specific domains, topic detection in web corpora, augmentation of LM
statistics using machine translation systems or web search engines, on-line adaptation
of LMs, modeling of unknown words in spontaneous speech, morphologically motivated
class-based modeling, investigating the influence of morpheme-based models, and
eliminating errors caused by the vocabulary used or by language modeling in speech recognition.
As regards the real application of domain-oriented speech recognition, a new
version of our LVCSR system for the Ministry of Justice of the Slovak Republic
is currently being finalized, in which the knowledge about modeling the Slovak language and
the LMs described in this chapter has been used. It is important to note that, at the time of
preparation of this chapter, the proposed LVCSR system had been installed for testing and was
being used by more than 50 persons (judges, court assistants and technicians) at 9 different
institutions belonging to the Ministry of Justice. The results of the tests will be taken into
consideration in the final version of the Slovak LVCSR system, which will come into everyday use
at the organizations belonging to the Ministry of Justice of the Slovak Republic by the end of 2011.
8. Conclusion
In this chapter, a brief summary of the current methods and principles used in Slovak language
modeling has been presented. By combining standard statistical methods with the proposed
language-dependent optimization techniques, which bring additional information, often linguistic
regularities, into the LM training process, we achieved a notable improvement in the
recognition accuracy of our Slovak LVCSR system in the task of broadcast
news transcription as well as in domain-specific speech recognition from the field of
judicature. We have found that by using several different approaches oriented towards specific
problems in language modeling, we can better eliminate errors arising in the speech
recognition of an inflective language such as Slovak. The major contribution in
the area of Slovak language modeling is the fact that the current language models are also used
in the development and application of the Slovak automatic transcription and dictation LVCSR
system for the judicial domain.
9. Acknowledgement
The research presented in this paper was supported by the Ministry of Education under
research projects VEGA-1/0065/10 and MŠ SR 3928/2010-11 and by EU ICT Project INDECT
(FP7–218086).
10. References
Barras, C., Geoffrois, E., Wu, Z. & Liberman, M. (2001). Transcriber: Development and use of
a tool for assisting speech corpora production, Speech Communication 33(1-2): 5–22.
Byrne, W., Hajič, J., Krbec, P., Ircing, P. & Psutka, J. (2000). Morpheme based language models
for speech recognition of Czech, Proceedings of 3rd International Workshop on Text,
Speech and Dialogue, TSD’2000, Brno, Czech Republic, pp. 211–216.
Cerňak, M., Rusko, M., Trnka, M. & Daržagín, S. (2003). Data-driven versus knowledge-based
approaches to orthoepic transcription in Slovak, ICETA’2003: The 2nd International
Conference on Emerging Telecommunications Technologies and Applications and the 4th
Conf. on Virtual University, Košice, Slovakia, pp. 95–97.
Chen, S. F. & Goodman, J. (1996). An empirical study of smoothing techniques for language
modeling, Proceedings of the 34th Annual Meeting on Association for Computational
Linguistics, ACL’96, Santa Cruz, CA, USA, pp. 310–318.
Creutz, M., Hirsimäki, T., Kurimo, M., Puurula, A., Pylkkänen, J., Siivola, V., Varjokallio,
M., Arisoy, E., Saraclar, M. & Stolcke, A. (2007). Analysis of morph-based
speech recognition and the modeling of out-of-vocabulary words across languages,
Proceedings of HLT-NAACL’2007, Rochester, NY, USA, pp. 380–387.
Creutz, M., Virpioja, S. & Kovaleva, A. (2009). Web augmentation of language models for
continuous speech recognition of SMS text messages, Proceedings of the 12th Conference
of the European Chapter of the ACL, EACL’2009, Athens, Greece, pp. 157–165.
Darjaa, S., Cerňak, M., Trnka, M., Rusko, M. & Sabo, R. (2011). Effective triphone mapping
for acoustic modeling in speech recognition, Proceedings of INTERSPEECH’2011,
Florence, Italy, pp. 1717–1720.
Gao, J., Suzuki, H. & Yuan, W. (2006). An empirical study on language model adaptation,
ACM Transaction on Asian Language Information Processing, TALIP’2006 5(3): 209–227.
Hládek, D. & Staš, J. (2010a). Text gathering and processing agent for language modeling
corpus, Proceedings of the 12th International Conference on Research in Telecommunication
Technologies, RTT’2010, Vel’ké Losiny, Czech Republic, pp. 200–203.
Hládek, D. & Staš, J. (2010b). Text mining and processing for corpora creation in Slovak
language, Journal of Computer Science and Control Systems 3(1): 65–68.
Hsu, J. B. (2009). Language modeling for limited-data domains, PhD thesis, Department of
Electrical Engineering and Computer Science, Massachusetts Institute of Technology.
Jurafsky, D. & Martin, J. H. (2009). An introduction to natural language processing, computational
linguistics, and speech recognition (2nd edition), Prentice Hall.
Kolorenč, J., Nouza, J. & Červa, P. (2006). Multi-words in the Czech TV/radio news
transcription system, Proceedings of the 11th International Conference Speech and
Computer, SPECOM’2006, Sankt Peterburg, Russia, pp. 70–74.
Lee, A., Kawahara, T. & Shikano, K. (2001). Julius - An Open Source real-time
large vocabulary recognition engine, Proceedings of EUROSPEECH'2001, Aalborg,
Denmark, pp. 1691–1694.
Manning, C. D. & Schütze, H. (1999). Foundations of Statistical Natural Language Processing, MIT
Press.
Nouza, J., Zdansky, J., Cerva, P. & Silovsky, J. (2010). Challenges in speech processing of Slavic
languages (Case studies in speech recognition of Czech and Slovak), A. Esposito et al.
(Eds.): Development of Multimodal Interface: Active Learning and Synchrony, LNCS 5967,
Springer-Verlag, Heidelberg, pp. 225–241.
Oger, S., Popescu, V. & Linarès, G. (2010). Combination of probabilistic and
possibilistic language models, Proceedings of INTERSPEECH’2010, Makuhari, Japan,
pp. 1808–1811.
Rusko, M., Trnka, M. & Daržagín, S. (2006). MobilDat-SK - A mobile telephone extension
to the SpeechDat-E SK telephone speech database in Slovak, Proceedings of the 11th
International Conference Speech and Computer, SPECOM’2006, Sankt Peterburg, Russia,
pp. 485–488.
Seymore, K. & Rosenfeld, R. (1996). Scalable backoff language models, Proceedings of the 4th
International Conference on Spoken Language Processing, ICSLP'96, Philadelphia, PA,
USA, pp. 232–235.
sk-spell (2010). Slovak support in Open Source applications, Projekt sk-spell. (in Slovak).
URL: http://www.sk-spell.sk.cx/
Staš, J., Hládek, D. & Juhár, J. (2010a). Language model adaptation for Slovak LVCSR,
AEI’2010: International Conference on Applied Electrical Engineering and Informatics,
Venice, Italy, pp. 101–106.
Staš, J., Hládek, D., Pleva, M. & Juhár, J. (2011a). Slovak language model from Internet
text data, A. Esposito et al. (Eds.): Toward Autonomous, Adaptive, and Context-Aware
Multimodal Interfaces. Theoretical and Practical Issues, LNCS 6456, Springer-Verlag,
Heidelberg, pp. 340–346.
Staš, J., Hládek, D., Trnka, M. & Juhár, J. (2011b). Automatic extraction of multiword
expressions using linguistic constraints for Slovak LVCSR, Proceedings of the 6th
International Conference on NLP, Multilinguality, SLOVKO’2011, Modra, Slovakia,
pp. 1–8.
Staš, J., Trnka, M., Hládek, D. & Juhár, J. (2010b). Text preprocessing and language
modeling for domain-specific task of Slovak LVCSR, Proceedings of the 7th International
Workshop on Digital Technologies, Circuits, Systems and Signal Processing, DT’2011,
Žilina, Slovakia, pp. 1–4.
Stolcke, A. (1998). Entropy-based pruning of backoff language models, Proceedings of DARPA
Broadcast News and Understanding Workshop, Lansdowne, VA, pp. 270–274.
Stolcke, A. (2002). SRILM - An extensible language modeling toolkit, Proceedings of the 7th
International Conference on Spoken Language Processing, ICSLP’2002, Denver, Colorado,
USA, pp. 901–904.
Venkataraman, A. & Wang, W. (2003). Techniques for effective vocabulary selection,
Proceedings of EUROSPEECH’2003, Geneva, Switzerland, pp. 245–248.
Šimková, M. (2006). Slovak National Corpus - History and current situation, M. Šimková (Ed.):
Insight into the Slovak and Czech Corpus Linguistics, VEDA – Publishing House of the Slovak
Academy of Sciences, Bratislava, pp. 151–159.
Zhu, X. & Rosenfeld, R. (2001). Improving trigram language modeling with the world wide
web, Proceedings of the IEEE International Conference on Acoustic, Speech and Signal
Processing, ICASSP’2001, Salt Lake City, Utah, USA, pp. 533–536.
Part 9
Agriculture Technologies

13
The Use of High-Speed Imaging Systems
for Applications in Precision Agriculture
Bilal Hijazi¹,², Thomas Decourselle², Sofija Vulgarakis Minov¹,²,
David Nuyttens¹, Frederic Cointault², Jan Pieters³ and Jürgen Vangeyte¹
¹Institute for Agricultural and Fisheries Research (ILVO)
²AgroSup Dijon, UP GAP
³Faculty of Bioscience Engineering, Ghent University
¹,³Belgium
²France
1. Introduction
The evolution of digital cameras and image processing techniques over the last decade has
inspired researchers in many fields, particularly agricultural research. Agricultural
researchers have used imaging systems in diverse applications, including a multispectral
system in viticulture (Hall et al., 2003) and an imaging system to count wheat ears (Cointault
et al., 2008).
High speed imaging (HSI) has been widely used for industrial and military applications
such as ballistics, hypervelocity impact, car crash studies, fluid mechanics, and others. In
agriculture HSI is mainly used in two domains that both require fast processing: fertilization
and spraying.
- Fertilization, be it organic or mineral, is essential to agriculture. Over-fertilization can
reduce yield and lead to environmental pollution (Mulligan et al., 2006). To prevent
these consequences, the fertilization process must be controlled. In Europe and
worldwide, mineral fertilization is performed using centrifugal spreaders because they
are more cost-efficient than pneumatic spreaders. The process of centrifugal spreading
is based on spinning discs which eject large numbers of grains at high speeds (30 to
40 m s⁻¹). To control the spreading process and to predict the distribution pattern on the
soil, several characteristics need to be accurately evaluated, i.e., ejection parameters
such as velocity and direction, plus granulometry and the angular distribution.
- The spray quality generated by agricultural nozzles plays an important role in the
application of plant protection products. The ideal nozzle-pressure combination should
maximize spray efficiency by increasing deposition and transfer of a lethal dose to the
target (Smith et al., 2000) while minimizing residues (Derksen et al., 2008) and off-target
losses such as spray drift (Nuyttens et al., 2007a) and user exposure (Nuyttens et al.,
2009a). The most important spray characteristics influencing the efficiency of the
pesticide application process are the droplet sizes, the droplet velocities and directions,
the volume distribution pattern, the spray sheet structure and length, the structure of

individual droplets and the 3D spray dimensions. The mechanism of droplets leaving a
spray nozzle and their impact on the surface are very complex and difficult to quantify
or model. Accurate quantification techniques are therefore crucial.
Without accurate quantification techniques, it is not possible to evaluate the characteristics
of the processes in question. Both fertilization and spraying occur at relatively
high speed. We therefore developed HSI systems with adequate image processing techniques to
characterize the process of centrifugal spreading and the process of pesticide spraying.
This chapter addresses the application of HSI in fertilization and pesticide spraying. To
begin, we present the state of the art of characterization methods. A presentation of the
acquisition devices, the applied image processing techniques, and the obtained results
follows. We end by discussing these results and presenting possible future avenues of research.
2. The state of the art of characterization methods for pesticide spraying and
fertilizer centrifugal spreading
2.1 Centrifugal spreading
Persson (1998) evaluated the quality of the spread pattern for different settings by collecting
the spread grains in trays. Piron & Miclet (2006) developed a new concept: the spreader
rotates over a radial placed single row of collector trays. Instead of the normal transverse
distribution in a cartesian coordinate system, a polar measurement system is used. These
methods can be used only for pre-calibration: they are performed in test halls, and the correct
adjustment of the spreader is generally not verified by the farmers.
Grift & Hofstee (1997) proposed a completely different approach, i.e., a combination of a
ballistic model and optical sensors. These sensors determine the initial conditions of flight
(velocity, direction) of the particles and their size. Subsequently, the spatial distribution of
particles is calculated by introducing the calculated parameters in the ballistic model. This
system provides only information for one individual granule and not for the entire flow,
however, which makes it inapplicable to real fertilization conditions.
The evolution of digital cameras and imaging techniques has made it possible to surpass
the limitations of previous methods. Several new approaches using imaging systems have
been investigated (Cointault et al., 2003; Vangeyte & Sonck, 2005; Villette et al., 2007; Bilal et
al., 2010, 2011). Villette et al. (2007) developed a method based on blurred images from
which the outlet angles of particles can be determined. The angles are introduced into a
mechanical model (Olieslagers et al., 1996; Van Liedekerke et al., 2008) to calculate the
spread pattern. This method is not yet able to determine all parameters of interest, such as
granulometry. Cointault & Vangeyte (2005) used multi-exposure imaging systems that
differ in the field of view (1 m² and 0.01 m²) and in the illumination system used (flashes or
LEDs). These systems are very sensitive to noise and are limited by image acquisition
conditions (they require a darkened hall to prevent the influence of daylight).
2.2 Pesticide spraying
In the past, mainly intrusive methods, also called sampling techniques, were used for spray
characterization. With these techniques, droplets were collected and analyzed using

mechanical sampling devices. However, these sampling devices may affect the spray flow
behaviour and can only be used to evaluate spray deposition and estimate droplet size
(Rhodes, 1998).
Due to the development of modern technology such as powerful computers and lasers,
quantitative optical non-imaging light scattering droplet characterization techniques have been
developed for non-intrusive spray characterization. Although these techniques are able to
measure some specific spray characteristics, none of them are able to fully characterize a
spray application process. Moreover, these techniques are complex, expensive and (in most
cases) limited to small measuring volumes. They are not able to accurately measure non-
spherical particles. The most important types of non-imaging light scattering droplet
characterization techniques are the Phase Doppler Particle Analysers (PDPA) (Nuyttens et
al., 2007b, 2009b), the laser diffraction analyzers, e.g., Malvern Analyzer (Stainier et al.,
2006), Particle Tracking Velocimetry (PTV), and the optical array probes (Teske et al., 2000).
Several studies have shown a wide variation in mean droplet sizes for the same nozzle
specifications while using different techniques (Nuyttens, 2007).
The limitations of non-imaging techniques, together with recent improvements in digital image processing, the increased sensitivity of imaging systems and falling costs, have increased the interest in high-speed imaging techniques for agricultural applications in general and pesticide applications in particular. Another major advantage is that a visual record of the spray under
investigation is available, providing a simple means to verify what is being measured, and
perhaps more importantly, what is not being measured (Kashdan et al., 2004 a).
Furthermore, another fundamental limitation of light scattering techniques is the inability to
accurately measure non-spherical droplets. For this reason, measurements must be obtained
sufficiently far downstream from the primary sheet or jet break-up region where ligaments
and initially large and often non-spherical droplets are formed. This is an unfortunate
limitation, since the near-orifice region is where the process of atomization is occurring and
the initial droplets are formed (Kashdan et al., 2004 a).
Recent developments in nozzle technology produce sprays with droplets containing air
inclusions. Because these internal structures can cause uncertainty with techniques that rely
on diffraction or scattering, interest has been renewed in droplet sizing using imaging
techniques. Moreover, imaging techniques offer greater simplicity over light scattering
techniques. The main issues with imaging techniques are not only the need for automated processing routines but also the problem of resolving the depth-of-field (DOF) effect and its inherent influence on measurement accuracy (Kashdan et al., 2004b).
3. Overview of high-speed imaging used for spraying and spreading
Generally speaking, high-speed imaging analyzers are spatial sampling techniques
consisting of a (strobe) light source, a (high-speed) camera and a computer with image
acquisition and processing software. The image frames from the video are analyzed using
various image processing algorithms to determine particle (fertilizer grain or spray droplet)
characteristics. The imaging techniques have the potential to determine the particles’
velocity and other important characteristics like ejection angle and the distribution of the
particles.

Several industrial imaging techniques (PDIA, PIV, LIF) are used for particle
characterization. Although these techniques are not applicable to the characterization of the fertilizer spreading process, they have the potential to fully characterize sprays in a non-intrusive way. For pesticide applications, however, technical and financial constraints make this impossible to put into practice. These techniques are currently mainly used for the
characterization of small sprays, e.g., paints, medical applications, fuel injectors, etc.
Some of the available imaging techniques for industrial spray characterization are discussed
below (3.1.1 – 3.1.3).
Other interesting techniques were proposed to characterize pesticide sprays and fertilizer
spreaders using either a high-speed camera with a high-power light source (3.1.4) or a high-
resolution standard camera with a strobe light (3.1.5). These techniques can give additional
information about the particles’ trajectory, which is needed to predict the outcome on the
plant (spraying) or in the field (spreading).
3.1 Imaging techniques
3.1.1 Particle/Droplet Imaging Analyzers (PDIA)
Particle Droplet Imaging Analyzers (PDIA) automatically analyze digital images of a spray
(Fig. 1). A very short flash of light illuminates a diffusing screen to back-illuminate the
subject. A digital camera with a microscope lens captures images of the subject. Different
magnification settings can be used to measure a very wide range of droplet sizes. Image
analysis software analyses the images to find drop size. Shape data for the particles can also
be measured and recorded. By using dual laser flashes in short succession and measuring
the movement of the particle, it is possible to measure the particle velocity. Information on
spray geometry can be provided by switching to light sheet illumination. The most common PDIA in use is the Visispray developed by Oxford Laser, which was used by Kashdan et al. (2007). This system measures the cone angle, drop size, drop velocity and other key parameters of the spray. Kashdan et al. (2004 a, b) compared the PDIA with PDPA and laser diffraction and found good correlation between the results.

Fig. 1. Typical Particle droplet imaging analyzer (PDIA) (Schick, 1997).

3.1.2 Particle Image Velocimetry (PIV)
Particle Image Velocimetry (PIV) is an optical method used to obtain velocity measurements
and related properties of particles. It produces two-dimensional vector fields, whereas other
techniques measure the velocity at a point. In PIV, the particle size and density make it possible to identify individual particles in an image, but not with enough certainty to track them between images. This technique uses laser light; it is well adapted to laboratory conditions but cannot be used in the field. It is therefore used as a reference method rather than for pesticide spray characterization under practical conditions. Particle Tracking Velocimetry (PTV) (Hatem, 1997) is a variant that is more appropriate for low seeding densities, and Laser Speckle Velocimetry (LSV) for high seeding densities. Like PIV,
PTV and LSV measure instantaneous flow fields by recording images of suspended seeding
particles at successive instants in time. Hence, LSV, PTV and PIV are essentially the same
technique, but are used with different seeding densities of particles (Paul et al., 2004).
3.1.3 Laser Induced Fluorescence (LIF)
Laser Induced Fluorescence (LIF) is a spectroscopic method used to study the structure of
molecules, detect selective species, and to perform flow visualization and measurements
(Cloeter et al., 2010). The particles to be examined are excited with a laser. The excited
particles will, after a few nanoseconds to microseconds, de-excite and emit light at a
wavelength larger than the excitation wavelength. This light (fluorescence) is then
measured. One advantage that LIF has over absorption spectroscopy is that LIF can produce
two- and three-dimensional images, as fluorescence takes place in all directions (i.e., the
fluorescence signal is isotropic). By following the movement of the dye spot using a high-speed camera and image processing, the particle velocity can be determined (Mavros, 2001).
LIF can minimize the effect of multiple scattering found with laser diffraction analysers and
can minimize the interference between the reflection and refraction lights (Hill & Inaba,
1989). The drawback of this method is that the particles reflect the LIF signal of the tracers,
which can cause error in the measurement signal of the liquid flow.
3.1.4 High-speed camera with high-power light source
An alternative method to analyse spray/spreading characteristics is to use a high-speed
camera combining high resolution images with a high frame rate. Because of the short exposure time inherent to high-speed imaging, very high illumination intensities are needed. The advantage of this system is that the frame rate and the image resolution can be adapted to the application conditions.
Vangeyte et al. (2004) used a high-speed camera (MotionXtra HG 100K, 1504x1128 pixels
and frame rate of 1000 images/s) to make a comparison with a multi-exposure imaging
system for determination of the trajectories of fertilizer grain ejected from a centrifugal
spreader. However, the field of view was small (10x10 cm²). To characterize the full process,
all the ejected grains need to be visualized.
Massinon and Lebeau (2011) used a high-speed camera (Y4 CMOS, Integrated Design Tools)
with a high magnification lens (12 x zoom Navitar, 341 mm working distance) coupled with
high-power LED lighting and image processing to study droplet impact and spray retention
of a real spray application. Camera resolution was reduced to 1016 x 185 pixels to acquire

20 000 images per second with a spatial resolution of 10.58 µm pixel⁻¹. A background
correction was performed with Motion Studio embedded camera software to get a
homogeneous image. Nineteen-LED backlighting (Integrated Design Tools) with a beam
angle of 12.5° was placed 0.50 m behind the focus area to provide high illumination and a
uniform background to the images. Based on the pixel size of the droplet as determined
manually from the pictures with Motion Studio software, together with the spatial
resolution, the diameter of the droplets was calculated. Similarly, droplet velocities were calculated in a very time-consuming, manual way, based on the distance travelled by the droplet between two consecutive frames and the frame rate. In this way, only the two-dimensional velocity was obtained.
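As a simple illustration of this manual procedure, the following sketch (in Python, with illustrative function names of our own) shows how the droplet diameter and the two-dimensional velocity follow from a pixel measurement, the spatial resolution and the frame rate quoted above.

```python
# Sketch: droplet diameter and 2-D velocity from pixel measurements, assuming
# the spatial resolution and frame rate quoted above (Massinon & Lebeau, 2011).

SPATIAL_RESOLUTION_UM_PER_PX = 10.58   # micrometres per pixel (from the text)
FRAME_RATE_HZ = 20_000                 # images per second (from the text)

def droplet_diameter_um(diameter_px: float) -> float:
    """Convert a manually measured droplet diameter in pixels to micrometres."""
    return diameter_px * SPATIAL_RESOLUTION_UM_PER_PX

def droplet_velocity_m_s(dx_px: float, dy_px: float) -> float:
    """2-D velocity magnitude from the displacement (in pixels) between
    two consecutive frames."""
    displacement_um = (dx_px ** 2 + dy_px ** 2) ** 0.5 * SPATIAL_RESOLUTION_UM_PER_PX
    return displacement_um * 1e-6 * FRAME_RATE_HZ   # metres per second

if __name__ == "__main__":
    print(droplet_diameter_um(28))          # a 28-pixel droplet -> ~296 um
    print(droplet_velocity_m_s(15.0, 4.0))  # ~15.5 px/frame -> ~3.3 m/s
```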
Many others, like Šikalo et al. (2005) also studied the impact of droplets with a high-speed
CCD camera but in these studies, single droplets were produced using a microdrop
generator in an on-demand or continuous mode.
3.1.5 High-resolution standard camera with a strobe light
This technique combines a high resolution standard (slow speed) camera with a strobe light
for tracking high-speed particles. The principle is that a series of light flashes is triggered
one after the other over a single camera exposure. The number of flashes determines the
maximum number of particle positions that can be recorded on each image.
Cointault et al. (2002) proposed a system combining a monochrome camera (1008x1018 pixels) with a strobe light consisting of photographic flashes to determine the trajectories and velocities of the spread grains in a field of view of 1 m x 1 m. Vangeyte and Sonck (2005) used a similar system, but with an LED stroboscope and a small field of view (0.1 m x 0.1 m), to capture the grain flow.
This technique was already used by Reichard et al. (1998) to analyse single droplet
behaviour combining a monochrome video camera (60 fields per second) with a single
backlight stroboscope (Type 1538-A, Genrad, Concord, MA 01742) at a flash rate of about
seven times the field-sequential rate used to drive the camera. This produced multiple
images of the same droplet.
Lad et al. (2011) used a high-intensity pulsed laser (200 mJ, 532 nm) as a backlight source
which was synchronized with a firewire type of digital camera (1280 x 960 pixels) to analyze
a spray atomizer. The laser beam was converted to a laser cone using a concave lens, and
then it was diffused by a diffuser. A 200 mm micro-lens equipped with a spacer was used to
get a magnification of 2.6 of the image resulting in a field of view of 1.82 x 1.36 mm for a
working distance of 250 mm. The digital camera captured shadow images which were
analyzed to determine droplet sizes. The system is capable of performing an online characterization of spray droplets, and the image calibration was performed using graph paper. A calibration method for an imaging system in the diameter range of 4 to 72 µm has been reported by Kim and Kim (1994).
Malot and Blaisot (2000) developed a particle sizing method based on incoherent backlight images, using a stroboscope with two fibres synchronized with two cameras to project two-dimensional images of the drops onto the video cameras.

3.2 Adopted solution
In both domains (fertilization and spraying), events are relatively fast; typical speeds are 1 to 15 m s⁻¹ in spraying and 30 to 40 m s⁻¹ in fertilization. High-speed cameras with frame rates between 500 and 1000 images per second are needed to capture the movement of the particles (a short sketch of this requirement is given at the end of this section). However, the size and transparency of the particles differ between the two applications:
- The fertiliser grains are opaque and their diameters are between 3 and 6 mm.
- The spray droplets are translucent and their diameters are between 10 and 1000 µm.
These differences in the physical characteristics of the particles thus require different setups:
- In fertilization, a front-light is adequate and a lens with a focal length between 16 and 28 mm is sufficient.
- Illumination of translucent spray droplets with a front-light is not practical, so back-light is used. Because of the small droplet size, a macro lens with a long focal length should be used.
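The following sketch illustrates the frame-rate requirement mentioned above. It only checks that a particle crossing the field of view is imaged at least twice; the field-of-view values used in the example are illustrative assumptions, not measured values.

```python
# Sketch: minimum frame rate needed to image a particle at least twice while it
# crosses the field of view. The numerical values are illustrative assumptions.

def min_frame_rate(particle_speed_m_s: float, field_of_view_m: float,
                   sightings: int = 2) -> float:
    """Frame rate (images/s) so that a particle crossing the field of view
    is imaged at least `sightings` times."""
    transit_time_s = field_of_view_m / particle_speed_m_s
    return sightings / transit_time_s

if __name__ == "__main__":
    # Fertiliser grains: ~40 m/s across a 1 m field of view -> >= 80 images/s
    print(min_frame_rate(40.0, 1.0))
    # Spray droplets: ~15 m/s across an assumed 0.03 m field of view -> >= 1000 images/s
    print(min_frame_rate(15.0, 0.03))
```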
4. Imaging device and results
4.1 Fertilization application
The aim is to determine the spatial fertilizer distribution on the ground by calculating the ballistics of the particles from their initial conditions of flight (velocity, direction), their properties and geometrical parameters (topography, height and tilt of the discs, etc.). To determine the velocities and the trajectories of the grains at ejection, imaging devices combined with image processing techniques can be used. Given that the grains are ejected with a speed of 30 to 40 m s⁻¹, a HSI system with a minimum rate of 500 images per second is used to film the same scene at at least two different instants. The resulting frames of the same scene are used to estimate the motion of the fertilizer grains.
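As an illustration of the ballistic step, the sketch below integrates the flight of a single spherical grain with quadratic air drag from its initial conditions. It is a generic textbook model with illustrative parameter values, not the mechanical model of Olieslagers et al. (1996) or Van Liedekerke et al. (2008).

```python
# Sketch: ballistic flight of a single fertiliser grain from its initial
# conditions (speed, direction, release height), assuming a spherical grain
# with quadratic air drag. All parameter values are illustrative only.
import math

RHO_AIR = 1.2          # air density, kg/m^3
G = 9.81               # gravity, m/s^2
CD = 0.44              # drag coefficient of a sphere (assumption)

def landing_distance(v0, angle_deg, height, diameter, density=1200.0, dt=1e-4):
    """Horizontal distance travelled before the grain hits the ground."""
    area = math.pi * (diameter / 2.0) ** 2
    mass = density * math.pi / 6.0 * diameter ** 3
    vx = v0 * math.cos(math.radians(angle_deg))
    vz = v0 * math.sin(math.radians(angle_deg))
    x, z = 0.0, height
    while z > 0.0:
        v = math.hypot(vx, vz)
        drag = 0.5 * RHO_AIR * CD * area * v      # drag force magnitude per unit speed
        ax = -drag * vx / mass
        az = -G - drag * vz / mass
        vx += ax * dt
        vz += az * dt
        x += vx * dt
        z += vz * dt
    return x

if __name__ == "__main__":
    # A 4 mm grain ejected at 35 m/s, 5 degrees above horizontal, 0.8 m high
    print(round(landing_distance(35.0, 5.0, 0.8, 0.004), 2), "m")
```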
The fertiliser grains are actually ejected in an arc (Fig. 2). To ensure filming of the whole arc, the HSI system has to visualise a field of view of 1 x 1 m².

Fig. 2. Image of ejected fertiliser grain.

Therefore, our system consists of a high-speed camera with a frame rate of 1000 Hz, a sensor of 1280x1042 pixels, a pixel size of 12 µm and a lens with a 28 mm focal length. The camera is placed two metres above the field of work.
After image acquisition, the images must be processed. During this essential phase, the velocities and the trajectories must be predicted in order to determine the spatial distribution of the fertiliser grain on the ground (Fig. 3). We have therefore investigated several motion estimation techniques in order to achieve high accuracy.

Fig. 3. The images on the left are images of fertilizer grain ejection at the instants t and t + Δt; the middle image shows the displacement vectors determined by the motion estimation algorithm, and the right image shows the spread pattern determined from the ballistic model.
Barron et al. (1994) divided the optical flow method into four categories: (1) differential
methods, (2) region-based matching, (3) energy-based techniques and (4) phase-based
techniques. The difference between these methods is the way to resolve the image constraint
equation (1):
I(x, y, t) = I(x + dx \Delta t, y + dy \Delta t, t + \Delta t)    (1)
where I is the pixel intensity and dx and dy define the displacement occurring over the time interval \Delta t (for more details see Barron et al., 1994).
The fertilizer grain displacements in pixels/image are very large compared to the
displacements generally estimated with classical motion estimation methods. These
displacements can therefore not be estimated directly using methods such as Markov
Random Fields or optical flow measurement; the maximum displacement detectable by
these methods is too small to detect the fertilizer granules’ path. Therefore, a theoretical
model of the movement of the grains was first combined with a Markov Random Fields
method to estimate the motion of the grains in high-speed images of the grain flow. This technique had good accuracy, but it was not sufficient for a very accurate prediction of the spatial distribution. An improved method was needed.

We then investigated whether Block Matching or motion estimation methods based on
Gabor filters could improve the accuracy and eliminate the modeling and minimization
steps of the MRF technique.
Although block matching techniques are able to detect large displacements between frames, our experiments showed that they are not suitable for our application (Hijazi et al., 2008). These techniques only give good results when scenes are highly textured, which is not the case for the fertilizer images: the fertiliser grains all have a similar shape, so the probability of erroneous estimation is too high.
For motion estimation based on Gabor filters, we implemented Spinei’s method (Spinei et al., 1998), which uses a triad of controlled Gabor filters. To expand the range of detectable displacements, this method uses a multi-resolution representation of the image sequences, in which higher levels have lower resolutions. When the resolution is decreased, the
displacement decreases with the same ratio. We showed, however, that this method did not
improve the accuracy on the measurement of the displacements (Hijazi et al., 2008).
Because of the similarity between the fertiliser grain images and the images used in PIV to
study the turbulence phenomena in fluid, it is possible to apply the proven high-accuracy
PIV algorithms to estimate the movement of the fertilizer granules.
A two-step cross correlation algorithm with sub-pixel accuracy for motion estimation was
applied to the fertilizer granules’ motion during centrifugal spreading. In this method, the
first step is to fit an arc of a circle in the grain region of each image (Fig. 3). These arcs are
used to divide the grain region in several smaller regions. For each region, a global motion
displacement is then determined. The second step uses the global displacement to determine
the local displacement using a normalized cross-correlation. The final results, with their
subpixel accuracy, created the possibility to develop a system based on a low-resolution
camera sensor. For more details about the techniques see Hijazi et al., 2010, 2011.
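The sketch below illustrates the principle of the local step: estimating the displacement of an image block between two frames with a zero-mean normalized cross-correlation. It is a generic integer-pixel illustration written with NumPy, not our two-step sub-pixel implementation (for that, see Hijazi et al., 2010, 2011).

```python
# Sketch: displacement of an image block between two consecutive frames using
# zero-mean normalized cross-correlation (NCC). Generic NumPy illustration.
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def estimate_displacement(frame1, frame2, top, left, block=32, search=8):
    """Integer displacement (dy, dx) of the block frame1[top:top+block,
    left:left+block] inside a +/- search window of frame2."""
    template = frame1[top:top + block, left:left + block]
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > frame2.shape[0] or x + block > frame2.shape[1]:
                continue
            score = ncc(template, frame2[y:y + block, x:x + block])
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx, best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f1 = rng.random((128, 128))
    f2 = np.roll(f1, shift=(3, -2), axis=(0, 1))   # synthetic shift of (3, -2) px
    print(estimate_displacement(f1, f2, 48, 48))   # expected (3, -2, ~1.0)
```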
A comparison with the results of the MRF technique clearly shows that the cross-correlation method determines the fertiliser granule velocities very precisely, with an average error of 0.4 pixel or less, and estimates 90% of the granule velocities with an error of less than 0.2 pixel (Table 1).


                                 Cross-correlation              MRF
                                 horizontal    vertical    horizontal    vertical
Mean velocity modulus (pixel)      62.402        61.453
Bias error (pixel)                 0.085365      0.099817    1.624881      0.800443
Error maximum (pixel)              0.384418      0.330194    5.549145      3.431636
Standard deviation (pixel)         0.073746      0.080768    1.399179      0.834144
Accuracy 90% (pixel)               0.17261       0.21957     3.65780       2.34400
Table 1. Comparison between the cross-correlation method and the MRF method.

4.2 Spray application process
In a precision spraying context, the analysis of droplet behaviour on the leaves (adhesion,
bounce or splash) and the link with leaf surface features, particularly its roughness, is one of
the most important steps. Our study features two main parts. One aspect is to analyze the
surface and to extract features using texture analysis methods. This characterizes the leaf
roughness. The other aspect is to analyze the droplet and its behaviour using HSI and associated image processing techniques. This chapter only discusses the analysis of the droplet and its behaviour. We use a system composed of a high-speed camera with a high-
power light source and a droplet generator (Figs. 4 and 5).

Fig. 4. Scheme of the system for single spray droplet characterization.

Fig. 5. Picture of the system.
The droplet generator runs in “on demand” mode and creates single droplets. Depending
on its features (size, velocity, surface, composition), a droplet can have different behaviours
after impact such as adhesion, bounce or shatter. We influence the size and velocity of the
droplet by using several nozzles and changing the height of fall of the droplet.
The small size of the droplets (80-400 µm) requires the use of a macro lens with a long focal length. In addition to these constraints, we have to set up the camera with a high frame rate (1000 frames/s) and a low exposure time (16 µs) in order to extract accurate information on the size, velocity and behaviour of the droplets. Consequently, we illuminate the scene with an LED system that provides high illumination and a uniform background, which leads to well-contrasted images and easier tracking of the droplets.
Object tracking is an important task within the field of computer vision. Computer
performance has increased and high-quality cameras are now available for a reasonable
price. These advancements have led to increased interest in object tracking algorithms.
Video analysis has three key steps: (1) detection of moving objects of interest, (2) tracking
such objects from frame to frame, and (3) analyzing object tracks to recognize their
behaviour (Yilmaz et al., 2006).
The first task is to define a suitable representation of the object. The object can be
represented in several ways, such as points, primitive geometric shapes or object contours.
The point is the simplest representation. The point representation is not suitable here
because we need to extract the size of the droplet from the video. A circular shape as
primitive geometric shape for droplet representation could be a good solution in order to
extract the size, but it may lead to wrong interpretation of the behaviour of the droplet
because it may be hard to distinguish adhesion from bounce. We therefore use a contour
representation for the droplet.
The next task is to determine the way to detect the object. Almost all tracking algorithms
require detection of the objects either in the first frame or in every frame. Objects can be
detected in the video in different ways. For instance, we can use point detector algorithms to find interest points in images. This method is well adapted to images with locally expressive texture, which is not the case for our images. Another way could be to use segmentation methods, but these can lead to detection errors after impact, when the droplet merges with the contact surface. To overcome these difficulties, we used background subtraction. We acquire a first image corresponding to the background when the droplet is out of the field of view, and then subtract this background from the following images that contain the droplet. Finally, supervised learning techniques could have been used to detect objects and correctly separate the surface from the droplets, but we rejected them because the learning step is too time-consuming.
We first perform an inversion of the image to get high intensity values for the pixels belonging
to the droplet. Then we apply the background subtraction, which allows us to detect only
moving objects in the scene. We now track these objects from frame to frame (Fig. 6). To do so,
we use a combination of two methods: shape matching and contour tracking.

Fig. 6. Sequence of droplet impact with adhesion.
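A minimal sketch of the detection step described above (image inversion followed by background subtraction) is given below; the threshold value is an illustrative assumption.

```python
# Sketch: droplet detection by image inversion and background subtraction.
# Generic NumPy illustration; the threshold value is an assumption.
import numpy as np

def detect_droplet_mask(frame, background, threshold=30):
    """Return a boolean mask of moving (droplet) pixels.

    frame, background: 8-bit greyscale images (backlit scene, droplet appears dark).
    """
    inv_frame = 255 - frame.astype(np.int16)          # droplet becomes bright
    inv_background = 255 - background.astype(np.int16)
    diff = inv_frame - inv_background                 # static scene cancels out
    return diff > threshold

if __name__ == "__main__":
    bg = np.full((64, 64), 220, dtype=np.uint8)       # bright, uniform backlight
    frame = bg.copy()
    frame[30:36, 30:36] = 60                          # dark synthetic "droplet"
    mask = detect_droplet_mask(frame, bg)
    print(mask.sum())                                 # -> 36 droplet pixels
```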

We consider two main stages in the video, i.e., the time before impact and the time after impact. Before impact, we use a shape matching algorithm, because the droplet keeps a circular shape (Fig. 7). We compute an area-perimeter ratio I defined as:

I = 4 \pi A / P^2    (2)

where A is the area of the object and P is the perimeter of the object. If I is equal to 1, the object has a circular shape and we can consider it to be a droplet. We include a tolerance of 5% on I in order to take into account small deformations of the droplet.

Fig. 7. Droplet detection using shape matching.
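The shape-matching test of equation (2) can be sketched as follows. The area and perimeter would in practice be measured on the extracted droplet contour; here they are passed in directly, and the analytic values of a circle and of an elongated ellipse are used only to illustrate the 5% tolerance.

```python
# Sketch: the area-perimeter ratio of equation (2) with the 5% tolerance
# described above. Area and perimeter are assumed to come from the extracted
# droplet contour; the demo uses analytic values for a circle and an ellipse.
import math

def circularity(area: float, perimeter: float) -> float:
    """I = 4*pi*A / P^2; equals 1 for a perfect circle."""
    return 4.0 * math.pi * area / perimeter ** 2

def is_droplet(area: float, perimeter: float, tolerance: float = 0.05) -> bool:
    return abs(circularity(area, perimeter) - 1.0) <= tolerance

if __name__ == "__main__":
    r = 20.0                                   # circle: I = 1 -> accepted as droplet
    print(is_droplet(math.pi * r * r, 2 * math.pi * r))
    a, b = 30.0, 10.0                          # 3:1 ellipse: clearly non-circular
    area = math.pi * a * b
    perimeter = math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))  # Ramanujan
    print(is_droplet(area, perimeter))         # -> False
```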
Once the droplet reaches the surface, it is subject to larger deformations during the steps of spreading and recoiling, and it is no longer possible to use shape matching to track the droplet. We therefore use a contour tracking technique named active contour, also known as the snake method. The development of active contour models results from the work of Kass et al.
(1988). A snake is an active (moving) contour, in which the points are attracted by edges and
other image boundaries. To keep the contour smooth, a membrane and thin plate energy is
used as contour regularization. Basically, snakes are trying to match a deformable model to
an image by means of energy minimization (Fig. 8). The energy functional which is
minimized is a weighted combination of internal and external forces. The internal forces
emanate from the shape of the snake, while the external forces come from the image and/or
from higher level image understanding processes. The snake is parametrically defined as v(s) = (x(s), y(s)), where x(s) and y(s) are the x, y coordinates along the contour and s \in [0, 1]. The energy functional of the snake is written:

E_{snake} = \int_0^1 [ E_{int}(v(s)) + E_{image}(v(s)) + E_{con}(v(s)) ] ds    (3)

- E_{int}: internal energy due to bending, which imposes a piecewise smoothness constraint.
- E_{image}: image forces pushing the snake toward image features (edges, lines, terminations).
- E_{con}: external constraints responsible for putting the snake near the desired local minimum.

The internal spline energy can be written:

E_{int} = \alpha(s) |dv/ds|^2 + \beta(s) |d^2 v / ds^2|^2    (4)

where \alpha(s) and \beta(s) specify the elasticity and stiffness of the snake, respectively.

The second term of the energy integral is derived from the image data over which the snake lies. A weighted combination of three different functionals is presented, which attracts the snake to lines, edges and terminations:

E_{image} = w_{line} E_{line} + w_{edge} E_{edge} + w_{term} E_{term}    (5)

The line-based functional may be very simple:

E_{line} = f(x, y)    (6)

where f(x, y) denotes the image grey level at image location (x, y). The sign of w_{line} specifies whether the snake is attracted to light or dark lines.

The edge-based functional attracts the snake to contours with large image gradients, i.e., to locations of strong edges:

E_{edge} = -| \nabla f(x, y) |^2    (7)

Line terminations and corners may influence the snake via the weighted energy functional E_{term}. Let C(x, y) = (G_\sigma(x, y) * f(x, y))^2 be a smoothed image, with G_\sigma a Gaussian with standard deviation \sigma. Let \theta = \tan^{-1}(C_y / C_x) be the gradient angle, n = (\cos\theta, \sin\theta) the unit vector along the gradient and n_\perp = (-\sin\theta, \cos\theta) the unit vector perpendicular to the gradient. E_{term} is defined using the curvature of the level lines in C(x, y):

E_{term} = \partial\theta / \partial n_\perp    (8)

The snake behaviour is controlled by adjusting the weights w_{line}, w_{edge} and w_{term}.

For the moment, only E_{int} and E_{image} are used to define the energy of our snake. In order to improve the process of energy minimization, i.e., to reduce the number of iterations in the minimization, we plan to add a third energy based on a priori knowledge about the deformation of the droplet.


(a) (b) (c) (d)
Fig. 8. (a) Previous contour displayed in image after inversion and background subtraction.
(b) Image representing external energy. (c) Image displaying snake evolution. (d) Current
contour displayed in original image.
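For illustration, the sketch below fits a snake to a synthetic backlit droplet image using the generic active contour implementation of scikit-image; the weights play the roles of the internal and image energies described above, and all parameter values are illustrative rather than the ones used in our system.

```python
# Sketch: contour tracking with an active contour (snake) using scikit-image's
# generic implementation. The image is synthetic, the initial contour is a
# circle around the previous droplet position, and all weights are illustrative.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Synthetic backlit frame: bright background, dark slightly flattened droplet.
rows, cols = np.mgrid[:200, :200]
frame = np.full((200, 200), 0.9)
frame[((rows - 120) / 28.0) ** 2 + ((cols - 100) / 35.0) ** 2 <= 1.0] = 0.1

# Initial contour: a circle placed roughly where the droplet was expected,
# given as (row, col) points.
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([120 + 45 * np.sin(s), 100 + 45 * np.cos(s)])

snake = active_contour(
    gaussian(frame, sigma=3),   # smoothed image provides the edge (image) energy
    init,
    alpha=0.015,                # elasticity (internal energy)
    beta=10.0,                  # stiffness (internal energy)
    w_edge=1.0,                 # attraction to strong gradients
    gamma=0.001,                # step size of the minimization
)
print(snake.shape)              # (200, 2) array of (row, col) contour points
```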
Our tracking method allows us to extract information about the size and velocity of the droplet and then to calculate the Weber number, We, a dimensionless number characterizing a droplet. We is the ratio between kinetic energy and surface energy (Richard & Quéré, 2000):

We = \rho D_0 v^2 / \sigma    (9)

with \rho the density of the liquid, D_0 the diameter of the spherical droplet, v the velocity of the droplet and \sigma the surface tension of the liquid.
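A minimal sketch of the Weber number calculation of equation (9) is given below; the liquid properties used in the example are those of water at room temperature and are illustrative only.

```python
# Sketch: Weber number of equation (9) from the measured diameter and velocity.
# Default property values are those of water at room temperature (illustrative).

def weber_number(diameter_m: float, velocity_m_s: float,
                 density_kg_m3: float = 998.0,
                 surface_tension_n_m: float = 0.072) -> float:
    """We = rho * D0 * v^2 / sigma (dimensionless)."""
    return density_kg_m3 * diameter_m * velocity_m_s ** 2 / surface_tension_n_m

if __name__ == "__main__":
    # A 300 um droplet impacting at 3 m/s
    print(round(weber_number(300e-6, 3.0), 1))   # -> ~37.4
```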
Beyond extracting the droplet’s features, our tracking method can automatically determine the behaviour of the droplet. For the moment, our algorithm only recognizes adhesion and bounce. Future improvements are planned in order to handle other behaviours, such as splashing or runoff.
5. Conclusion
The aim of this chapter is to show the potential of using high-speed imaging systems in
precision agriculture. Here, we present pesticide spraying and fertiliser spreading to
illustrate agricultural applications where HSI can be used to characterise their
processes. In centrifugal fertilizer spreading, we developed a HSI device based on a high-
speed camera and a high-power light. The images are taken at a frame rate of 1000
images/s. Then a newly developed image processing algorithm is used to determine the
grain velocities and trajectories necessary for the characterization of the centrifugal
spreading.
In pesticide spraying, we used a HSI system based on a high-speed camera and a back-light
system based on power LEDs to determine the pesticide droplet impact. The captured
images are used in a tracking algorithm that determines the behaviour of the droplet on the
impact surface.

The results obtained in both applications were promising. More work is needed to fully characterize the processes, such as determining the granulometry of the fertilizer grains, measuring the displacement of pesticide droplets in a real spraying process, and combining the calculated spray characteristics with leaf roughness.
Only two applications of HSI in agriculture were presented here. However, this technique
could be used in other areas of agriculture, such as harvesting, where a fast process needs to
be visualised or characterized.
6. Acknowledgment
Special thanks go to Mrs Miriam Levenson from ILVO for her help in reviewing this article.
7. References
Barron, J. L. & Thacker, N. A. (2005). Tutorial: Computing 2D and 3D optical flow. In
Imaging science and biomedical engineering division .pp 1–12. Manchester: Medical
School, University of Manchester. http://www.tina-vision.net/docs/memos/2004-
012.pdf.
Cloeter, M. D.; Qin, K.; Patil, P. & Smith, B. (2010). Planar Laser Induced Fluorescence (PLIF)
Flow Visualization applied to Agricultural Spray Nozzles with Sheet
Disintegration; Influence of an Oil-in-Water Emulsion. ILASS-Americas 22nd
Annual Conf. on Liquid Atomization and Spray Systems. Cincinnati, USA, May
2010.
Cointault, F.; Guérin, D.; Guillemin, J.P. & Chopinet, B. (2008). In-Field Wheat ears Counting
Using Color-Texture Image Analysis. Journal of Crop and Horticultural Science, Vol.
36,pp. 117–130.
Cointault, F.; Paindavoine, M. & Sarrazin, P. (2002). Fast imaging system for particle
projection analysis: application to fertilizer centrifugal spreading. Journal
Measurement Science and Technology, Vol. 13, pp. 1087-1093.
Cointault, F.; Sarrazin, P., & Paindavoine, M. (2003). Measurement of fertilizer granules
motion on a centrifugal spreader with a fast imaging system. Precision Agriculture,
Vol. 4, pp. 279–295.
Cointault, F., & Vangeyte, J. (2005). Development of low cost high speed photographic
imaging systems to measure outlet velocity of fertilizer granules during spreading.
In International fertiliser society meeting, Proceeding 555, London, UK. Available from:
http://www.fertiliser-society.org/Proceedings/US/Prc555.HTM.
Derksen R. C.; Zhu, H.; Ozkan, H. E.; Hammond, R. B.; Dorrance, A. E. & Spongberg, A. L.
(2008). Determining the influence of spray quality, nozzle type, spray volume, and
air assisted application strategies on deposition of pesticides in soybean canopy.
Transactions of the ASABE, Vol. 51, No. 5, pp. 1529-1537.
Grift, T. E. & Hofstee, J. W. (1997). Measurement of velocity and diameter of individual
fertilizer particles by an optical method. Journal of Agricultural Engineering Research,
Vol. 66, No. 3, pp. 235–238.
Hall, A.; Louis, J. & Lamb, D.(2003) Characterising and mapping vineyard canopy using
high-spatial-resolution aerial multispectral images. Computers and Geosciences, Vol.
29,pp. 813–822.

Hatem, A. B. (1997). Software development for particle tracking velocimetry. University of
Nottingham, United Kingdom.
Hijazi, B.; Cointault, F.; Yang, F. & Paindavoine, M. (2008). High Speed Motion Estimation of
Fertilizer Granules with Gabor Filters. In H. Kleine & M. Guille´n (Eds.), 28th
International congress on high speed imaging and photonics proceedings, SPIE, Canberra,
Australia.
Hijazi, B.; Cointault, F.; Dubois, J.; Coudert, S.; Vangeyte, J.; Pieters, J. & Paindavoine, M.
(2010). Multi-phase cross-correlation method for motion estimation of fertiliser
granules during centrifugal spreading. Precision agriculture, Vol. 11, No. 6, pp. 684-
702
Hijazi, B.; Vangeyte, J.; Cointault, F.; Dubois, J.; Coudert, S.; Paindavoine, M. & Pieters, J.
(2011), "Two-step cross correlation-based algorithm for motion estimation applied
to fertilizer granules' motion during centrifugal spreading," Optical Engineering.
Vol. 50, No. 6, pp. 067002.
Hill, B. D. & Inaba, D. J. (1989). Use of water-sensitive paper to monitor the deposition of
aerially applied insecticides. Journal of Economic Entomology, Vol. 82, No. 3, pp. 974-
980.
Kashdan, J. T.; Shrimpton, J. S. & Whybrew, A. (2004 a). Two-phase characterization by
automated digital image analysis. Part 2: Application of PDIA for sizing sprays.
Particle & Particle Systems Characterization, Vol. 21, No. 1, pp. 15-23.
Kashdan, J. T., Shrimpton, J. S. & Whybrew, A. (2004 b). Two-phase characterization by
automated digital image analysis. Part 1: Fundamental principles and calibration of
the technique. Particle & Particle Systems Characterization, Vol. 20, No. 6, pp. 387-397.
Kashdan, J. T.; Shrimpton, J. S. & Whybrew, A. (2007). A digital image analysis technique for
quantitative characterization of high-speed sprays. Optical Laser Engineering, Vol.
45, pp. 106-115.
Kass, M.; Witkin, A. & Terzopoulos, D. (1988). Snakes: Active contour models. International
journal of computer vision, Vol. 1, No. 4, pp. 321-331.
Kim, K. S. & Kim, S. S. (1994). Drop sizing and depth-of-field correction in TV imaging.
Atomization and Sprays. Vol. 4, pp. 65-78.
Lad, N.; Aroussi, A & Muhamad, M. F. S. (2011). Droplet size measurement for Liquid Spray
using Digital Image Analysis Technique Lad. Journal of Applied Sciences, Vol. 11, No.
11, pp. 1966-1972.
Mavros, P. (2001). Flow visualization in stirred vessels. Trans IChemE, Vol. 79, Part A.
Mulligan, D.; Bouraoui, F.; Grizzetti, B.; Aloe, A. & Dusart, J. (2006). An atlas of Pan-
European data for investigating the fate of agrochemicals in terrestrial ecosystems.
Available from:
http://www.environmentalexpert.com/sign_in.asp?vienede=http://www.enviro
nmental-expert.com/articleemailformbd_login.asp?cid=27957&codi=26379.
Nuyttens, D. (2007). Drift from field crop sprayers: The influence of spray application
technology determined using indirect and direct drift assessment means. PhD thesis
nr. 772, Katholieke Universiteit Leuven. 293 pp. ISBN 978-90-8826-039-1.
Nuyttens, D.; De Schampheleire, M.; Baetens, K. & Sonck, B. (2007a). The influence of
operator controlled variables on spray drift from field crop sprayers. Transactions of
the ASABE, Vol. 50, No. 4, pp. 1129-1140.

Nuyttens, D.; Baetens, K.; De Schampheleire, M. & Sonck B. (2007b). Effect of nozzle type,
size and pressure on spray droplet characteristics. Biosystems Engineering. Vol. 97,
No. 3, pp. 333-345.
Nuyttens, D.; Braekman, P.; Windey, S. & Sonck, B. (2009a). Potential dermal pesticide
exposure affected by greenhouse spray application technique. Pest Management
Science, Vol. 65, No. 7, pp. 781-790.
Nuyttens, D.; De Schampheleire, M.; Verboven, P., Brusselman, E. & Dekeyser, D. (2009b).
Droplet size-velocity characteristics of agricultural sprays. Transactions of the
ASABE, Vol. 52, No. 5, pp. 1471- 1480.
Olieslagers, R.; Ramon, H. & De Baerdemaeker, J.(1996). Calculation of Fertilizer
Distribution Patterns from a Spinning Disc Spreader by means of a Simulation
Model. Journal of Agricultural Engineering Research Vol. 63, No.2, pp. 137-152.
Van Liedekerke, P.; Piron, E.; Vangeyte, J.; Villette, S.; Ramon, H. & Tijskens, E. (2008).
Recent results of experimentation and DEM modeling of centrifugal fertilizer
spreading. Granular Matter Vol.10, pp. 247-255.
Paul, E. L.; Atiemo-Obeng, V. A. & Kresta, S. M. (2004). Handbook of Industrial Mixing: Science
and Practice. John Wiley & Sons, INC., Publication.
Persson, K. & Skovsgaard, H. (1998). Fertiliser characteristics and spreading patterns from
centrifugal spreaders. Proceedings of the International Conference on Agricultural
Engineering, AgEng, Oslo, Norway, Paper No. 98-A-058
Piron, E. & Miclet, D. (2006). Spatial distribution measurement: a new method for the
evaluation and testing of centrifugal spreaders. In: Proceedings Second International
Symposium on Centrifugal Fertiliser Spreading, Cemagref, Montoldre, France, October
24-25, 2006
Reichard, L. D.; Cooper, J. A.; Bukovac, M. J. & Fox, R. D. (1998). Using a Videographic
system to Assess Spray Droplet Impaction and Reflection from Leaf and Artificial
Surfaces. Pesticide Science,Vol. 53, pp. 291-299.
Rhodes, M. (1998). Introduction to Particle Technology. John Wiley and Sons Inc.. New Jersey,
USA.
Richard, D. & Quéré, D. (2000). Bouncing water drops. Europhysics Letters (EPL), Vol. 50, pp. 769-775.
Schick, R. (1997). An engineer’s practical guide to drop size. Spraying Systems Co. Wheaton,
Illinois, USA.
Šikalo, Š.; Wilhelm, H. D.; Roisman, I. V.; Jakirlić, S. & Tropea, C. (2005). Dynamic contact
angle of spreading droplets: Experiments and simulations. Physics of Fluids, Vol.17,
No. 6, pp. 062103.
Smith, D. B.; Askew S. D.; Morris, W. H. & Boyette, M. (2000). Droplet size and leaf
morphology effects on pesticide spray deposition. Transactions of the ASAE, Vol. 43
No. 2,pp. 255-259.
Spinei, A.; Pellerin, D. & Herault, J. (1998). Spatiotemporal energy-based method for velocity
estimation, Signal processing, Vol. 65, pp. 347-362.
Stainier, C.; Destain, M. F.; Schiffers, B. & Lebeau, F. (2006 a). Droplet size spectra and drift
effect of two phenmedipham formulations and four adjuvant mixtures. Crop
Protection. Vol. 25, pp. 1238-1243.
Teske, M. E.; Thistle, H. W. & Hewitt, A. J. (2000). Conversion of droplet size distributions
from PMS optical array probe to Malvern laser diffraction. Proceedings ICLASS
2000, Pasadena, CA, USA.

Vangeyte, J. & Sonck, B. (2005). Image analysis of particle trajectories. In B. Tijskens, & H.
Ramon (Eds.), Proceedings of the 1st international symposium on centrifugal fertiliser
spreading. KULeuven: Leuven, Belgium.
Vangeyte, J. ; Sonck B.; Van Liedekerke, P. & Ramon, H. (2004). Comparison of two methods
to measure the outlet velocity of fertilizer grains from a rotary disc. Proceedings of
AgEng 200 edited by the Technology institute, pp. 366-337.
Villette, S.; Cointault, F.; Piron, E.;Chopinet, B. & Paindavoine, M. (2007). A simple imaging
system to measure velocity and improve the quality of fertilizer spreading in
agriculture. Journal of Electronic imaging, Vol. 17, No.3, pp. 1109–1119.
Yilmaz, A.; Javed, O. & Shah, M. (2006) Object tracking: A survey. Acm Computing Surveys
(CSUR), Vol. 38, No. 4, pp.13.
Part 10
Management

14
Team Building for Implementation of
Concurrent Engineering Loops
Lidija Rihar, Janez Kušar, Tomaž Berlec and Marko Starbek
University of Ljubljana, Faculty of Mechanical Engineering
Slovenia
1. Introduction
The essence of modern production is to make a product that a customer needs, as quickly
and as cheaply as possible. Under these conditions, only a company that can provide
customers with the right products, produced at the right time, at the right location, of
required quality and at an acceptable price, can expect global market success. A product that
is not produced in accordance with the wishes and requirements of customers, which hits
the market too late and/or is too expensive, will not survive competitive pressure (Kušar et
al., 2007; Dickman, 2009). The customer should therefore participate in the process of
concurrent realisation of a product as early as possible (Starbek et al., 2003; Kušar et al.,
2004) He can participate by expressing his wishes and requirements regarding project
definition. The customer should be a temporary member of project teams in concurrent
product realisation loops.
The main feature of sequential product realisation is the sequential execution of stages in the
product realisation process (Prasad, 1996). The observed stage of the product realisation
process can only begin after the preceding stage has been completed. Data on the observed
process stage are built gradually and are completed at the end of the stage—the data are
then forwarded to the next stage (Rihar et al., 2010).
In contrast with sequential product realisation, the main feature of concurrent product
realisation is the concurrent execution of stages in the product realisation process (Prasad,
1996). In this case, the observed stage can begin before the preceding stage has been
completed. Data on the observed process stage are collected gradually and are forwarded
continuously to the next stage (Rihar et al., 2010).
A transition from sequential to concurrent product realisation considerably reduces the time
and costs of product realisation (Rihar et al., 2010), as shown in Figure 1.
It can be seen from Figure 1 that product definition costs rise uniformly in sequential
product realisation, because of sequential execution of product definition activities
(marketing, product draft, product development, elaboration of design documentation,
material management), while production costs rise rapidly, due to long iteration loops for
carrying out changes or eliminating errors.


Fig. 1. Time and costs of sequential and concurrent product realisation
The cost of product definition is much higher in concurrent product realisation, due to the
parallel execution of activities (more work is done during this stage), while production costs
are much lower than in sequential realisation, due to short iteration loops for carrying out
changes and eliminating errors.
In concurrent product realisation, there are interactions between individual stages of the
product realisation process. Track-and-loop technology has been developed for executing
these interactions (Prasad, 1996; Dickman, 2009). The type of loop defines the type of co-
operation between overlapping stages of the concurrent product realisation process. Winner
et al. (1988) suggest that 3-T loops should be used where interactions exist between three levels of a concurrent product realisation process.
A transformation of input into output is made in every loop, on the basis of requirements
and restrictions (Prasad, 1996), as shown in the information flow diagram of the track-and-loop process of concurrent product realisation (Kušar et al., 2004).
In small companies, a two-level team structure is planned for execution of 3-T loops of a
concurrent product realisation process with a variable structure of core and project teams
(Duhovnik et al. 2001; Rihar et al., 2010). The task of the core team is process support and
control, while the task of (virtual) project teams is execution of the tasks defined within the
concurrent product realisation process.
It is obvious that concurrent product realisation is not possible without well-organised
teamwork or virtual teamwork, which is the means for organisation integration. It
incorporates:
 the formation of a core team, project teams or virtual project teams in product
realisation loops,
 the selection of communication tools for the core team, project teams or virtual project
teams,
 the definition of a communication matrix.

2. Teamwork in concurrent product realisation
Teamwork is a precondition for transition to concurrent product realisation.
2.1 Forming teams or virtual teams for concurrent product realisation
Analysis of teams in small companies (Figure 2) led the employees of the LAPS laboratory at
the Faculty of Mechanical Engineering in Ljubljana, Slovenia, to the conclusion that
concurrent product realisation required a shift from the terms "team" and "teamwork" to
"virtual team" and "virtual teamwork" (Rad & Levin, 2003; Duhovnik et al., 2009; Köster,
2010) when forming project teams.

Fig. 2. Two-level team structure in the track and loop process of concurrent product
realisation
A team is defined as a small group of people with complementary abilities that are activated
in order to achieve the common goal for which they are all responsible. Team members are
at the same location, in the same room.
A virtual team is defined as a team consisting of members that are located in various
buildings, countries or states and their cooperation is not limited by distance, organisation
or national borders. Virtual teams are formed to carry out a specific project. The teams are
disbanded when the project is finished.
A geographically dispersed virtual team allows a company to select the best team members,
regardless of their locations. There is also a substantial saving in time and costs of virtual

team operation. Moreover, a virtual team can often have short meetings (if needed), which is
physically difficult to achieve with a "classical" team.
Experience in solving problems related to forming teams or virtual teams (Kušar et al., 2008;
Žargi et al., 2009; Palčič et al., 2010) led the laboratory researchers to the conclusion that a
virtual team should be formed in the following steps:
Step 1: Identifying the need for a virtual team
Globalisation, global competition and rapid market changes require high-quality information that is relevant and inexpensive. If a company does not have the required experts in its proximity, it has to form virtual team(s) for concurrent product realisation.
Step 2: Definition of virtual team tasks
Virtual team tasks must be clearly defined, with task execution processes described in detail.
All virtual team members must understand their tasks, roles and responsibilities in the same
way. The goals of the virtual team must be clearly defined and accepted by all members of
the virtual team.
Step 3: Definition of procedures and processes for achieving the common goal
Operative procedures and processes that will ensure perfect operation of the virtual team
must be developed and implemented in a virtual team. Members of the virtual team must
understand how and in what sequence the concurrent product realisation tasks will be
executed.
Step 4: Selection of virtual team members
In this step, it is necessary to decide what types of expert knowledge are required for
successful execution of activities in the loops of product realisation, and which experts
would be best for performing these activities. The selected members of a virtual team should
be able to work efficiently in a virtual environment with the aid of ICT infrastructure for
virtual team operation.
Step 5: Appointment of a virtual team leader
The success of a virtual team leader depends on his skills, tools, techniques and strategies in
a virtual environment. Because of many different forms of expert knowledge and leadership
abilities, it is possible to rotate the virtual team leader—various members of a virtual team
can undertake the role of team leader at various stages of the product realisation process.
2.2 Communication tools used in teams and virtual teams for concurrent product
realisation
Members of (virtual) teams must constantly communicate in order successfully to perform
their tasks and to achieve the common goal. This is possible by using the available hardware
and software (Duarte & Snyder, 2006).
Hardware includes telephones, modems and communication links (Internet connections).
These are used for data transfer and for video conferences. Software includes efficient
programs, LAN, communication and other tools for holding meetings.

It is possible to achieve efficient communication between members of the core team and
virtual project teams by using the Internet. Several Internet-based communication tools exist
for efficient communication among team members.
Team meeting
The most common and efficient type of communication is a team meeting. The team
leader calls a meeting and sends the agenda, required material and proposals for
decisions.
The team members gather at the agreed time in the appointed room, which should be quiet,
pleasant and fitted with audio- and video equipment.
The team leader or moderator chairs the meeting. Team members deal with the problems
in accordance with the agenda and, as a rule, conclusions are adopted unanimously.
During the meeting, a record is kept and the minutes are sent to all team members after
the meeting.
Team members know each other well, which contributes to establishing good relations and
trust within the team.
It is possible to improve the efficiency of meetings by using methods of creative search and
evaluation of ideas (Scheer, 2007).
Video conference
If the team members are in the same room when they create a document, they gather around a PC. If they are at different locations but connected by the Internet, they need a tool
for bi-directional video and audio transfer—this is a video conference.
If a video conference is held via the Internet, a high performance PC, additional equipment
for high-quality video and audio processing and a high-speed Internet connection to the
distant system (the other point of the video conference) are required.
A video camera is used for filming, with its results shown on a monitor; sound cards and
microphones process audio signals and loudspeakers reproduce the sound.
A video conference can be organised in several different ways:
 a video conference between two users (full-duplex transfer of audio and video signals),
 a video conference between a single user on one side and several users on the other
(full-duplex distributed transfer of audio and video signals across the network),
 a video conference between several users, in which video and audio signals are
transmitted from more than two locations, but they are displayed on one monitor at a
time only (half-duplex mode).
Figure 3 shows the principle of video conference organisation.
In order to use video conference equipment via the Internet with anybody connected to the
Internet anywhere, it is necessary to use standard equipment. The H.323 standard defines
protocols for video conference communications via the Internet. All video conference
equipment should therefore be compatible with the H.323 standard.







Fig. 3. Video conference
Audio conference
An audio conference is similar to a video conference, but without video transfer. The
purpose of an audio conference is to hold an electronic meeting of two or more virtual team
members at different locations.
The following hardware is required for an audio conference:
 a gateway server connects PBXs to the conference bridge,
 PCs or PBXs are connected to the server via the Internet,
 fixed line or mobile phones.
Software for audio conferences is based on LAN and WAN Internet communications, as
well as IP and VOIP technologies. During an audio conference, the caller makes a
connection from a PC or PBX (which connects stationary and mobile phones) via a VOIP
output to the Internet. A gateway server enables connection with other audio conference
participants.
A user can join the audio conference system by entering a password (PIN code). Figure 4
shows the principle of audio conference organisation.






Fig. 4. Audio conference
Voice mail
Voice mail is used for the transmission of short voice messages between virtual team
members. It is often used in combination with phone communications. If a virtual team
member is not accessible by phone, the caller can leave him a short message.
E-mail
E-mail allows the transmission of voice, pictures and text documents in electronic format
(paper documents can also be converted to electronic format). E-mail increases team
communication capacities.
An e-mail system consists of two servers:
 an SMTP server for sending outgoing messages,
 a POP3 or IMAP server for the transmission of incoming messages.
An e-mail consists of a short message text and attached documents (files). The problem with
e-mail is that messages can get lost or the server on the recipient side can decide that they
are spam and delete them. Another problem may be a vast number of e-mail messages, so
the recipient spends a lot of time reading and answering. E-mails are often integrated with
central web data warehouses that allow traceability and access to messages. Figure 5 shows
the principle of organising an e-mail system with a data warehouse.


Fig. 5. E-mail system with data warehouse
Groupware
Groupware is a universal system for joining virtual team members and can be used anytime
and anywhere. Groupware tools allow simple, rapid, reliable and cheap communication
among virtual team members without any limitations. An example of groupware use is
given in Figure 6.

Fig. 6. Groupware

Groupware tools can be used to create a virtual office, which allows creative teamwork,
supported by the Internet and World Wide Web. The creation of a virtual office with
groupware is shown in Figure 7.

Fig. 7. Creation of a virtual office with groupware
Steps in the creation of a virtual office:
Step 1: The virtual team leader establishes contacts with other virtual team members, e.g. by
e-mail.
Step 2: The virtual team leader defines the communication process in the Internet
environment, which represents a virtual office.
Step 3: Other virtual team members log into the virtual office using their passwords.
Step 4: Cooperation and exchange of information among virtual team members is performed
only via the virtual office.
Electronic white board
An electronic white board is a combination of hardware and software tools that serve as a
support to team meetings. It can be portable or fixed. It allows writing and drawing during
a team meeting. The text on the white board can be stored in electronic format and sent via
communication channels to other virtual team members, e.g., during a video conference or
groupware use.
2.3 Advantages and drawbacks of communication tools
The research group at the Laboratory for Manufacturing Systems at the Faculty of
Mechanical Engineering in Ljubljana, Slovenia, decided to analyse the characteristics,

advantages and drawbacks of communication tools required in (virtual) teamwork of
concurrent product realisation.
On the basis of collected and verified data from vendors of (virtual) teamwork
communication tools, every team member made a list of the features, advantages and
drawbacks of these tools. The team leader then organized a creativity workshop to obtain a
coordinated proposal of the features, advantages and drawbacks of available
communication tools. The results of the creativity workshop are shown in Table 1.

TEAM MEETING (at one location); suitable for: TEAMWORK
- Features: the best tool for real-time communication, because of personal contact and visual and verbal communication between team members; meetings can be formal or informal.
- Advantages: visual and verbal communication; personal contact between team members; all team members know each other; participants can prepare for a meeting.
- Drawbacks: all team members must have time to attend the meeting; much time needed for travel; high travel costs.

VIDEO CONFERENCE; suitable for: VIRTUAL TEAM
- Features: a good tool for real-time communication, because of visual and verbal communication and the possibility of interaction between team members; no direct personal contact between team members.
- Advantages: visual and verbal communication; indirect personal contact; prompt communication; no expensive travel; saving in time; team members can prepare for a meeting if they know its purpose and agenda in advance.
- Drawbacks: requires audio/video equipment; all team members must be in the video conference room at the same time; preparation in advance is required; time delay of video due to distance; high costs of hiring communication channels.

AUDIO CONFERENCE; suitable for: VIRTUAL TEAM
- Features: a good tool for real-time communication; verbal communication and the possibility of interaction between team members; functions in the Internet environment; a reliable and always available communication tool.
- Advantages: participants can be at various locations; participants only need an Internet connection; low cost of use.
- Drawbacks: only verbal communication; participants must be simultaneously present in the communication network.

VOICE MAIL; suitable for: VIRTUAL TEAM
- Features: a tool for impersonal communication; for urgent messages only.
- Advantages: the message is sent to the recipient regardless of his presence; the recipient has time to prepare an answer.
- Drawbacks: impersonal communication; suitable only for urgent, short messages.

E-MAIL; suitable for: VIRTUAL TEAM
- Features: impersonal communication without visual and verbal communication; no interaction between team members.
- Advantages: useful for sending text messages and documents; return receipt.
- Drawbacks: impersonal communication; limited size of documents to be sent.

GROUPWARE; suitable for: VIRTUAL TEAM
- Features: allows verbal communication between team members; exchange of information in real time; simultaneous communication between several team members; during task execution the system allows simultaneous work of several participants at various locations; common databases; the communication process must be defined in advance.
- Advantages: simultaneous cooperation of team participants at various locations; concurrent exchange of data and information; access to data on a common server; video communication is possible with additional video equipment; information can be sent to team members via voice mail.
- Drawbacks: high burden on computer communications; high data-transmission costs.

ELECTRONIC WHITE BOARD; suitable for: TEAMWORK and VIRTUAL TEAM
- Features: a portable or fixed board that allows electronic data acquisition, exchange and archiving.
- Advantages: simple to use; intended for taking notes on results; rapid electronic transfer of the board contents to other team members.
- Drawbacks: high investment cost; expensive and complicated maintenance.

Table 1. Advantages and drawbacks of tools for (virtual) teamwork
It can be seen from Table 1 that only two types of communication tools are suitable for
teamwork (team meeting and electronic white board), while other tools are suitable for
virtual teamwork.
Analysis of several examples of virtual teamwork showed that virtual teamwork is
successful if four organisational roles are filled in the team:
Role 1: Convener of the virtual team meeting (defines goals, expected results and specifies
the agenda).
Role 2: Technical assistant (prepares the meeting, tests the operation of the communication
tools before the meeting and ensures flawless operation during the meeting).
Role 3: Virtual team leader (ensures the successful work of the virtual team by explaining
specific questions).
Role 4: Other virtual team members (prepare themselves for the meeting and participate
actively during the meeting).

2.4 Communication matrix in product realisation loops
The communication matrix defines the method of exchanging information and documents
in the execution of concurrent product realisation activity loops. A list (Table 2) must be
made for every activity:
 input information with required documents for beginning execution of the activity,
 output information with required documents that arise from execution of the activity,
 tools for creating and storing information,
 sender of the information or document,
 receiver of the information or document,
 communication tool used for information exchange.

ID | Input information–document | Activity | Output information–document | Tools used | Information (document) sent by | Information (document) received by | Communication tool
1 | Input information of activity 1 | ACTIVITY 1 | Output information of activity 1 | ... | Sender 1 | Receiver 1 | Tool 1
2 | ... | ... | ... | ... | ... | ... | ...
3 | ... | ... | ... | ... | ... | ... | ...
4 | ... | ... | ... | ... | ... | ... | ...
n | Input information of activity n | ACTIVITY n | Output information of activity n | ... | Sender n | Receiver n | Tool n
Table 2. Communication matrix in concurrent product realisation loops
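To illustrate how such a matrix could be handled in practice, the following minimal sketch (an editor's illustration under assumed names, not part of the original methodology) represents one row of the communication matrix from Table 2 as a simple data structure; the example row content is purely hypothetical.

from dataclasses import dataclass
from typing import List

@dataclass
class CommunicationMatrixRow:
    activity_id: int
    input_information: str     # input information/document needed to start the activity
    activity: str
    output_information: str    # output information/document arising from the activity
    tools_used: str            # tools for creating and storing the information
    sent_by: str               # sender of the information or document
    received_by: str           # receiver of the information or document
    communication_tool: str    # communication tool used for the exchange

# Hypothetical first row of a loop's communication matrix
matrix: List[CommunicationMatrixRow] = [
    CommunicationMatrixRow(
        activity_id=1,
        input_information="Customer inquiry",
        activity="Review of customer inquiry",
        output_information="Decision on preparing an offer",
        tools_used="ERP system",
        sent_by="Sales department",
        received_by="Project manager",
        communication_tool="E-mail",
    ),
]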
3. Concurrent realisation of a pedal assembly
A company decided to make a project plan for concurrent realisation of a pedal assembly
and to carry out this project (Figure 8).

Fig. 8. Pedal assembly

The goal of the project was to make a competitive pedal assembly, suitable in terms of
quality, reliability, mass, price and realisation time. Concurrent realisation of the pedal
assembly was divided into six stages:
Stage 1: Preparation of the pedal assembly realisation project,
Stage 2: Development of the pedal assembly,
Stage 3: Development of the pedal assembly realisation process,
Stage 4: Test production of pedal assembly,
Stage 5: Qualification of the pedal assembly realisation process,
Stage 6: Regular production of the pedal assembly.
There were 280 activities and five loops of concurrent realisation of the pedal assembly
within the six stages of pedal assembly realisation:
1. Order acquisition loop (3-T loop),
2. Pedal assembly development loop (3-T loop),
3. Pedal assembly process loop (3-T loop),
4. Pedal assembly qualification loop (3-T loop),
5. Completion of the project of pedal assembly realisation loop (2-T loop).
Figure 9 shows how the loops are formed, and the type of cooperation within realisation
stages.

(Figure 9 plots the pedal assembly realisation stages – project preparation, product development, process development, test manufacturing, process qualification and regular production – against pedal assembly realisation time, marking the milestones acquisition of customer inquiry, acquisition of customer order, program confirmation, prototype, pre-series, series and beginning of regular production, the T-3 and T-2 loops, and feedback information analysis and corrective measures.)
Fig. 9. Loops of concurrent realisation of pedal assembly

3.1 Forming teams / virtual teams for realisation of pedal assembly
After seeing the presentation of two- and three-level structures of (virtual) teams in product
realisation loops (Duhovnik et al., 2001; Kušar et al., 2004) the company management
selected a two-level team structure, whereby the core team is on the first level and five
virtual project teams are on the second level (Figure 10).




Fig. 10. Structure of teams for concurrent realisation of pedal assembly
Forming the core team
The core team for concurrent realisation of the pedal assembly will monitor the whole
project, solve organisational issues and coordinate the strategy of performing tasks. The
company management decided that the following people would be members of the core
team:
 project manager (PM)—permanent member,
 project team leader of a particular loop (VPL)—non-permanent member,
 head of supply department (external supply and sales of investment funds—
PUR+SIF)—permanent member,
 head of sales and sales logistics department (S+LD)—permanent member,
 head of development department (DEV)—permanent member,

 head of industrialisation and development of manufacturing technology department
(IND+MTD)—permanent member,
 head of manufacturing planning and supply, maintenance and manufacturing centre
(MP+MNT+MC)—permanent member,
 head of quality control department (Q)—permanent member,
 head of suppliers (SUP)—permanent member,
 head of customers (CUS)—permanent member.
Figure 11 shows the structure of the core team for concurrent realisation of the pedal
assembly.




Fig. 11. Core team structure
Core team members (with the exception of the project manager) will work on the project
part of their working time and the rest of the time they will perform tasks in their
departments. The project team manager will be outside his department throughout the
project duration and will work full time on the project. When the project is finished the
project team manager will return to his department.
Forming virtual project teams for the loops of concurrent realisation of the pedal
assembly
As shown in Figure 10, there will be five virtual project teams in loops of concurrent
realisation of the pedal assembly. Members of virtual teams will be experts from 14
company departments and two representatives from strategic suppliers and customers,
depending on the level of assigned responsibility for execution of activities within a
particular loop. Figure 12 presents a Gantt chart of the first loop of concurrent realisation of
the pedal assembly: "Order acquisition loop".

Intensity of responsibility of virtual team members during execution of loops of concurrent realisation of pedal assembly | POINTS
Member is informed | 1
Member participates | 3
Member has responsibility | 9
Table 3. Intensity of responsibility of virtual team members


Fig. 12. Gantt chart of the "Order acquisition loop"

When the company receives a customer inquiry, loop 1 activities (Order acquisition loop) are started; its
three stages are: project preparation, development of the pedal assembly and development
of the pedal assembly process. This loop is executed when the sales department considers
that it is sensible to make an offer for the realisation of the pedal assembly.
Loop 1 is followed by loops 2, 3, 4 and 5. The project manager decided (in agreement with
the company management) that the intensity of responsibility of each virtual team member
during the execution of activities would be marked by a 1-3-9 method, as shown in Table 3.
A creativity workshop was organised with 14 representatives from company departments,
as well as representatives from suppliers and customers. The goal of the workshop was to
score the intensity of responsibility of virtual team members when executing the activities of
the five loops in concurrent realisation of the pedal assembly.
The results of scoring the intensity of responsibility of virtual team members during execution
of the first loop of concurrent realisation of the pedal assembly are presented in Table 4.
Table 4 shows the responsibilities of each virtual team member for the execution of activities in the first loop of pedal assembly realisation.
The procedure of scoring the intensity of responsibility of virtual team members was also
carried out for the other loops.
From the sum of points assigned to the i-th team member during execution of activity in the
j-th loop, a factor of total intensity of responsibility of the i-th member in the j-th loop can be
calculated as:

FTI_ij = SMP_ij / SAP_j (1)

where
FTI_ij – factor of total intensity of responsibility of the i-th team member in the j-th loop,
SMP_ij – sum of the points assigned to the i-th member in the j-th loop,
SAP_j – sum of all points assigned in the j-th loop.
The results of the calculation of the total intensity of responsibility factor of virtual project
team members during execution of activities in all five loops of concurrent realisation of
pedal assembly are shown in Table 5.
After they had made an overview of the total intensity of responsibility factors of virtual
team members during execution of activities in the loops of pedal assembly realisation, the
creativity workshop participants reached the following conclusions:
 the i-th member of the virtual project team (VPT) of the j-th loop of realisation of the pedal assembly, with the maximum factor of total intensity of responsibility, would be appointed as project team leader (PTL) of the j-th loop,
 representatives from departments with a total intensity of responsibility factor above
5% would also be included in the j-th loop of pedal assembly realisation,
 representatives of suppliers and customers would also be included in the j-th loop of
pedal assembly realisation, regardless of their total intensity of responsibility factor, in
order to avoid misunderstanding suppliers' and customers' requirements.
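The following small sketch (an editor's illustration with made-up scores, not data from the project) shows how equation (1) and the selection rules above could be applied: the 1-3-9 points of each member are summed over the activities of a loop, normalised by the total points of the loop, the member with the maximum factor is proposed as loop leader, and members with a factor above 5% are proposed as team members.

# Hypothetical 1-3-9 scores of four members over the activities of one loop
scores = {
    "S":   [9, 9, 3, 9],   # sales department
    "DEV": [3, 9, 9, 1],   # development department
    "PM":  [3, 3, 3, 3],   # project manager
    "Q":   [1, 1, 3, 1],   # quality control department
}

sap_j = sum(sum(points) for points in scores.values())   # SAP_j: all points assigned in loop j
fti = {member: sum(points) / sap_j                        # FTI_ij = SMP_ij / SAP_j
       for member, points in scores.items()}

leader = max(fti, key=fti.get)                            # maximum factor -> loop leader
team = [member for member, factor in fti.items() if factor > 0.05]

for member, factor in sorted(fti.items(), key=lambda item: -item[1]):
    print(f"{member}: {factor:.3f}")
print("Proposed loop leader:", leader)
print("Proposed team members (factor above 5%):", team)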

LEGEND:
1 MNG – Management
2 S – Sales department
3 PM – Project manager
4 DEV – Development department
5 IND – Industrialisation department
6 Q – Quality control department
7 MTD – Manufacturing technology development department
8 SIF – Investment funds supply department
9 PUR – Supply department
10 MC – Manufacturing centre
11 MP – Manufacturing planning and supply department
12 MNT – Maintenance department
13 AD – Accounting department
14 LD – Sales logistics department
15 SUP – Suppliers
16 CUS – Customer

Table 4. Scoring the intensity of responsibility of virtual team members in the "Order
acquisition loop"
Figure 13 presents the structure of virtual project teams of five loops in concurrent
realisation of the pedal assembly.




Fig. 13. Virtual project teams in the loops of concurrent realisation of the pedal assembly


Table 5. Factors of total intensity of responsibility of virtual project team members during
execution of loops of pedal assembly realisation

Table 6. Communication matrix for execution of "Order acquisition loop" activities

3.2 Forming the communication matrix
A creativity workshop was organised with 14 representatives from company departments,
as well as representatives from suppliers and customers. The goal of the workshop was to
define for every activity in the loops of concurrent realisation of pedal assembly:
 input information with required documents for beginning execution of an individual
activity,
 output information with required documents that arise from execution of an individual
activity,
 tools for creation and storage of information,
 senders of information or documents,
 receivers of information or documents, and
 the mode of sending the information or documents.
Table 6 shows some results of the creativity workshop regarding the formation of the communication matrix for execution of activities of the "Order acquisition loop".
The communication matrix defines in advance the mode of information exchange and
communication tools required.
4. Conclusion
The paper emphasises that concurrent product realisation is not possible without well-
organised teamwork or virtual teamwork.
A two-level team structure of a track-and-loop process of concurrent product realisation,
suitable for small companies, is presented. An overview is given of available communication
tools for teamwork/virtual teamwork, with the advantages and drawbacks of individual
tools. The content of the communication matrix of concurrent product realisation is formed,
defining the exchange of information/documents in the execution of concurrent product
realisation activity loops.
Special attention in this paper is given to the presentation of the methodology for design of
concurrent engineering loops and to the determination of team members / virtual teams for
concurrent product realisation process.
The determination of core team members and project team members is based on the calculation of the total intensity of responsibility factor of the participants in the project of concurrent product and process realisation (functional units of the company, customer, suppliers, subcontractors).
The suggested methodology of forming teams or virtual teams and communication matrix
of concurrent product realisation was tested on a study case of a pedal assembly.
The pedal assembly project is divided into five concurrent engineering loops. The team / virtual team member with the maximum total intensity of responsibility factor leads the project team for realisation of a concurrent engineering loop (the sales department leads the team for realisation of the first concurrent engineering loop).

Further work on solving concurrent product realisation problems will be focused on making
a catalogue of the entire concurrent product realisation process using ARIS—a tool for
process modelling and reengineering (Scheer, 1999).
5. References
Dickman, P. (2009). Schlanker Materialfluss, Springer-Verlag, ISBN 978-3-540-79514-8, Berlin –
Heidelberg
Duarte, D.L., Snyder, N.T. (2006). Mastering Virtual Teams, Jossey-Bass, cop., ISBN 0-7879-
8280-6, San Francisco, CA
Duhovnik, J., Starbek, M., Dwivedi, S.N., Prasad, B. (2001). Development of New Products
in Small Companies, Concurrent engineering: Research and Applications, Vol.9, No.3
(September 2001), pp 191-210, ISSN 1063-293x
Duhovnik, J., Žargi, U., Kušar, J., Starbek, M.(2009). Project-driven concurrent product
development. Concurrent engineering: Research and Applications, Vol. 17, No 3
(September 2009), pp. 225-236, ISSN 1063-293x
Köster, K. (2010). International Project management, SAGE Publications Ltd, ISBN 978-1-4129-
9, London, UK
Kušar, J., Duhovnik, J., Grum, J., Starbek, M. (2004). How to reduce new product
development time, Robotics and Computer –Integrated Manufacturing, Vol. 20, No. 1 (
February 2004), pp.1-15, ISSN 0736-5845
Kušar, J., Duhovnik, J., Tomaževič, R., Starbek, M. (2007). Finding and evaluating customers
needs in the product-development process, Journal of Mechanical engineering, Vol.
53, No. 2 (February 2007), pp. 78-104, ISSN 0039-2480
Kušar, J., Rihar, L., Duhovnik, J., Starbek, M. (2008). Project management of product
development, Journal of Mechanical engineering, Vol. 54, No. 9 (September 2008), pp.
588-606, ISSN 0039-2480
Palčič I., Buchmeister B., Polajnar A. (2010). Analysis of innovation concepts in Slovenian
manufacturing companies, Journal of Mechanical engineering, Vol. 56, No. 12
(December 2010), pp. 803-810, ISSN 0039-2480
Prasad, B. (1996). Concurrent Engineering Fundamentals, Volume I, Integrated Product and
Process Organization, Prentice Hall PTR, ISBN 0-13-147463-4, New Jersey
Rad, P. F., Levin, G. (2003). Achieving Project Management Success Using Virtual Teams, J.
Ross Publishing, ISBN 1-932159-03-7, Boca Raton, Fla.
Rihar L., Kušar, J., Duhovnik, J., Starbek, M. (2010). Teamwork as a precondition for
simultaneous product realisation, Concurrent engineering: Research and Applications,
Vol. 18, No. 4 (December 2010), pp. 261-273, ISSN 1063-293x
Scheer, A.W. (1999). ARIS – Business Process Modeling, Springer-Verlag, ISBN 1-932159-03-7,
Berlin – Heidelberg
Scheer, J. (2007): Kreativitätstechniken, GABAL Verlag, ISBN 978-3-89749-736-8, Offenbach
Starbek, M., Duhovnik, J., Grum, J., Kušar, J. (2003). How to achieve a competitive position
with a small company, Journal of Mechanical engineering, Vol. 49, No. 4 (April 2003),
pp. 200-217, ISSN 0039-2480

Winner, R. I., J. P. Pennell, H. E. Bertrand, and M. M. G. Slusarezuk (1988). The Role of
Concurrent Engineering in Weapons System Acquisition, IDA Report R-338, Institute
for Defense Analyses, Alexandria VA
Žargi, U., Kušar, J., Berlec, T., Starbek, M. (2009). A company's readiness for concurrent
product and process development, Journal of Mechanical engineering, Vol. 55, No.
7/8 (July/August 2009), pp. 427-437, ISSN 0039-2480
15
The Development Process
as a Complex and Interdisciplinary
Team Based Challenge
Michael Bader¹,³ and Mario Fallast²,³
¹Institute for Machine Components and Methods of Development
²Research and Technology House
³Graz University of Technology
Austria
1. Introduction
When looking at current development processes of technical products, general trends can be
observed. These trends do not only influence the development process itself, but also make
specific demands on the developers.
These trends include, for example, the reduction of development times due to the increasing pressure to put products on the market before the competitor, the globalization of the market, and the partially high degree of specialization of the enterprises while product complexity increases.
Hence interdisciplinary knowledge and experience are fundamental requirements for
efficient and effective work in the development project. Aside from technical competence
special social skills are an important requirement for the development team.
Therefore it is increasingly important to rely on collaboration with experts or specialized organizations in innovation projects. This means – especially for small and medium enterprises (SMEs) – that one has to look beyond one's own company and organization in order to fully use the know-how of others. Quite often, cooperation with research facilities such as universities is sought.
This chapter will focus on the collaboration of the participating parties during the
development process of technical products. Special emphasis will be put on design-projects
while long-term research cooperation is not subject of this publication.
Based on the authors' experience with research and development projects realized at a technical university, the statements made in this chapter are generally valid when applied to the collaboration of SMEs and research institutions. This also applies to
development and redesign projects ordered by SMEs, especially when these do not have
much experience with research- and development processes as well as the cooperation with
research facilities.

2. Current state
The classic methods of development are widely spread and have – depending on
discipline, corporate culture and other influencing factors – adapted to the above listed,
changed requirements. Thereby, not only problem-oriented analytic and abstracting
methods, which need to be followed step by step, but also creative-synthetic methods are
applied.
As a matter of principle, a methodological procedure should not leave the finding of
solutions to pure luck but consider and assess all possible options.
A methodic-systematic approach will help to reach the goal of a development- and design
process more efficiently. Furthermore, it will ensure that the finished product meets all the
user’s expectations.
Some methods are universally applicable while others are limited to the individual phases
of the project.
Examples of such methods are ABC analysis, value analysis, TRIZ, SWOT analysis, the morphological matrix, various creativity techniques, the abstraction and compilation of function and effect structures, and basic design rules, principles and guidelines.
In project management (as well as product development) methods like simultaneous or
concurrent engineering and systems engineering are well established and successfully used.
Especially SMEs quite often act as producers and sellers, but not so much as developers of
products. In many cases the limited resources and the focus on production rather than
development do not leave the possibility to work analytically and methodically on new
product development. This is why methods of development quite often can’t be exploited
to their full potential or sometimes are not beneficial at all. Due to the limited manpower
it is sometimes impossible for SMEs to analyze competing products and markets with
sufficient depth in order to take over good existing solutions. Quite often the SME has to
rely on solutions that are customary within the industry, or even the company. These
solutions may be well understood, but examined closely they may not represent the
optimal solution.
When faced with a concrete problem, one tends to intuitively make the first steps before
getting in touch with a professional development team, such as the institute of a technical
university. Consequently no performance specifications are defined, but one starts to think in solutions right away (i.e. a few specific solutions, because a holistic view of the problem with all its possibilities may not be possible).
It can be witnessed very often that the early phases of the development projects are
neglected, don’t receive sufficient resources (time, money, degrees of freedom) or that there
is a significant lack of open-mindedness towards new, untypical approaches.
Fig. 1 (Ehrlenspiel et al., 2007) shows very well how low the relative effort for changes in
an early stage is compared to later in the lifecycle of a product. The development process
has enormous influence on the overall costs, but in this phase just a small fraction is
invested.


Fig. 1. Possibilities to influence costs and emergence of cumulated costs "the dilemma of product development" (cp.: Ehrlenspiel et al., 2007, p.11)

Fig. 2. Phases of product development (cp.: Ehrlenspiel, 2009, p.2)

Even though the early development phases have a significant influence on the quality and
the costs of the final product, they typically do not receive enough attention and resources;
this can be seen in Fig. 2.
Fig. 3 (Reinhart et al, 1996) shows how costs for fault correction increase, depending on the
state of development. The graph also represents how serious consequences can be when
know-how from additional contributors is included too late in the design process.

Fig. 3. “Rule of ten” of the costs for fault correction over time (cp.: Reinhart et al., 1996)
For several decades methods of development and design have tried to lead away from an
isolated technical design process towards a systematic approach for the development of
technical products.
The accepted standard VDI 2221 – Methods for the development and design of technical systems and products – serves as a guideline during the development process and divides it into four phases: definition of the task, finding a rough concept, designing, and elaboration of the actual solution. These four phases contain seven design stages, each generating its own work
result. While going through this process a flexible advancement and iterative steps back and
forth into different design phases are recommended. This can be helpful to answer
interdisciplinary questions and requires working in interdisciplinary teams. Aside from
professional competence, the employee and his or her social skills are of great importance. In order to successfully go through a project it is therefore necessary to understand the terminology and problems/objectives of other disciplines. Only then can an ideal solution according to all aspects – the best compromise – be achieved.
A lack of subject-specific competence and poor communication can easily lead to wrong
estimates on the potential and the required effort of problems that cannot be solved with
business-internal knowledge. It is imperative to not only include specialists into the team,
but also generalists who keep an overview. The weighting of the individual disciplines
during the distribution of resources and decision-making is of great importance.


Fig. 4. Systematic process model (cp.: VDI 2221, 1993, p.3)
The VDI 2221 refers to the systematic-technical execution of the individual phases (i.e. lifecycle
phases of the system) where no iterations on the same level are intended. A well-structured
approach has of course been established in many technical disciplines. If a company that has
been active in its branch of trade for many years decides to design a new product it will
consult the same employees and departments as usual. The whole process will occur just like a
routine. Clear goals and rules exist, the employees are used to working with each other; they
understand their “language” and their way of posing a problem quite well.
The situation tends to be entirely different in a newly compiled team, like for example when
SMEs get in touch with a research facility for the first time. Here it is necessary to include all
participating parties and also the user into the development process. This, for example, is
explained in (Pugh, 1990) but often hard to put into practice. However, this is contradictory
to the general recommendation to change only one element in the system “Problem-
employee-method”. This means that no new problem should be solved with new employees, and a new method (e.g. a simulation tool) should not be validated by solving a new problem.
The following chapters will refer to these special constellations on several occasions.
To get a better overview of the position of the development steps, we focus on the three phases of the innovation process according to (Thom, 1980). The development process
is situated mostly in the phases “idea acceptance” and “idea realization”.
For the following considerations we would like to define three stakeholder groups, which
act in the phases we have defined before:
The customer or user: He uses the product or technical system which has to be designed. The
identification and consequently the fulfillment of the customer request are the goal of a
successful business.


Fig. 5. The three phases of the Innovation Process (according to Thom, 1980, p.53)
Very often an organization (or an organizational unit) is not capable of developing a
product, which entirely fulfills the customer requirements without external support. The
organization then turns into an ordering party, which transfers a certain part of the task to a
development contractor. The development contractor hereby takes over a certain part of the
development effort.

Fig. 6. The three stakeholder groups involved
This transfer of development projects or parts of them requires multiple “translations”. The
customer request needs to be correctly interpreted and formulated in technical specifications
and properties.
In this context some general requirements of the development process shall be listed:
 Branch-specific solutions, resulting from historic development and representing a local optimum, already exist. At the same time, better solutions may be available in other industrial sectors, and may even be considered standards there, but they are not noticed.
 The strong specialization of many technical fields makes it impossible to keep an
overview of all the branches and technical solutions that appeared in them. On the
other hand, problems, or rather technical systems and products, are getting more complex and interdisciplinary. This creates the necessity to link various disciplines, which gives rise to new fields such as mechatronics and domotronics.
 Development cycles are getting shorter and shorter. Therefore concurrent engineering
in different special fields and disciplines is necessary.
 The available knowledge is growing constantly. This makes it increasingly hard to keep
a significant part of this knowledge inside the own company. It is therefore important
and inevitable to include external know-how and facilities into the development
process.

 Every company and every individual only has a very limited overview. Every
employee is used to applying certain solutions depending on his educational background
or work experience. For example: A mechanical engineer will most likely solve a
problem with a mechanical solution while an electric, electronic, hydraulic or
mechatronic solution might achieve better functionality.
 There is enormous potential for new products as a result of interdisciplinary tasks.
Examples are medical engineering and veterinary telematics.

Fig. 7. The basic shape of the development funnel, resulting from concepts and solutions that are, for various reasons, not to be realized.
The number of possible solutions is restricted by the walls of impossibility. The position and
stiffness (resistance against moving) of these walls is defined by:
 the performance specifications
 personal assessment
 experience with specific solutions
 personal preferences of the participating employees
This means that hard and soft facts determine the restrictions represented by the “walls of
impossibility”.
If knowledge and ideas of all the contributing parties are fully exploited, the number of
solutions for seemingly simple functions increases enormously. But this only applies when
a certain open-mindedness is kept during this project phase. The probability that very good
solutions are found is correspondingly higher. In order to discover and take advantage of all
these solutions the following prerequisites are necessary:
- Involvement of all participating parties (user – ordering party – development contractor)
- Openness towards untypical or unusual solutions

- The customer request has to be determined and represented correctly. For example:
Representing the customer request by playing a question-answer-game and thereby
generating specification requirements.
- To verify development steps concrete solutions need to be presented to the customer in
early design phases.
- Willingness to invest a lot of resources in an early project phase. (Exploration- and idea-
finding phase).
- Willingness to question established solutions
- Willingness to question the validity of the specifications in the customer requirements.
The models mentioned above as well as (Eppinger and Ulrich 2003) put their focus mostly
on the actual product itself and see all sub-tasks directly linked to a product (and its
marketing, manufacturing, customers,…).
In cases where a development partner joins the project in a later stage, the understanding of
customer requirements may not be enough.
3. But the following problems may occur
Some specific examples and problems that frequently occur shall be listed below. It is
important to note, that the terms “customer” and “contractor” may also apply to
organizational units inside the company and only describe how their services are related.
The task mentioned in VDI 2221 is quite often already a modified version of the original task.
The “actual problem” is the fulfillment of the customer requests.
Also in the case of in-company service relationships, such as can be found in contracted developments, this means that the customer/user of the product/service shall always be seen as (part of) the ordering party and never be excluded completely from the design process. If the task itself is not defined correctly, the customer requests will not be fulfilled either.
 Frequently the actual problems of the customer have already been interpreted wrongly and
translated into faulty technical functions. The abstraction of the problem (i.e. the reduction
of the system to the basic level of functions) enables/alleviates the understanding of the
problem, opens the horizon of options and facilitates the structured search for solutions.
To cite an example the application of TRIZ shall be mentioned.
 During the early stages of a project it is very important to focus all actions and
communication on the definition of the actual problem and not on concrete solutions.
The concretization of predefined solutions will only exclude other possibilities.
Sometimes a solution that has been carried out elsewhere is examined and discarded
right away even though its principle would just have needed minor modification in
order to function properly.
 None of the parties (user, ordering party and contractor) should select solution
principles or carry them out practically at a too early design phase (Avoid premature
decision making).
 The customer requirements are not wrong but kept too general. They contain
simplifications and assumptions, which don’t describe the problem to the contractor in

enough detail. (In this case the task can be defined more precisely if the contractor
checks back with the ordering party)
 The customer request contains requirements with too much detail, in such a way that suggestions for solutions or concepts created by the ordering party are already integrated. In the worst case, this means that variants of solutions (that might have been successful) have been removed from the pool of ideas without justification. A problem that frequently occurs is the presentation of solutions simultaneously with the presentation of the actual problem when the contract is placed.
 The contractor is consulted and included too late into the development process. This
means that - according to Fig. 8 - the transfer of the task occurs too close to the
introduction of the product to the market. In this case some solutions (and maybe even
the best one) might already have been eliminated by the customer or they have never
been in the pool of feasible ideas. Insufficient “expert knowledge” could result in wrong decisions, which places the filter at a wrong position and generates a blind spot.
A typical statement is: “We have already tried this and it didn’t work.”
But, when has it been tried? Maybe the available technologies have changed or
improved since.
How was it tried? Maybe the principle of solution has not been adapted correctly to
perform well in the actual construction.
Why exactly did it not work? Maybe some details were not taken into account.
What exactly didn’t work? Maybe the error is not to be found in the principle but rather
the detail design.
 A problem-oriented approach (on the contrary to a product oriented approach which
considers the borders of departments and components as interfaces) is necessary. This
approach requires interdisciplinary collaboration and communication between
departments and participating parties.

Fig. 8. Too narrow view on the various possibilities to solve a problem

 On the one hand “organizational blindness” helps to build up a routine to handle and
improve a system/method/technology but hinders the change to a better
system/method/technology. The technical progress or the continuous change of the
customer’s needs on the market can lead to the replacement of a formerly good solution
by an alternative implementing various advantages, for example lower costs or better
fulfillment of the customer requirements.
 The existing guidelines, schemes and structured advancement-models are essential
requirements guiding the development process. However, the structure of the
personality of all participating employees and parties plays a crucial role. But quite
often, the individual human is neglected. If – for example – technical criticism is
interpreted as a personal issue the improvement of a solution is threatened. This applies
to the ordering party, which already has a first solution in mind when entering the
design process, the contractor who has to defend his solution as well as to the user who
may or may not consider his requirements fulfilled. A factual, objective and problem-
oriented communication is therefore very important.
 A team often requires specialists and generalists. The well-structured specialist with his
focused expert-knowledge works on details that can be dispatched in a straight-forward
way. The “chaotic generalist” can be seen as the motor that pushes the whole project
onward and is responsible for the distribution of resources and setting the goals. Each
individual will lean towards one or the other role depending on their personality.
The necessary qualities of the developer as an “individual human being” are discussed
explicitly in (Pahl, 1995), but only with regard to the design engineer:
A good design engineer is capable of transferring factual knowledge (facts,
experience, principles) and methodological knowledge (the handling of a complex,
simultaneous course of events) to the current problem/situation by applying his
personal abilities. This requires intelligence (meaning the ability to understand and
judge = analytical step-by-step thinking) and creativity (synthetic and intuitive thinking
in order to discover new, so far unknown interrelationships). Hence, the ability to
switch flexibly between analysis and synthesis is absolutely necessary. It is called
heuristic intelligence, and enables the effortless finding of good solutions in little time. It
usually results in a solution that represents a compromise of quality and effort.
4. Therefore
The problems listed in section 3 can often be observed throughout the development process.
The following paragraph will present three principles and methods that can help to lead the
innovation process to a successful finish.
The result of a technical development process is always a compromise. It is of major
importance to compile a catalog of requirements, criterions and weighting-factors which are
constantly verified throughout the process in order to achieve the best possible compromise.
Iterative loops under the comprehension of the user and the ordering party are required.
4.1 The paradigm of open innovation
The before mentioned aspects refer to the activity inside the development-funnel. This funnel
is shown once more below:


Fig. 9. The development funnel (cp.: Wheelwright, Clark, 1992, p.112)
Even though most pictures of development-funnels refer to the design process of a whole
product, they can also be applied very well to the search for details, specific parts or sub-
functions. Looking at these sub-functions it can clearly be seen how the number of possible
solutions decreases dramatically. Ideally, the criteria which lead to the systematic elimination of possibilities are defined at a very early stage of the project but kept flexible throughout its course. In reality, many restrictions are unknown when the project is launched and will only be discovered during later project phases, maybe because some properties of solutions have been neglected (for example "harmful substances") prior to
realization. But also the outer circumstances might have changed in the meantime because
of the dynamics of the market (competitor releases a better product) causing the walls of the
development-funnel to remain flexible throughout the design process.

Fig. 10. Walls that narrow the development-funnel

The following graphic shows how an incomplete collection of solutions at project start may
not even contain the optimal solution. Only during a later project phase (and therefore much
too late) another study considering possibilities outside the search-field that was defined by
the ordering party reveals the best solution. Including new partners or employees for
example can lead to a sudden, drastic increase in ideas for solutions. The image below
shows, how the ideal solution can now be found among the ones that have been discarded
or neglected before.

Fig. 11. Ideal solution found among the ones that have been discarded or neglected before
While the image above refers to the development of sub-functions, it should be seen as a microscopic view of the "Open Innovation" approach.
Nowadays it is very unlikely that all the knowledge necessary for the development of a highly innovative product can be provided by one company only ("closed innovation").
According to (Chesbrough, 2006, p.177), the classical model of “closed innovation” is
applied when product and business ideas are mostly developed inside the company’s own
R&D-departments. The “Open Innovation” paradigm – also affected by the changed
knowledge landscape in the beginning of the twenty-first century – merges external ideas
and knowledge with the internal R&D. It raises external ideas – as well as external paths to
markets – to the same level of importance that internal ones had during the era of closed innovation.
The two different models can be combined in Figure 12 showing the development-funnel
and the exchange of ideas across company boundaries:
It describes the enterprises’ changed behavior in dealing with intellectual property, ideas in
general and the opportunities of the nowadays widespread distribution of useful
knowledge.
The following Figure 13 compares the approach of “Open Innovation” to the one of “Closed
Innovation”.


Fig. 12. The Knowledge Landscape in the Closed and Open Innovation Paradigm (cp.
Chesbrough, 2006, p.31, 44)

Closed Innovation Principles – Open Innovation Principles

Closed: The smart people in the field work for us.
Open: Not all the smart people in the field work for us. We need to work with smart people inside and outside the company.

Closed: To profit from R&D, we must discover it, develop it, and ship it ourselves.
Open: External R&D can create significant value: internal R&D is needed to claim some portion of that value.

Closed: If we discover it ourselves, we will get it to the market first.
Open: We don't have to originate the research to profit from it.

Closed: The company that gets an innovation to the market first will win.
Open: Building a better business model is better than getting to the market first.

Closed: If we create the most and the best ideas in the industry, we will win.
Open: If we make the best use of internal and external ideas, we will win.

Closed: We should control our IP, so that our competitors don't profit from our ideas.
Open: We should profit from others' use of our IP, and we should buy others' IP whenever it advances our business model.
Fig. 13. Closed Innovation Principles versus Open Innovation Principles (cp.: Chesbrough,
2006, p. xxvi)
4.2 Dynamic performance specifications
The performance specifications represent the concrete definition of the ordering party’s
requirements for the product and services delivered by the contractor. Therein the
properties are described qualitatively and/or quantitatively. The contractor on the other

hand defines his duties in his own specification sheet and thereby decides how he wants to
fulfill the performance specifications.
This approach may, however, reveal several problems:
 The compilation of the performance specifications is a “translation” of the customer
requirements. But if these are not defined completely and unmistakably even a total
fulfillment of the criteria will not lead to total customer satisfaction.
 If the contractor itself is an ordering party and not the user, the before-mentioned problem gains
twice the importance. First the customer requirements have to be translated by the first
ordering party and then again by the contractor. It is obvious, that the chances are very
high that the performance specifications do not match the customer requirements. The
problem has already been discussed before.
 Especially in long-term projects a modification of the general requirements can be
useful because of the advancement of technological standards. If, for example, a
mechanical solution has been agreed on in the performance specifications but a new
electronic solution appears on the market, then this new advantageous option cannot be
used. Apart from that, also preferences of the user can change during the course of the
project. But when the properties of the product are defined rigidly at the beginning of
the project this condition cannot be considered.
Therefore, dynamic performance specifications are beneficial.
 Due to multiple translations of the customer requirements “translation errors” can
occur. Hence it is recommendable to match the requirements among all participating
parties (i.e. user, ordering party and contractor). The earlier it is realized that different
ideas of the result exist within the whole development team, the easier it is to adapt the
performance specifications, redefine the task and continue effectively and efficiently
with the development process.
 At this point it shall be mentioned that VDI 2221 describes “early iterations with involvement of the user”, but these are not necessarily required. One must also note that the systems engineering mentioned in chapter 2 also does not imply iterations in the same development phase.
The complete understanding of the customer requirements is a prerequisite for a correct
translation. This is why the contractor needs to be able to see through the eyes of the user as
well as the ordering party. This means that the problem cannot only be approached from an
engineer’s point of view. Here, one should deliberately take a step back. Approaching the
ordering party in order to discuss whether to redefine the customer requirements (because
the ordering party may already have filtered the requirements wrongly) demands
appropriate social skills and a fine intuition. The influence of individual human properties
has already been mentioned in chapter 3.
If the development process is started by only reading the performance specifications, it is
sometimes not unmistakably noticeable whether the customer prefers a simple, a clean, a
smart, or a “different” solution, if he prefers a certain technology or a specific way of solving
the problem. This matter will be discussed in greater detail below, in section 4.3 dealing
with the “one step back approach”.

“Problem solved” can mean something different to each party. The producer considers “just as good as absolutely necessary” to be the cheapest and therefore most adequate solution for serial production. From the customer’s point of view this is the fulfillment of his requirements. The developer on the other hand considers a problem “solved” if he knows
how it can be solved theoretically, whereas the technology-addicted engineer seeks the “best
possible” solution – a prototype with features exceeding the ones manifested in the
performance specifications.
 The ways of thinking and the language of the participating parties, fields and branches
can differ significantly. This does not only apply to the level of user – ordering party –
development contractor, but also within the individual fields themselves. It applies not only to specialists or expert engineers from different departments and development divisions, but also to employees from the fields of calculations, construction,
During an early project stage play-models or CAD-models can help reveal differences
in the understanding of the problem. They can also make non-experts find out whether
all parties follow the same goal and the predicted result meets the expectations.
 Aside from the above mentioned arguments – such as the appearance of a new
technology – other possibilities to fulfill the functionality of the product might emerge
and favor a modification of the performance specifications during the development
process. As the project advances, one gains an increasingly clearer view on the crucial
influencing factors. Problems often occur when a theoretical solution is put into
practice. These problems can of course only be realized in the current work step. But the
first results of the projects might also reveal that initial requirements increasingly lose
importance while others demand more emphasis. Some requirements might even
emerge all of a sudden and therefore have to be included retrospectively into the
performance specifications. This represents the moving of walls in the development-
funnel model. An approach like that implies shifting the emphasis and resources which
could be the consulting of further experts belonging to other special fields, or the proof
of suitability of a new material in a practical experiment.
 The mentioned course of action consequently demands the development of multiple
variations and proper documentation. Multiple re-evaluations and iterations may
suddenly make formerly discarded variants relevant again. Not only the approach
and the successful solution, but also previous, discarded ideas have to be
documented. If a variant is discarded, the decision has to be backed by arguments.
Hereby the property, which leads to the conclusion that the variant is unsuitable, has
to be described. Maybe exactly this property might hold great potential to apply this
solution to another project.
 All participating parties need to be included in the important iterations that are
necessary during the development process. These iterations should begin with the
definition of the task. Consequently the performance specifications should not only be
adapted when a change in the course of the project requires doing so. Questioning the
newly assessed and required properties is not only allowed but recommended. The
compilation of a flexible and therefore “dynamic” list of performance specifications is
encouraged and should be supported by the authorities.

 All the above mentioned points are motivated by a result-oriented operation method,
but imply the involvement of all parties (user, ordering party, development contractor).
General definitions should or must be questioned. This though, should not lead to
getting stuck in the definition-phase, but on the contrary help to fulfill the customer
requirements unerringly and under the best possible exploitation of resources. Apart
from the personal willingness to act flexibly a trusted relationship between all the
participants is necessary. A goal-oriented course of action may ask to leave the
structures defined by the contract. It is necessary to be constantly focused on “what we
actually want” to reach the goal together by optimally fulfilling the customer
requirements.
4.3 One step back approach
One conclusion that can be drawn based on the statements mentioned above, is the idea to
question the task itself. Most of the time it is a good investment to carefully scrutinize the
restrictions listed in the performance specifications. These lists often do not represent what
the customer wants, but what the ordering party believes. Quite often they already contain
hints for concrete solutions and therefore already restrict the development process.

Fig. 14. Suggested step back to understand user needs
Even though direct communication between the contractor and the user is often impossible
it is necessary to interpret and look behind the performance specifications to try and find
out the initial customer requirements, which determine the functions and features of the
product.
The question “In what way is this requirement beneficial to the customer/user?” is of major
importance and should be kept in mind whenever the performance specifications are
discussed. Ideally the handover of the project takes place at a very early project stage, in
which the change of implicitly or explicitly formulated concepts of solutions can be changed
with ease and without financial consequences. According to the authors’ experience, the
opportunities emerging from such an open-minded consideration of the actual task in this
project-phase surpass the possible risks by far.
But especially in contracted developments, where the ordering party looks for external support only a long time after concept development has started, this is incomparably more difficult. Costs and, above all, personal engagement have already been invested in one particular solution. It is therefore especially hard to leave the path that has been chosen. In this case it mostly depends on the human and personal factors of the development team whether switching to an objectively better solution in a later project phase appears realistic or not.
4.3.1 In practice there are several factors resisting "the one step back"
Due to the focus on the tight schedule and the enormous pressure to quickly introduce a product to the market in order to achieve high revenues, the willingness to take one step back is usually rather limited. Here, the problem of including a development contractor too late becomes evident. Quite often the contractor's task in the development is limited to a certain detail, for example choosing the correct types of bolts, or choosing materials and optimizing their strength.
Questioning general, standardized and established solutions usually encounters harsh criticism, even though the continuous advancement of technology itself might have made new solutions favorable in the meantime. Here, investing time and resources at an early project stage will surely pay off regarding the quality of the solution while at the same time reducing the overall effort. One can find this recommendation repeatedly in literature such as (Altshuler, 1986), where the search for patents and their analysis is clearly stated as the first step in the development process.
Breaking into the new topic, getting to know and analyzing existing solutions as well as the problem itself at the beginning of the project can be called the "exploration and idea-finding phase". This phase is located in the wide part of the neck of the development funnel. It can also be beneficial to look into other branches: adapted versions of their solutions may represent goal-oriented and effective solutions for the problem at hand.
Two examples of development methods used in software engineering are "test-driven development" and "agile development".
"Test-driven development" divides the development process of the whole product into many subsystems. For each of these subsystems a test procedure is defined. The development of every individual cycle is repeated until all subsystems pass their tests.
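As a purely illustrative, minimal sketch of one such cycle (the subsystem, function name and tolerance values below are assumptions, not taken from the original text), the test is written first and the implementation is revised until it passes:

    def clamping_force(pressure_bar, piston_area_cm2):
        """Return the clamping force in newtons (1 bar equals about 10 N/cm2)."""
        return pressure_bar * piston_area_cm2 * 10.0

    def test_clamping_force():
        # Requirement written before the implementation: 6 bar acting on
        # 50 cm2 must deliver 3000 N within a 1 % tolerance.
        assert abs(clamping_force(6.0, 50.0) - 3000.0) <= 30.0

    if __name__ == "__main__":
        test_clamping_force()  # the cycle is repeated until every subsystem test passes
        print("subsystem test passed")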
"Agile (software) development" requires the definition of agile principles and agile values. Based on these values, agile methods are derived.
Agile values are:
- Individuals and interactions are considered more important than processes and tools.
- Working software is more important than extensive documentation.
- Collaboration with the customer is more important than contract negotiation.
- Reacting to change is more important than following a strict plan.
Whoever develops a solution will be proud of it and will defend it. Discarding or modifying this solution will naturally encounter resistance from the developer/creator. This especially applies when the ordering party immediately presents "their" idea to the developing party, only asking the contractor to put the idea into practice without questioning it.

A new technology, for example the emergence of a new manufacturing method, is capable of moving the "walls of impossibility" and therefore increasing the number of possible solutions drastically. But the emergence of a new technology also holds certain risks: possibly high costs and the lack of long-term experience are both factors against the use of a new technology. An objective judgment of the opportunities and threats is hence necessary.
5. Conclusion
Pahl (Pahl, 1995) states that methods of development are tools introduced by methodological approaches, as the name already suggests. It is no longer enough to be a "pure technician". Pahl also states that increasing experience enables the engineer to apply the development methodology automatically and therefore to devote more capacity to the actual problem solving. In general, methods should only be followed to a certain, reasonable extent and not blindly.
Open Innovation should also move into the classic branches of the market. Well-established processes usually work inside a company or branch. Due to the modified general requirements described above, the involvement of external partners should not be reduced to isolated events like "consulting days" or the outsourcing of individual development tasks. Thinking "outside the box" should be a permanent, continuous process.
If – for whatever reason – it is not possible to continuously look beyond the boundaries of
one’s own company or organizational unit, it should at least become a habit during the early
stages of product development. The developers should pay maximum attention to the
opening of the development-funnel at the beginning of the project in order to include as
many solutions as possible.
Quite often the assignment of the task already contains "translation errors" when transferred from the "manufacturer – ordering party" to the "development contractor". Recognizing and correcting these errors is absolutely vital in order to broaden the horizon of possible solutions and to find the best one among them.
This article is an open invitation to move to a more abstract level, where time is invested to fully understand the actual problem even before the start of the actual development. Preventing small mistakes in this step can prevent big mistakes in later phases.
Nowadays solutions can be found in other technological fields which do not match the tools and knowledge of the ordering party. For example, complex purely mechanical solutions might be simplified significantly with the support of, or replacement by, electronic components – in many branches this is by now the only way to stay competitive.
In order to avoid a too narrow search field for solutions it can sometimes be necessary to question even very detailed performance specifications. Ask the question: what is the actual problem of the customer and what are its requirements? Working with an interdisciplinary team during this phase has proven successful.
But when questioning or critically evaluating the suggested solutions of the manufacturer – ordering party, a subtle feeling and social skills are needed in order to avoid insulting the cooperation partner. If it turns out later that the originally suggested solution is indeed the best possible one, early criticism may be considered unnecessary and be misunderstood. Nevertheless, it can be said that high investments at the beginning of the project (i.e. opening the development funnel) are useful in any case, if only to sharpen the view for solutions and concepts outside the well-known company-internal possibilities.
Up to now, the suggested approach has not made it into company guidelines, and there also seems to be a lack of understanding among the persons in charge. Communicating inside an interdisciplinary development team and addressing the needs of the user are not yet part of a standardized development process.
SMEs in particular have very limited possibilities of searching an extremely broad field of options for solutions. For them it is very difficult to compile interdisciplinary teams. But at least the need for the above-mentioned approach should be recognized. Only then can better ways to achieve even more satisfying solutions be found.
In many cases it depends on motivated and dedicated individuals whether potentials in the approach to development projects can be found and exploited. Depending on the conditions, this may lead to a change in the schedule of the project, implementing an additional step such as critically analyzing the actual functions behind the customer request. This especially applies to contracted developments, where many decisions might have been made beforehand.
Not everyone may be suitable, but whoever thinks he is capable of leaving the beaten path should be motivated and encouraged to do so – even if only on a small scale.
The job of an engineer or developer is far more than generating technical drawings. A creative, open-minded engineer is nowadays more important than a highly specialized technician working on details in isolation from the rest of the team. The properties of a good engineer are certainly properties of his personality, but they can also be trained to a certain extent (Pahl, 1995).
The use of external resources will in the future also reach the traditional disciplines of engineering and their established public processes.
Literature and education have so far mainly focused on solving details and problems in a standardized way. The identification, understanding and consequent formulation of the actual customer requirements should be addressed to a greater extent in education and in the real economy. This will help to exploit the nearly endless potential of possible solutions and technologies and will allow us to successfully master the challenges of the future.
6. References
Altshuler, G.S. (1986). Erfinden - Wege zur Lösung technischer Probleme, PI - Planung und
Innovation, ISBN-10: 3000027009
Chesbrough, H. (2006). Open Innovation – A New Imperative for Creating and Profiting from
Technology, Boston, Massachusetts, ISBN 1578518377
Ehrlenspiel, K., Kiewert, A., Lindemann, U. (2007). Kostengünstig Entwickeln und Konstruieren,
Springer, ISBN 978-3-540-74222-7, Berlin, Heidelberg, New York

Ehrlenspiel, K. (2009). Integrierte Produktentwicklung. 4. Auflage, Hanser Verlag, ISBN 978-3-
446-42013-7, München
Eppinger, S.T., Ulrich, K.T. (2003). Product Design and Development, 3rd edition, ISBN-10: 0073404772, Boston
Pahl, G. (1995). Ist Konstruieren erlernbar oder doch eine Kunst?, in Effizienter Entwickeln und
Konstruieren, p. 27ff., VDI-Bericht 1169, VDI-Verlag GmbH, ISBN 3-18-091169-7,
Düsseldorf
Pugh, S. (1990), Total Design; Integrated Methods for Successful Product Engineering, Reading:
Addison-Wesley
Reinhart, G., Lindemann, U., Heinzl, J. (1996). Qualitätsmanagement, Springer , ISBN-10:
3540610782, Berlin
Thom, N., (1980). Grundlagen des betrieblichen Innovationsmanagements, 2.Auflage,
Königstein/Ts., page 53, ISBN 3-7756-6208-1, Hanstein
VDI-Richtlinien 2221, (1993). Methodik zum Entwickeln und Konstruieren technischer Systeme
und Produkte, VDI-Verlag, Düsseldorf
Wheelwright, S.C., Clark, K.B., (1992). Revolutionizing Product Development, The Free Press,
ISBN 0-02-905515-6, New York
16
Risk Management in Area of Security and
Protection of Health During the Work
Andrea Seňová and Katarína Čulková
The Technical University of Kosice,
Slovakia
1. Introduction
Risk comes along with a particular form of responsibility for a chosen entrepreneurial decision, the success of which is influenced by the current and future position of the firm. That is why all employees should think about the consequences of their actions before making any decision. It is not enough to select a solution; it is also necessary to analyze all possible variants and to choose the one that is suitable from a long-term point of view and not only from a short-term one. In developed countries, where market mechanisms have been operating for many years, the management and decision making of company representatives in the face of uncertainty is called "risk management" in a much wider sense. It is a process of managing risk that influences the success of the company.
Risk management is a systematic process in which risk is identified and analyzed and in which the optimal way of managing the risk is defined, at minimal cost and with respect to the overall goals of the subject. The task of risk management is mainly to achieve maximum security and property protection by elaborating an optimal strategy for managing risk as the main bearer of possible future damages. Risk management can save the firm a great amount of money in the case of negative risk events. At the same time, it is necessary that risk management addresses risks completely, not only superficially, and that it takes the mutual influence of risk factors into account.
The main aims and concerns of the chapter are as follows:
- to contribute, according to the newest knowledge, to the solution of risk evaluation and to increasing firms' effectiveness from the long-term view,
- to show the necessity of new evaluation processes in the conditions of a market economy and to underline their contribution,
- to analyze the mentioned theoretical processes of managing risk influence and their practical solution,
- to define methods for risk management,
- to identify possible risks appearing in firms,
- to analyze and evaluate the influence of risk on the conditions of firms.
In the present varied times it is not possible to predict facts and events with 100% certainty. In practice it is therefore not possible to state in advance the most proper way of covering risk in the firm. The advantages and disadvantages resulting from each possibility arise in connection with the concrete conditions in which the firm exists.
It is necessary that the management of firms carries out the following activities during risk management (Seňová, Antošová, 2007):
- risk analysis, monitoring and measurement, or evaluation of risk in the internal and also the external environment of the firm, whereby managers state conclusions and recommendations for top management,
- definition of goals in the area of risk reduction (consistent with the definition of the risk strategy of the firm – for example which risks to omit and which to reduce, and how to minimize the costs connected with applying the risk strategy to the growth of the firm), and selection of the most suitable strategy for risk reduction,
- consequently, selection and implementation of the most suitable method of risk reduction for the conditions of the concrete firm – for example with the aim of stabilizing or diversifying revenues (strategy of expanding a narrow group of clients) or diversifying business suppliers, etc.,
- evaluation of the applied risk strategy of the firm in practice and consequent application of the chosen method of risk reduction (it is also necessary to note that the concrete use of such a method can bring new risks!). A person (or group of employees) – the so-called risk manager – is responsible for the risk policy of the firm.
Harmonization of risk management with the firm's strategy is extraordinarily important. Many firms have applied the basics of risk management and plan to move towards better use of it and towards obtaining higher added value. How can a firm achieve this? By higher standardization, by a conceptual approach, by limiting duplicate activities and by using a department for risk management with the aim of supporting and coordinating such processes.
The main function of a risk management department is to secure control and processes in the area of risk management throughout the whole firm in mutual agreement. This represents a complex problem in which the majority of firms do not have enough experience. It is therefore appropriate in some cases to count on experts who will help the company to avoid serious problems.
Comprehensive upgrading of safety at the beginning of the 21st century is one of the most important tasks facing the whole society, from national governments down to the management of each company. Occupational safety and health is an area which until recently was underestimated and neglected. This trend is ceasing to be valid. Organizations are well aware of the need for changes in the approach to health and safety, because the level of safety and health at work significantly influences whether the organization becomes recognized, successful and well-established on domestic and foreign markets (Šolc, 2007).
At the beginning of the industrial revolution technical equipment was at a very low level as far as safety is concerned. With the development of technology, the growing complexity of systems and equipment places ever higher demands on safety, and safety must therefore be evaluated systematically with regard to the person who works at the given place, as well as to the person who is merely moving around the working place as a third party. It follows that the goal of every activity must be the effort not to endanger people. The European Agency for SPHW collects statistics and research in the area of SPHW from the whole world. One of the ways of comparing SPHW systems among EU member states is the annual "week of SPHW". Some statistics from recent years in the area of SPHW are as follows (OSHA, 2008):
- every three and a half minutes somebody in the EU dies due to work-related reasons,
- every year in the EU 142,400 people die due to occupational diseases and 8,900 people die as a consequence of working accidents,
- as much as one third of the total of 150,000 deaths every year can be attributed to dangerous elements at the working place in the EU, including 21,000 deaths caused by exposure to asbestos.
2. Determination of the concept of risk
The concept of risk is generally known. In spite of this, there is no generally accepted definition that would define the concept unambiguously. The expression "risico" comes from Italian and originally denoted the obstacle that seafarers had to overcome during their voyages. In older encyclopedias risk is explained as the courage to overcome danger, or to dare to do something. The two main elements of risk are as follows:
- occurrence of unwanted consequences,
- uncertainty (probability) that such consequences will arise.
The manifestation of risk is danger. The goal of risk analysis is to identify real danger. During danger identification it is necessary to look to the future, but sometimes we must also look at the past and find out the reasons why a risk was underestimated, badly estimated or neglected. An appropriate risk can be assigned to every single danger. For danger identification expert experience is important, as well as taking the relationship of the individual to the danger into account. Danger is every real threat of damage to the inspected object or process.
We distinguish the following kinds of danger:
- absolute – threatening everybody,
- relative – its realization influences only some groups of inhabitants, while for another group it can even be positive (for example a hurricane threatens the inhabitants of the country, but for building firms and insurance companies it can be positive).
Generally, the danger in a firm's practice is relatively high.
Every activity, but also inactivity, in the firm brings along a higher or lower level of risk. The way such risks are managed decides considerably about the success or failure of the firm on the market and is therefore one of the important criteria for evaluating management effectiveness. It is important to evaluate these parameters for risk determination. If either of the risk elements is absent, the risk does not exist. Risk is therefore a combination of uncertainty and unwanted consequences that can be represented by the symbolic equation:
RISK = UNCERTAINTY x UNWANTED CONSEQUENCES (1)
According to dictionaries, dangers can often be described as "risk sources", while risk is considered as the "chance of unwanted consequences arising". Danger simply represents the origin and source of the risk. Risk includes the "probability" by which such a source can create real damage. Risk depends not only on the danger, but also on the protective measures accepted against the danger. Risk can be expressed by the symbolic equation:

RISK = DANGER / PROTECTION MEASURES (2)

This equation establishes the concept of human intervention and risk management. Danger is always the source of the risk. It is a physical situation with the potential to cause negative effects on people, property and the living environment.
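As a purely illustrative numerical reading of the two symbolic relations (all values below are assumptions, not taken from the source), equations (1) and (2) can be sketched in a few lines of Python:

    uncertainty = 0.2                # assumed probability of the unwanted event
    unwanted_consequences = 50000.0  # assumed potential damage, e.g. in EUR
    risk_eq1 = uncertainty * unwanted_consequences   # equation (1): expected damage of 10 000

    danger = 10000.0              # assumed damage potential of the hazard
    protection_measures = 4.0     # assumed strength of the accepted protective measures
    risk_eq2 = danger / protection_measures          # equation (2): stronger protection lowers the risk

    print(risk_eq1, risk_eq2)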
Lately the concept of risk has obtained an economic dimension, meaning the possibility of loss. The explanation in the MacMillan dictionary of modern economics treats the concept of risk as the possibility that some event will arise with a certain probability and will differ from the expected state or development (Pollio, 2003).
At present, risk can also be viewed in a negative sense as something that could restrain the achievement of stated goals. Negatively defined risk is connected mainly with short-term risks at the level of operational management, where risk takes the form of defects, errors, fraud and accidents. With a growing time horizon and a growing level of management, risk is also defined as something positive that the firm can use for its future development. Business risk presents the danger of business failure, connected at the same time with the hope of achieving good economic results. The extent of loss can be evoked by violation of the financial stability of the firm and can lead to its decline.
3. Processes of risk analysis
The general process of risk analysis must include the following parameters: risk detection, evaluation of the reasons for and probability of risk occurrence, evaluation of the possibility of damages and consequences arising, evaluation of the possibility of risk reduction, and evaluation of the influence of risk on the costs and profit of the firm.
In the mentioned model the individual elements are illustrated and explained according to the following principle (Figure 1):
1. Internal environment – it gives direction for the existence of the whole organization and serves every other element of the firm's risk management. It includes the philosophy of risk management in the organization, the attitude of the organization towards risk, and its integrity and ethical values;
2. Stating of the firm's goals – such goals must exist before management can identify events that could influence their achievement. The system of risk management must support the choice of goals that correspond to the mission of the organization and are consistent with the attitude of the organization towards risk;
3. Identification of events – internal and external events influencing the achievement of goals in the organization must be identified, and risks must be distinguished from opportunities. Opportunities are fed back into the creation of strategy or into the process of goal determination;
4. Risk evaluation – risks are analyzed considering their probability and possible impacts, and this presents the basis for deciding how they can be managed. Risks are evaluated on an inherent and a residual basis;
5. Reaction to the risk – the decision about the answer to the identified risks. Here we can have avoiding the risk, accepting the risk, or reducing or sharing the risk; the decision will depend on the attitude of the organization towards risk;
6. Control activities – they are oriented towards the use of policies and processes that verify whether the reaction to the risk is effectively realized;
7. Information and communication – important information is identified, obtained and provided in the demanded time in such a way that people can fulfil their tasks. Effective communication must take place throughout the whole organization;
8. Monitoring – the system for managing the firm's risk is followed as a whole and, in case of necessity, changes are made to the system. Monitoring is done through permanent managerial activities or in a specific way (Čunderlík, 1993).



Fig. 1. Process of risk analysis

Risk analysis should serve the following purposes:
- to provide a review of risk evaluation for the individual professions during working activities and of the required security measures,
- to prove the process and results of the evaluation (for example to a control organization) with concrete results,
- to provide a review of the risks at the working places and during activities that should be available to the responsible employees who manage and assign the work,
- to provide a basis for training and informing employees about the risks, how they can prevent such risks and how to work safely; information about risks must be provided to employees mainly upon acceptance into employment, upon transfer to another working place, upon conversion to other work, upon installation of new working processes, etc. (Seňová, 2008).
4. Conditions for successful risk management in organizations
Risk management is not successful in every organization. In the monograph "Risk management in firms and other organizations" (Smejkal and Rais, 2003) the conditions are specified that an organization must fulfil in order to manage possible risks successfully:
1. there is a clearly defined strategy of the subject regarding its main goals, including a risk strategy,
2. there is a complex system of risk management, supported by a proper information system (it can be replaced by a system for decision support, an expert system, etc.),
3. management pays enough attention to risk management and there are persons responsible for risk management,
4. there is a functional firm culture and the ability to develop in the future and to adapt to new risk possibilities.
A functional firm culture is necessary because of the work with people, who are the main potential for successful risk management. The firm's management must secure the following activities in the area of risk management:
1. risk analysis, measurement and monitoring (evaluation) in the internal as well as the external environment of the firm (including determination of conclusions and recommendations for the firm);
2. defining goals in the area of reducing the firm's risk (corresponding to the defined risk strategy of the firm – for example which risks can be neglected, which risks can be reduced, how to minimize the costs connected with applying the risk strategy in the conditions of the firm's growth, etc.), and determining the most suitable strategy for risk reduction (for example counting also with the revenues that could be achieved during risk reduction); the manager receives such risks in advance, as they are commonly stated by the superior strategy of the firm;
3. consequent determination and implementation of the most suitable method of risk reduction for the conditions of the concrete firm (for example deciding whether revenues or business suppliers will be diversified, or whether the risk will be retained);
4. evaluation of the applied risk strategy of the firm in practice and consequent application of the chosen method of risk reduction (the risk manager is responsible for this activity).


Fig. 2. Process of risk analysis in the organization
4.1 Necessity of risk management
In firms that are close to crisis and bankruptcy there is generally a situation in which middle managers deliberately do not warn top management about the problems. The most common reasons are fear of losing one's job, the effort to cause no problems, or one's own comfort. Top management in such a precarious situation is characterized by not reacting to the problems. Thus situations arise in which everybody knows about the problems, but nobody wants to speak about them or solve them. Such problems are called the "quiet killer" of the firm. Therefore the main condition for risk management is open communication, configured on the principles of cooperation and an open environment.
The basic problems behind firms' bankruptcy are disorderliness, neglect of financial management, late payment of taxes and fees, long invoice payment times, interruptions in production, unqualified production and fluctuation of key employees. Among the external reasons for future problems belong stagnating or very unstable markets, increased pressure from competition that keeps getting better and better, decreasing numbers of permanent employees and negative external economic influences. When an organization wants to defend itself against such negative changes (risks) without problems, it must count with the risks from the beginning and prepare for the problems. Risk managers must know how to predict consequences and probability and how to remove such consequences from the organization successfully (Al-Zabidi, Čulková, 2011).
During risk management it is most important to respect the law of effectiveness: the costs of risk management minus the costs of bearing the risk must be lower than the profit from risk management. The same applies to so-called secondary risk impacts, which are not visible at the beginning. Risk management must be positive in the cycle of the following eight areas: finances – public relations – SPHW – ethics – internal environment of the firm – employees – collective – nature. When risk management is positive for every mentioned area and acceptable in cost, the risk can be considered as successfully solved (Seňová et al., 2008).
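As a simple illustration of the law of effectiveness (the figures below are assumed, not taken from the source), the check can be written in a few lines of Python:

    cost_of_risk_management = 20000.0      # assumed yearly cost of the risk programme
    cost_of_bearing_risk = 5000.0          # assumed yearly cost of the retained risk
    profit_from_risk_management = 40000.0  # assumed yearly losses avoided

    # law of effectiveness: management costs minus the cost of bearing the risk
    # must stay below the profit gained from managing the risk
    effective = (cost_of_risk_management - cost_of_bearing_risk) < profit_from_risk_management
    print("risk management is effective:", effective)  # True for these assumed figures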

5. Origin and development of the system of SPHW management
Formalized management systems have been appearing since the beginning of the 1980s. At present there are three areas in which management systems are implemented:
- quality management,
- environmental management,
- SPHW management.
In the history of management systems, quality management systems appeared first; they are oriented towards the product, or its quality. Historically younger than quality management systems are environmental management systems, which deal with the impact of the whole life cycle of the product on the living environment; it follows that such systems are oriented towards the production process. The youngest systems are systems of SPHW management, which are oriented towards the organization's employees (people). Every human activity also bears risks of various types and extents. Therefore it is necessary to know such risks, eliminate them and manage them (Šolc, 2007).
5.1 Development of SPHW in the European Union
The area of SPHW underwent an extensive transformation in the EU at the end of the 1980s. In 1985 there were more than 300 directives for the area of SPHW that described detailed security demands. But experts determined that this system of prescriptions had stopped being functional for application in practice. The European Commission decided to cancel every directive and to create a new system of directives for the area of SPHW. The obligations of every employer and employee were included in one prescription that determined the installation of tools for managing and supporting the improvement of SPHW. This new philosophy of improving SPHW was constructed on three basic principles:
- work security must be organized with regard to every aspect connected with the work,
- the employer must know and evaluate what can present a real danger for employees at the working place and accept responsible measures – he must evaluate risks,
- improving the level of SPHW requires the cooperation of employers and employees; therefore the employer is obliged to involve employees in solving SPHW tasks.
A new prescription was accepted in 1989 – Framework Directive No 89/391/EEC on the introduction of measures for the improvement of SPHW. This directive defines the basic principles of prevention and states the framework responsibilities of employers and employees. Implementing directives result from this directive; they determine the minimal demands for the security and protection of employees' health, working conditions and the working environment, with orientation towards working tools, the working environment, personal protective working tools, manipulation with burdens, work with screens, asbestos, chemical elements, etc. The technical directive on machines No 89/392/EEC also has a framework character, since it installs the system of conformity assessment and marking of products with CE. This directive is connected with subsequent directives on the technical requirements for individual types of products and technical equipment. These include, for example, electric equipment, elevators, cranes, pressure vessels, etc., but also toys, firearms, etc. Such directives are marked as maximal security demands. When these demands are fulfilled, no member state can prevent or limit the introduction of the products to its market and into service. That means securing the free movement of goods (STN OHSAS 18001:2009).
5.2 Development of SPHW in Slovakia
One of the conditions of the accession agreement that Slovakia concluded with the EU member states was the demand to harmonize Slovak legislation with the legal system of the EU. The law on SPHW installed new institutions that had no tradition in Slovakia:
- risk evaluation,
- SPHW policy,
- representatives of employees,
- SPHW commission,
- plant health services,
- etc.
In 2001 an implementing Government Regulation was published, by which the individual EU directives were implemented. For the proper application of the new prescriptions in practice it is necessary to find solutions in technical norms, manuals of good practice and expert literature. With the entrance of Slovakia into the EU, European legislation also started to be gradually balanced and harmonized with Slovak legislation. The basic Slovak law is the law of the National Council No 124/2006 Coll. on SPHW (Law No 124/2006). The subject of the law is the general principles of prevention and the basic conditions for providing SPHW and for excluding risks and factors causing working accidents, occupational diseases and other damage to health during work. The Slovak body responsible for controlling the observance of measures in the area of SPHW and performing inspection is the National Inspection of Work (NIW). Experts for the area of SPHW are organized in the non-profit organization Slovak association for security and protection of health during work and protection against fire (Šolc, 2003).
According to the SPHW directive No 89/391/EEC, Article 6, the employer is obliged, within the framework of his responsibility, to take the measures necessary for securing the safety and protection of employees' health within the framework of prevention of threats during work, to provide information and training, and to secure the necessary organization and tools in changed situations with the goal of improving the existing situation.
The employer evaluates the risks connected with the safety and health of employees, provides improvement of the level of protection that is ensured for employees with regard to the abilities of the workers, and takes proper measures so that only trained workers have access to spaces where there is serious danger. Measures connected with security, hygiene and health protection during work may not in any case be charged as financial costs to the workers.
According to Article 12, employees are obliged to take care of their own security and health protection. They must know how to properly use tools and equipment, dangerous elements and transport vehicles. They must immediately inform the employer, or another worker with specific responsibility for safety and health protection, about working situations that are reasonably considered as threatening the safety of the workers.

A review of some other laws in the area of SPHW follows:
- Law No 547/2009 Coll., which amends and supplements Law No 311/2001 Coll., the Labour Code, as amended by later prescriptions,
- Law No 140/2008 Coll., which amends and supplements Law No 124/2006 Coll. on security and health protection during work and on the change and supplementation of some laws, as amended by Law No 309/2007 Coll., and on the change and supplementation of Law No 355/2007 Coll. on the protection, support and development of public health and on the change and supplementation of some laws,
- Law No 126/2006 Coll. on public health and on the change and supplementation of some laws,
- Law No 125/2006 Coll. on labour inspection and on the change and supplementation of Law No 82/2005 Coll. on illegal work and illegal employment and on the change and supplementation of some laws,
- Constitutional Law No 323/2004, which changes and supplements the Constitution of the Slovak Republic No 460/1992 Coll. as amended by later prescriptions,
- Law No 261/2002 Coll. on the prevention of serious industrial accidents (Šolc, 2003).
5.3 What is a system of SPHW management?
For securing the permanent prosperity of an organization it is necessary that there exists a leading management mechanism able to secure the proper functioning of the organization. Generally the principle is applied that only 15% of problems can properly be placed on the employees, while 85% of problems should be handled by the management system. As in other areas of organization management, it is necessary to install an effective system of management in the area of SPHW as well.
5.3.1 Holistic approach to SPHW
When the SPHW policy is determined by a so-called holistic approach, this means an orientation towards solving SPHW with regard to every aspect connected with work. In the first step there was a unified understanding of the security and health area: physical, psychical and social comfort. The appeal to a holistic approach is in a certain sense an appeal to the integration of every aspect connected with the work. In this way there is motivation for applying an integrated system of management.
5.3.2 Systematic approach to SPHW
Previous practice in SPHW management was oriented mainly towards ensuring that the situation at working places, the state of technical equipment and the performance of work were in accordance with the prescriptions for providing SPHW. The new SPHW process places the emphasis on finding new ways of avoiding shortcomings. Such a process results from the following principles:
- failure to observe security rules is not accidental, but a consequence of improper work organization,
- working accidents, occupational diseases and improper working conditions are also largely consequences of improper work organization,
- asserting SPHW is effective only when it means not only amending individual shortcomings, but also searching for the reasons for their appearance and performing measures to prevent shortcomings from arising,
- system security depends on the level of the technical solution, but also on the working environment and on the people who form part of the system, that is, on all elements of the system man – machine – working environment; this also means that the organizational solution is as important as the technical solution,
- effective accident avoidance can also be achieved by targeted analysis of shortcomings and unwanted events that have not yet caused damage (near accidents, events without consequences),
- the solution of SPHW tasks is oriented primarily towards organizational and system measures.

 | OLD APPROACH | NEW APPROACH
Approach | technical | systematic
Methods | solving of negative aspects (accident consequences) | prevention and avoidance of negative aspects
Orientation | machines, equipment, working environment | human factor, culture of work
Principle (comparison) | hardware | software
Management | active | participative
Responsibility for SPHW | safety technician | management and every employee
Experts | technicians, engineers | hygienists, psychologists, sociologists, risk experts, system analysts

Table 1. How the SPHW process has changed
6. Safety and protection of health during work – Legislative demands, rights and duties of employees and employers
SPHW is part of the protection of employees that is obligatory under Slovak legislation. Every employer must have elaborated references of the risks that influence employees at the working place. Security and protection of health during work (SPHW) is such a state of working conditions that excludes the action of dangerous and damaging factors on employees. The main goal of measures for providing SPHW is to prevent working accidents and occupational diseases from arising. Every employer is therefore obliged to perform measures with the goal of removing the reasons for threats to the life and health of employees and of creating secure working conditions (Čulková, Teplická, 2008).
Working accidents or harms belong among improper working conditions that are connected mainly with objective and subjective reasons:
- objective reasons mean improper working conditions connected mainly with an improper technical level of machines and equipment, protective equipment and personal protective instruments, bad spatial arrangement of the working place, negative action of physical factors in the working space, as well as objective reasons resulting from social and psychological conditions at the working place;
- subjective reasons are caused by the human factor (Antošová, Csikósová, 2007).
Securing the protection of health during work is necessary in every production firm. Decreasing the number of working injuries, and thereby also avoiding various dangerous situations and risks at the working place, can be achieved only by knowing and following the basic rules of behavior at the working place and by fulfilling rights and duties. It is especially important that the leaders of the organization know these rules perfectly and that they observe them. Only then can they properly direct and warn their subordinates and thereby secure fluent operation without working injuries. For this reason leaders must be experts in this area and must therefore undergo training regularly. The elimination of risks that threaten the health and lives of employees should be the main task of every employer. A necessary step in this important task is the identification of every danger connected with the individual working activities and the stating of the risk sources resulting from every identified danger, including present as well as planned security measures according to the Labour Code and the law on security and protection of health during work. The employer must therefore accept effective measures with the aim of decreasing the risk occurrence to a minimal level (Drahten, Hermann, 2007).
Risks are connected mainly with:
- threats resulting from the working activity,
- threats resulting from negative influences of harmful industrial agents and other factors of the working environment (including ergonomics),
- threats resulting from the design, construction, installation, standard activation, standard operation (failure-free operation or a situation when a defect has occurred), standard disconnection, maintenance, repair, liquidation and dismantling (life phases of technical equipment, machines, tools, buildings, etc.).
The team that performs the evaluation consists of at least the following persons:
- leaders at the corresponding level of the organizational unit,
- employees who perform the evaluated activity (including every activity performed on the technical equipment, taking every situation on the equipment into account),
- the representative of the employees for security and protection of health during work,
- expert employees for SPHW (from the area of security),
- according to need, also specialists from other expert departments (maintenance, reserved technical equipment, fire protection, etc.).
The process of risk evaluation is performed at least once a year and in the following cases:
- preliminary inspection of the working place, or installation of equipment into service,
- change of legal or other claims that could have an impact on the risk evaluations,
- change of the activity, practice, service conditions, products and services,
- change of technology, processes and equipment,
- change of purchased and used raw materials and material, including products of the production processes,
- changes following the results of management review,
- appearance of shortcomings following the results of SPHW verification,
- appearance of shortcomings following the results of observation, inspection, or the initiative of employees or their representatives,
- occurrence of an accident or near accident,
- a direction of the bodies of the state administration for control over SPHW.
The results of the process of risk evaluation must be demonstrably consulted (a report of acquaintance) with the employees who are exposed to these risks. Managing SPHW must be a dynamic process that secures permanent improvement. The rules of the SPHW management system result from the following principles (Balážiková, 2009):
- the organization's SPHW policy contains the basic aims to be achieved in SPHW, and the program of its realization includes mainly the process, tools and way of its performance; responsibility for the SPHW conception lies at the highest level of management, which means management has to develop and state its own SPHW conception that is in balance with the conceptions of the organization,
- management should also secure that this conception is understandable, applied and maintained,
- the system of SPHW management must put the emphasis mainly on prevention and damage prognosis, not on the removal of shortcomings; the system must be active, not reactive,
- it is necessary to apply the system in every area of the organization's activity: development, design, construction, input material, used technology, machines, tools and equipment, control, service, maintenance, human resources management, etc.,
- responsible, specialized working staff should be secured, their responsibilities, competences, work descriptions and vertical and horizontal relations clearly stated, and the organizational structure properly placed within the frame of total organization management,
- the system must have a stated flow of information and secured feedback that makes it possible to compare the system with the achieved results and with the level of technology and science,
- an important element of the system is documentation: every principle and process has to be written down, every activity should be documented, and the marking of products has to be secured,
- a principle of the system is also the planning method, with the aim of securing that production operations run under managed conditions in the prescribed way, thereby achieving the possibility of adequate operative management,
- the functioning of the management system also demands a control system after every operation; special attention is given to the choice and preparation of employees at every level: methodology of preparation, motivation and employee involvement,
- the application of corresponding security prescriptions, norms and processes, the identification and evaluation of risk, and the analysis of results are the main methods used during creation of the system,
- feedback.
Installing the system of SPHW management and pragmatically linking it with the management of other firm activities creates the preconditions for:
- minimizing the risk of damage to employees' health and loss of life,
- minimizing damages and losses caused by working disability due to injuries and occupational diseases and by interruption of production due to technical equipment damage,
- optimization of the working process, orderliness, planning, and installing order and discipline at the working place,
- engaging employees in the tasks of SPHW, increasing the motivation and creativity of employees and their responsibility for their own health,
- improving the working and social comfort of employees, improving working conditions and labor relations,
- increasing the culture of work and improving the firm's image and competitiveness (Seňová, 2008).
6.1 Risk matrix, process diagram of SPHW risk management, and the HELP program in Slovak firms
Employees who perform risk analysis must be competent and must know how to manage the problems of the given area in the firm. Risk evaluation can also be managed by the employer alone, mainly in a small firm or service. The employer can also use external services for risk evaluation through various experts and advisors (for example certified specialists in the area of work security). But the employer should avoid using external services in areas that should be solved only within the organization.
At present we can call the persons who deal with risk management in the firm risk managers, or they can be safety technicians. These people need the following information during risk analysis and evaluation at the working place:
- the risk factors and risks that already exist, as well as information about the reasons for their arising,
- the used materials, machines, equipment and technologies,
- the working processes that are used during the work and information about the employees who use these working processes,
- the development of the accident rate at the individual working places,
- the number of threatened persons and the extent of anticipated damages,
- legislative and technical norms and demands for security, etc. (Seňová, Antošová, 2007).
6.1.1 Point method for the evaluation of individual factors of the working environment

Type | Level | Description of the event | General description
Often | A | It will probably appear often | Expected continually
Probable | B | It will occur several times during the life period | Frequent
Casual | C | It will occur occasionally during the life period | Several times
Rare | D | Not probable, but possible | Expected only rarely
Not probable | E | Almost excluded | Possible only very rarely

Table 2. Probability table

Type | Category | Health damage | Technology damage
Catastrophic | I | Killing | Loss of the system
Critical | II | Serious injury, affection | Large damage to the system
Marginal | III | Lighter injury or affection | Low damage to the system
Negligible | IV | Less than a light injury | Negligible damage to the system

Table 3. Consequences table

RISK MATRIX
Probability / consequence | I catastrophic | II critical | III marginal | IV negligible
1 often | 1 | 3 | 7 | 13
2 probable | 2 | 5 | 9 | 16
3 casual | 4 | 6 | 11 | 18
4 rare | 8 | 10 | 14 | 19
5 not probable | 12 | 15 | 17 | 20

Table 4. Risk values determined by the point method (combination of probability and consequence)
Scale of the risk
The numerical values of the risk can be ranked into four groups that characterize the level of the risk.

Point range | Level of risk | Criteria of security
1-5 | Not acceptable | Dangerous system, permanent threat of damage, necessity to end the activity immediately
6-9 | Unwanted | Improper security, probable possibility of damage, measures with a short-term deadline
10-17 | Acceptable with inspections | The risk cannot be neglected in spite of the low possible consequences; measures must be accepted
18-20 | Acceptable without inspections | The system is classified as secure, but it can be improved by planned reformation

Table 5. Scale of the risk according to the point method
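Tables 2-5 can be read as a simple lookup. The following Python lines are only an illustrative sketch (the function and variable names are assumed): they map a probability level (A-E, Table 2) and a consequence category (I-IV, Table 3) to the risk value of Table 4 and the corresponding level of Table 5.

    # Risk values of Table 4, indexed by probability level (Table 2)
    # and consequence category (Table 3).
    RISK_VALUES = {
        "A": {"I": 1,  "II": 3,  "III": 7,  "IV": 13},
        "B": {"I": 2,  "II": 5,  "III": 9,  "IV": 16},
        "C": {"I": 4,  "II": 6,  "III": 11, "IV": 18},
        "D": {"I": 8,  "II": 10, "III": 14, "IV": 19},
        "E": {"I": 12, "II": 15, "III": 17, "IV": 20},
    }

    def risk_level(probability, consequence):
        """Return the Table 4 risk value and the Table 5 risk level."""
        value = RISK_VALUES[probability][consequence]
        if value <= 5:
            level = "not acceptable"
        elif value <= 9:
            level = "unwanted"
        elif value <= 17:
            level = "acceptable with inspections"
        else:
            level = "acceptable without inspections"
        return value, level

    print(risk_level("B", "II"))  # (5, 'not acceptable')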
Risk evaluation is the process of evaluating the probability and seriousness of the damaging effect on people due to exposure to a dangerous factor under defined conditions from a defined source. It consists of determination of the danger, evaluation of the exposure, estimation of the dose-effect relation, characterization of the risk and determination of the uncertainty of the evaluation. The action of the individual factors of the working environment depends mainly on the way and length of exposure and on the reaction of the employee's organism, or on the measure of his tolerance or resistance against the given factors. This means that the employee is influenced not only by a single risk factor; commonly several factors influence him at the same time.


Fig. 3. Process diagram for risk management during SPHW
Depending on what we demand from the evaluation of working environment quality, we can define the following goals of the evaluation process:
- evaluation of critical, typical and prescribed factors of the working environment,
- evaluation of chosen factors, evaluation of factor classes,
- evaluation of the complex quality of the working environment.
One of the modern tools used in business practice in Slovakia at present is the program H.E.L.P., which combines the principles of work security, industrial hygiene and health. Application of this system makes it possible to avoid losses of equipment, interruptions of working activity and injuries of employees. The system is defined as follows:
- Principles – the principles on which the H.E.L.P. program is based; they express the proved liability of the employer to take care of the health, security and comfort of every employee.
- Strategies – the strategies show the way to achieve successful prevention of losses. They define in a proper and unambiguous way what the employee must do in order to achieve loss prevention.
- Techniques – systematic processes according to which the strategies for loss prevention are implemented. They are designed so that they help to perform the concrete tasks connected with loss prevention.
- Methods – instructions on how to install the H.E.L.P. program at the working place. The tools of the program are forms, documents and other information sources used during realization of the instructions mentioned in the program methods (Seňová, Antošová, 2007).
Risk management can be realized in any firm according to the process diagram in Fig. 3. Description of the process diagram for risk management during SPHW:
1. Determination of examined space – in this phase there are determined margins of
evaluated space.
2. analysis of examined environment, that means:
- analysis of every persons (employees, clients, visitors) that could be in the space or
that are in the space,
- analysis of any working activity that is performed in the examined space, or that
could be performed in this space,
- content of examined working space, that means what exists in this space – energetic
distribution, technical and technological equipments, materials and raw materials,
dangerous chemical elements, etc.
3. identification and analysis of dangers and threats means finding of real and potential
dangers, threats and their characteristics – for example searching of present experiences
with service of given system, searching of documentation, direction, inspections,
investigation of accidents etc.
4. during identification of dangers and threats that connect with service of technical
equipment it is necessary to analyze every phases of equipment service – delivery,
installation, maintenance, damage, etc.
5. in case when identified danger or threat can be immediately removed, it is necessary to
realize measurements for its removing immediately and by this way process of risk
management is performed. This step means removing mainly of common, immediately
removable defects.
6. according risk character to state risk type (security risk, health risk, technical risk, etc.)
and goal of threatening (people, material, production) in accordance to which goal
given threatening can have negative influence.
7. to determine probability of threatening occurrence.
8. to determine severity of threatening and its consequence. During determination of
severity it is necessary to count always with worst reliable consequence that can appeal.
9. determination of risk level (for example by the way of higher mentioned risk matrix).
Best way is to divide this matrix to three risk levels:
- acceptable risk
- temporary acceptable risk (marked for example by blue color)
- not acceptable risk (marked for example by red color).

New Technologies – Trends, Innovations and Research

364
10. If the risk is acceptable, no further measures are required for its management. If the risk is temporarily acceptable, a time-limited measure is required and the risk is entered into the risk register. If the risk is evaluated as not acceptable, relevant measures must be taken immediately and the risk is likewise recorded in the risk register (a short triage sketch follows this list).
11. A working activity connected with a not acceptable risk must not be performed until measures are implemented that decrease the risk to the temporarily acceptable level.
12. Risk management focuses on not acceptable and temporarily acceptable risks, with the goal of removing them completely or decreasing them to the level of acceptable risk (Seňová, Antošová, 2007).
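To make the triage in steps 9–12 concrete, the following minimal Python sketch (not part of the original methodology; the thresholds and register fields are illustrative assumptions) classifies a numeric risk value into the three acceptance levels and records the non-acceptable items in a risk register:

ACCEPTABLE = "acceptable"
TEMPORARY = "temporarily acceptable"
NOT_ACCEPTABLE = "not acceptable"

def acceptance_level(risk_value, acceptable_max=3, temporary_max=11):
    # Thresholds are assumptions for illustration; a real application would take
    # them from the matrix or method actually used (cf. Tables 8 and 9 below).
    if risk_value <= acceptable_max:
        return ACCEPTABLE
    if risk_value <= temporary_max:
        return TEMPORARY
    return NOT_ACCEPTABLE

risk_register = []

def register_risk(description, risk_value):
    level = acceptance_level(risk_value)
    entry = {
        "risk": description,
        "value": risk_value,
        "level": level,
        # Step 11: work connected with a not acceptable risk must not be performed.
        "work_allowed": level != NOT_ACCEPTABLE,
    }
    # Steps 10 and 12: temporarily and not acceptable risks enter the risk register.
    if level != ACCEPTABLE:
        risk_register.append(entry)
    return entry

print(register_risk("unguarded rotating part", 14))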
6.2 Evaluation of risks in practice
Risk evaluation for firms is carried out by specialized companies or by specially trained workers. A suggested procedure for risk evaluation within SPHW may look like the following example:
Method: Simple point method for SPHW risk evaluation
The simple point method is a comprehensible and simple way of evaluating the measure of a threat. It is a suitable method for risk review at a workplace and serves as a basis for safety measures in operation. It is expressed in a semi-quantitative way – by assigning 1–5 points when evaluating probability and 1–4 points when evaluating consequences, in the latter case together with a verbal description of the severity of the consequences. When evaluating the measure of risk, the risk is defined by a matrix of numerical risk values built from the consequence and frequency values.
R = P x C (3)
P – probability of the risk arising and existing – an estimate of the possibility that an unwanted event will occur and of how often it will occur. This parameter follows from the frequency with which the risk situation arises within the evaluated system. The more an employee is exposed to the various risk factors, the higher the probability of the risk arising.

Value | Probability | Frequency of origin | Time period of threat
1 | Very low | Occurrence of the event is almost excluded | Almost impossible threat
2 | Low | Occurrence of the event is improbable, but possible | Very rare threat
3 | Middle | The event will arise sometimes during the life cycle of the equipment or activity | Rare threat
4 | High | The event will arise several times during the life cycle of the equipment or activity | Frequent threat
5 | Very high | The event will arise very often | Continual threat
Table 6. Parameters of the point method – probability

C – consequence – expresses the level and severity of the consequence of the unwanted event. This parameter evaluates the measure of damage to the employee's health resulting from the unwanted event caused by the risk situation.

Value | Consequence | Characteristics of the consequence
1 | Negligible | Less than a minor injury, negligible damage to the system
2 | Marginal | Minor injury, onset of an occupational disease, minor damage to the system, financial losses
3 | Critical | Serious accident, occupational disease or extensive damage to the system, production losses, large financial losses
4 | Catastrophic | Death as a consequence of a working accident, or total destruction of the system, irreplaceable losses
Table 7. Parameters of the point method – consequences
R – risk – the combination of the two parameters, probability (P) and consequence (C), determines the resulting risk value. The lowest value is 1 and the highest is 20.

Frequency \ Consequence | 1 | 2 | 3 | 4
1 | 1 | 4 | 6 | 12
2 | 2 | 7 | 11 | 13
3 | 3 | 10 | 15 | 17
4 | 5 | 12 | 16 | 19
5 | 8 | 14 | 18 | 20
Table 8. Numerical expression of the risk value – point method
According to the point range, the risk in the simple point method is ranked into four categories. The resulting risk value determines whether the given risk is acceptable or whether measures must be taken to remove or minimize it (a short calculation sketch follows Table 9).

Point
extend
Evaluation
(criteria)
Necessity for security measurements
1 – 3 Acceptable System is secured, common processes
4 – 11 Mild System is secured with condition of service training,
inspections, etc.
12 – 15 Unwanted System is not secured, it is necessary to accept technical,
other measurements
16 – 20 Not acceptable System is not acceptable – immediate applying of
protection measurements
Table 9. Point range of the risk and necessity for security measurements
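As an illustration, the following short Python sketch (an illustrative aid, not part of the chapter) encodes the matrix of Table 8 and the point ranges of Table 9, so that the risk value R = P x C and its category can be looked up from the probability and consequence values:

# Table 8 as a lookup: RISK_MATRIX[probability][consequence] = R
RISK_MATRIX = {
    1: {1: 1, 2: 4, 3: 6, 4: 12},
    2: {1: 2, 2: 7, 3: 11, 4: 13},
    3: {1: 3, 2: 10, 3: 15, 4: 17},
    4: {1: 5, 2: 12, 3: 16, 4: 19},
    5: {1: 8, 2: 14, 3: 18, 4: 20},
}

def simple_point_risk(probability, consequence):
    """probability: 1-5 (Table 6), consequence: 1-4 (Table 7)."""
    r = RISK_MATRIX[probability][consequence]
    if r <= 3:
        category = "acceptable"
    elif r <= 11:
        category = "mild"
    elif r <= 15:
        category = "unwanted"
    else:
        category = "not acceptable"
    return r, category

# High probability (4) combined with a critical consequence (3):
print(simple_point_risk(4, 3))   # -> (16, 'not acceptable')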
Method: Extended point method
In risk evaluation by the extended point method, an extended definition of the risk is used, given by the following expression:
R = P x C x I (4)
The risk measure in the extended point method is calculated as a simple multiplication of three parameters; compared with the simple point method (where R = P x C) it is extended by the parameter "I" – the influence of the SPHW level (the evaluator's judgement). This method of risk evaluation is expressed in a semi-quantitative way – by assigning a point value of 1–5 for probability, 1–5 for consequence and 1–5 for the influence of the SPHW level, each accompanied by a verbal description.
From the risk value R and the classification of the safety of the objects it follows whether safety measures for decreasing or removing the risk must be taken.
P – probability of the risk arising and existing – an estimate of the possibility that an unwanted event will arise and of how often it will appear. This parameter follows from the frequency with which the risk situation arises within the evaluated system. The more often, and the more intensively, an employee is exposed to the risk factors, the higher the probability of the risk arising.

Class | Probability | Characteristic of the probability
1 | Not probable | The undesirable event is almost excluded
2 | Random | The undesirable event is improbable, but possible
3 | Probable | The undesirable event can arise
4 | Very probable | The undesirable event will probably arise
5 | Permanent | The undesirable event will probably arise very often
Table 10. Evaluation table for the extended point method – probability
When evaluating the probability of an accident or unwanted event arising, we proceed from:
- data on the past accident rate,
- estimates made during workplace inspections,
- data on controls – internal and external, performed expert inspections and examinations.
The probability of an accident is influenced by the following factors:
- measurable factors: duration of the danger's influence, time of exposure, system parameters, temperature, noise, dust, speed, speed of onset of the unwanted event, etc.,
- non-measurable factors: the human factor, qualification, attention, stress, quality of control, revision and test measurements, reliability and maintenance of safety measures, etc. (Mikloš, 2004).
Determining how strongly the individual factors influence the probability of a concrete negative event is a subjective judgement of the evaluators based on the factors mentioned above.
C – consequence – determines the level and severity of the consequence of the unwanted event. This parameter evaluates the measure of damage to employees' health resulting from the unwanted event caused by the risk situation.

Class | Consequence | Characteristic of the consequence
1 | Negligible | Small injury – less than a minor injury, negligible financial and material losses
2 | Marginal | Minor injury, illness, onset of an occupational disease, small financial and material losses
3 | Significant | Serious injury requiring hospitalization, larger material and financial losses
4 | Critical | Severe occupational injury with permanent consequences, occupational disease, great financial and material losses
5 | Catastrophic | Fatal or mass injury, losses leading to liquidation
Table 11. Characteristics of risk consequences in the extended point method
When estimating the consequence of an accident we proceed from:
- the severity of the accident or health damage – fatal, mass, severe, or serious accident requiring hospitalization, or a slight, small accident,
- the extent of the damage – one person, several persons, material damage,
- measurable factors: type of accident (other, severe, fatal), number of threatened people, system parameters (height of the working place, weight of the manipulated burden, etc.),
- non-measurable factors: the relationship between the danger and its effect.
I – influence of the SPHW level – the evaluator's own assessment of the risk situation. This parameter takes into account the management level, the duration of the threat's influence, the qualification of employees, the working ethic, the use of personal protective equipment, the level of prevention, the state and age of the technical equipment, the severity of accidents or health damage, the level of maintenance, the performance of controls, revisions and examinations of technical equipment, the influence of the working environment, the separation of the working place, stress, etc.

Level | Influence of the SPHW level
1 | Negligible influence on the probability and consequences
2 | Slight influence on the probability and consequences
3 | Non-negligible influence on the probability and consequences
4 | Important, great influence on the probability and consequences
5 | Very important influence on the probability and consequences
Table 12. Levels of SPHW influence in the extended point method
R – risk – the simple multiplication of all three parameters – probability (P), consequence (C) and influence of the SPHW level (I) – gives the resulting risk measure (R = P x C x I). The lowest value is 1 and the highest is 125. According to the point range, the risk in the extended point method is
ranked into five categories. The resulting risk value expresses whether the given risk is acceptable or whether measures must be taken to remove or minimize it (a short classification sketch follows Table 13).

Risk category | Risk | Point range | Evaluation of safety (criteria) | Necessity of safety measures
1 | Negligible | 1 – 4 | Acceptable safety | No measures are necessary, but informing the employees is necessary
2 | Middle | 5 – 15 | Acceptable risk with increased attention | It is necessary to plan improvements, strive for improvement and train employees in managing the risk
3 | Precarious | 16 – 50 | The risk cannot be accepted without protective measures | Technical, organizational and safety measures must be adopted
4 | Unwanted | 51 – 100 | Inadequate safety, high number of injuries and unwanted events | Immediate corrective measures, or measures with short-term deadlines, must be taken
5 | Not acceptable | 101 – 125 | Permanent threat of injury, uncovered losses | The activity must be stopped immediately and the equipment withdrawn from service
Table 13. Risk evaluation according to the extended point method
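A corresponding Python sketch for the extended point method (again only an illustrative aid, not part of the chapter) multiplies the three class values taken from Tables 10–12 and maps the product onto the five categories of Table 13:

def extended_point_risk(p, c, i):
    """p, c, i: class values 1-5 from Tables 10, 11 and 12; R ranges from 1 to 125."""
    r = p * c * i
    if r <= 4:
        category = "negligible"
    elif r <= 15:
        category = "middle"
    elif r <= 50:
        category = "precarious"
    elif r <= 100:
        category = "unwanted"
    else:
        category = "not acceptable"
    return r, category

# Probable event (3), critical consequence (4), slight SPHW influence (2):
print(extended_point_risk(3, 4, 2))   # -> (24, 'precarious')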
Method: Complex method for SPHW risk evaluation
Common practice in small and medium-sized firms demands methods that are not demanding in terms of time and expert knowledge, but that nevertheless presuppose knowledge of the real state of the existing technology. When applying the method it is necessary to realize which elements of the analysed system can be neglected and which elements require increased attention. A system, in the sense of this method, is a set of elements that provides a certain activity. Common systems in which human risks exist are created by the human factor (a person with his abilities) acting in a certain working process and using working objects. The principle of the method lies in assigning proper point values to the individual elements of the system and in defining an acceptable risk (Mikloš, 2004).
The method is applied mainly in the area of human risks. It also includes some basic elements of human-factor analysis, as well as an evaluation of the risks of the working environment and the working object. It can be applied in every period of the technical life of the given system. It is suitable mainly for immediate risk evaluation with the aim of applying immediate, simple measures.
Point values are assigned to the concrete risk that exists in the working process and is a function of the individual elements of the system. These values then enable the evaluation of the total risk. The process of risk evaluation in the working process consists of the following steps:
- evaluation of the total risk of the working object (equipment),
- evaluation of the influence of the environment,
- evaluation of the ability to manage the risk,
- calculation of the resulting risk value,
- comparison of the calculated risk value with the acceptable risk value,
- implementation of measures.


1. Determination of the possible damage – suggested evaluation: 1 … 10; total value: S =
- dangerous injuries with slight consequences (impact, slight cut, contusion)
- dangerous injuries with severe consequences (fractures, deep cuts, etc.)
- dangerous injuries with permanent consequences
2. Exposure to the threat (frequency and time) – suggested evaluation: 1 … 2; total value: Ex =
- occasional, moderate exposure (for example automatic machines that run without failure, rare intervention, etc.)
- very often repeated exposure (intervention of the hands in every working cycle, for example moulding)
- frequent or permanent exposure (for example manually guided machines, or automatic machines and saws that fail and therefore require intervention, etc.)
3. Probability of injury occurrence (connected with the factor "equipment") – suggested evaluation: 0.5 … 1.5; total value: Wa =
- low (safety elements available; reliable, practical and safe protective equipment; switching off before an intervention is ensured)
- middle (complete protective equipment in good condition, but not practical, so many working movements are made without it)
- high (protective equipment lacking or insufficient; dangerous intervention during machine operation is possible)
4. Possibility to avoid or minimize the loss – suggested evaluation: 0.5 … 1; total value: Ve =
- high (losses can be avoided if persons are informed in time)
- low (the threat mechanism is very rapid and unexpected)
Table 14. Risk evaluation caused by the equipment (machine)
5. Total evaluation of the factor "equipment":
M = S x Ex x Wa x Ve (5)
M =




1. Arrangement of the working place and the zone of intervention – suggested evaluation: 0.5 … 1; total value: Ua =
- on one level
- on several stable levels
- using aids (ladder, footstep, ...)
- visible and spacious working paths
- narrow and unsuitable working paths
2. Working environment – suggested evaluation: 0.3 … 0.6; total value: Ub =
- insufficient lighting
- non-disturbing noise (acoustic signals are perceived very well)
- disturbing noise (acoustic signals are perceived insufficiently)
- comfortable climate (temperature, dust, humidity, air circulation)
- disturbing, harsh climate
3. Other load – suggested evaluation: 0.2 … 0.4; total value: Uc =
- proper arrangement of the operating elements, screens, indicators, information displays and material flow
- improper arrangement of the operating elements, screens, indicators, information displays and material flow
- light physical load (lifting and moving of loads, ...)
- heavy physical load (lifting and moving of loads, ...)
Table 15. Evaluation of the influence of the environment
4. Total evaluation of the factor "environment":
U = Ua + Ub + Uc (6)
U =




1. Qualification of the person – suggested evaluation: 10 … 0; total value: Q =
- expert qualification; trained person with skills and experience
- expert qualification; trained or skilled person
- expert qualification; trained, but not skilled and not experienced person
2. Physical and psychical factors – suggested evaluation: 3 … 0; total value: j =
- suitable psychical ability of the person for responsible work
- unsuitable psychical ability of the person for responsible work
3. Job organization – suggested evaluation: 5 … 0; total value: O =
- a formalized and observed written working directive (company directive) prescribing safe work
- a formalized, but not always observed, written working directive, so that work is not always performed safely
- no formalized or observed written working directive, or a company prescription that is not effective
Table 16. Ability of the person to manage the risk
4. Total evaluation of the factor "person":
P = Q + j + O (7)
P =



Risk evaluation at the working place by the complex method:
When evaluating the risk at a working place it is first necessary to determine the acceptability values, i.e. the values for risk acceptance. According to this method the acceptance level of an acceptable risk is 10 points (see the following figure).

Fig. 4. Level of risk acceptance
The resulting risk value is calculated according to the equation:
R = M × U – P × (M/30)* (8)
* The comparative term M/30 takes into account the significant ability of the person to manage the risk when this ability is at an increased level.
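Putting equations (5)–(8) together, the following Python sketch (with purely illustrative worksheet values, not taken from the chapter) computes the resulting risk of the complex method and compares it with the 10-point acceptance level:

def complex_method_risk(S, Ex, Wa, Ve, Ua, Ub, Uc, Q, j, O, acceptable_level=10.0):
    M = S * Ex * Wa * Ve           # equation (5): factor "equipment"
    U = Ua + Ub + Uc               # equation (6): factor "environment"
    P = Q + j + O                  # equation (7): factor "person"
    R = M * U - P * (M / 30.0)     # equation (8): resulting risk
    return R, R <= acceptable_level

# Illustrative worksheet entries (assumed): severe possible damage, frequent exposure,
# unfavourable environment, well-qualified person.
R, is_acceptable = complex_method_risk(S=6, Ex=2, Wa=1.0, Ve=1.0,
                                       Ua=1.0, Ub=0.5, Uc=0.3,
                                       Q=10, j=3, O=5)
print(round(R, 2), "acceptable" if is_acceptable else "not acceptable")   # 14.4 not acceptable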
Violation of SPHW rules leads to employee accidents or even to the death of an employee. In such a case the working place is automatically inspected by inspectors from the labour safety inspection authority, who impose penalties and investigate the threat to life and health. At the same time they also carry out preventive inspections at working places in order to check the observance of SPHW rules. Employees often put aside their protective equipment
because it hinders them, consciously ignore it, or forget it and work without it (Pačaiová et al., 2009).
7. Conclusion
At present, job safety and the health care of personnel are not only a question of meeting the legislative requirements of society, but also a question of the overall company culture. The role of management is to manage and decide questions of the company's prosperity and, at the same time, to lead and educate personnel to assume responsibility for quality and job safety. A culture of safety is a term that sounds obvious, but the actions that must be taken to implement it effectively in a firm are very difficult. Knowing what is safe, what involves menace and risk, and being able to apply these notions, increases the demands on the safety inspector and on other personnel as well. In today's 21st century, a time of improving technology and of its expanding use among ever more workers, the labour force continues to be threatened by various risks. It is therefore necessary to address the organizational question of how to prevent or eliminate the impact of these risks.
An effective system of SPHW management is the basis of good working conditions and of safety and health protection during work. It leads to higher effectiveness, productivity and quality of work, and thus to the success of the organization. A good level of SPHW can prevent irreparable losses of human life and health in working accidents, occupational diseases and material damage. The SPHW management system is part of the top management activities in organizations. A high quality of life must result from permanent training and education, mainly in the application of various methods of risk management, as well as from the awareness that zero risk does not exist but minimal risks do. Investment in SPHW and in preventive activity ultimately represents a profit for the whole organization. Employers must know that bad working conditions and risky working places can bring further expenses to the organization, and that business goals can be effectively combined with care for the safety and health protection of employees.
The chapter has tried to cover as many as possible of the factors that influence, and will influence, the whole process of risk management. Since the system of risk indexes is being developed permanently, the chapter has presented the basic aspects of risk management from a global point of view in economically developed countries. In risk management it is very important to respect the laws of effectiveness: the costs of risk management must be lower than the costs of bearing the risk. This is connected with the so-called secondary risk impacts that are not visible at the beginning. Risk management must be positive in a cycle of at least eight areas: finances – public relations – safety and protection of health during work – ethics – the internal environment of the firm – employees – the collective – nature. Only when risk management is positive for every area and acceptable from the point of view of costs can the risk be considered successfully managed.
The conclusion of the chapter summarizes its contributions to the development of theory and practice. The chapter can serve as a tool for easier definition and explanation of risk management, and also for obtaining the information necessary for effective decision-making by the firm's management.
8. Acknowledgements
The chapter is a partial output of a research project in the Slovak Republic: VEGA 1/4576/07 – Analysis and application of risk management in the enterprise environment of Slovak manufacturing corporations.
Within the mentioned project, research was carried out in five production firms in the Slovak Republic, from which the results presented in this chapter originate. The authors thank the firms for providing the information necessary for the research.
9. References
AL – Zabidi, D. – Čulková, K.: Riadenie priemyselných rizík. In: Rozvoj manažmentu v
teórií a praxi. - Žilina: Žilinská univerzita, 2011 S. 51-56. - ISBN 978-80-554-0294-9
Antošová, M. – Csikósová, A.: Trendy v prístupe k ľudskému kapitálu v 21. Storočí.. In:
Aktuálne trendy na trhu práce a v politike zamestnanosti. - Trenčín : Trenčianska
univerzita A. Dubčeka, 2007 S. 28-32. - ISBN 9788080751951
Antošová, M.: Manažment ľudských zdrojov v praxi. 1. vyd - Košice : ES FBERG TU, - 2008.
- 155 s. - ISBN 978-80-553-0017-7.
Balážiková, M.: Vplyv ergonomických parametrov na úroveň BOZP. Bezpečnosť - Kvalita -
Spoľahlivosť. Košice: TU, 2009. ISBN 9788055301372
Cehlár, M. – Teplická, K. – Seňová, A.: Risk management as instrument for financing
projects in mining industry. 1 elektronický optický disk (CD-ROM). In: SGEM 2011:
11th International Multidisciplinary Scientific GeoConference : conference
proceedings: Volume 1: 20-25 June, 2011, Bulgaria, Albena. - Sofia: STEF92
Technology Ltd., 2011 P. 913-920. - ISSN 1314-2704
Certifikácia SM BOZP podľa OHSAS 18 001. Dostupné na internete:
http://www.elbacert.sk/OHSAS-18001-system-manazerstva-BOZP.html
Chiodo, E. – Pagano, M.: Human reliability analysis by random hazard rate approach. The
International Journal for Computation and Mathematics in Electrical and Electronic
Engineering, 2004, vol.23, no.1, pp. 66-78
Čulková, K. – Teplická, K.: Evaluation of the health care from the view of quality
management system. In: Kvalita Inovácia Prosperita. Roč. 12, č. 1 (2008), s. 45-52. -
ISSN 1335-1745
Čunderlík, D.: Manažment rizika podnikania. Bratislava: Epos, 1998. ISBN 80-88810-95-7
Drahten, H. – Hermann, B. (2007). Relevant characteristics of the human system as
determining factors for the man – machine – interface in process plants. In OECD –
CCA Proceedings from Workshop on Human Factors in Chemical Accidents and
Incidents
Hannaman, G.W. – Spurgin, A.J.: Systematic Human Action Reliability Procedure, EPRI-NP-
3583, Electric Power Research Institute, Palo Alto, CA (USA), 1984
Yoshikawa, H. – Wu, W.: An experimental study on estimating human error probability.
Ergonomics, 1999, vol. 42, no. 11. ISSN 0014-0139
ISO Guide 73:2009 Risk management- vocabulary
ISO/IEC 31010:2009 Risk management – Risk assessment techniques
Kruliš, J.: Management rizik musí být prioritou. PREP Praha. In: Moderní rízení, 2010, č.3,
ISSN 0026-8720
Leiden, K. – Laughery, K.R. A Review of Human Performance Models for the Prediction of
Human Error, Ames Research Center Moffett Field, CA 94035 – 1000, 2001
Majer, Ivan: Ako ovplyvní BOZP vstup do EÚ? Dostupné na internete:
<http://www.ebts.besoft.sk/index.php?kam=forum_bozp&sub_konf=70300>
Marek, J. – Skrehot, P.: Základy aplikované ergonomie. Praha: VÚBP, 2009. 118s. ISBN 978-
80-86973-58-6
Mikloš, V. a kol.: Workplace Stress – a Growing Problem. VI.International Conference
Metallurgy, Refractories and Environment, Stara Lesna, High Tatras, Slovakia, May
25-27,2004, p. 145-150
Mižíková, I. – Csikósová, A.: Insurance as an important factor reducing the risk in industry.
In: Acta Montanistica Slovaca. Roč. 14, č. 3 (2009), s. 260-267. - ISSN 1335-1788
Spôsob prístupu: http://actamont.tuke.sk/...
Pačaiová, H. - Sinay, J. - Glatz, J.: Bezpečnosť a riziká technických systémov. Košice:
Technická univerzita v Košiciach, Strojnícka fakulta, 2009. ISBN 978-80-553-0180-
8
Pollio, G.: International Project Management and Financing. London: MacMillan Press, 2003
Prínosy pre podniky vyplývajúce z vysokej úrovne bezpečnosti a ochrany zdravia pri práci,
FACTS, 77, Európska agentúra pre bezpečnosť a ochranu zdravia pri práci, ISSN
1725-7085, Dostupné na internete:
< http://osha.europa.eu/sk/publications/factsheets/77>
Rasmussen, J. Information Processing and Human – machine Interaction: an Approach to
Cognitive Engineering. New York: North – Holland, 1985
Seňová, A. - Slaninová, P. - Weiss, E.: Posúdenie rizika bodovou metódou pre vybranú
profesiu v ťažobnom priemysle. In: Acta Montanistica Slovaca. Roč. 13/2008, č. 2, s.
278-284
Seňová, A. : Appreciate of risk management of work-people professions in mining industry.
In: SGEM 2008. Volume 2. - Sofia : SGEM, 2008 P. 211-218. - ISBN 9549181812
Seňová, A.- Antošová, M.: Hodnotenie rizík možného ohrozenia bezpečnosti a zdravia
zamestnancov ako súčasť kvality pracovného života v podniku. In: Manažment v
teórii a praxi, roč. 3, č.1-2, (2007), ISSN 1336-7137
Šimák, L.: Manažment rizík. Fakulta špeciálneho inžinierstva, ŽU v Žiline, 2006
Skrehot, P.: Chyby lidského činitele a identifikace jejich príčin. Dostupné na internete:
http://www.bozpinfo.cz/josra/josra-01-2009
Smejkal, J.- Rais, K.: Rízení rizik. Praha: Grada Publishing, 2003. ISBN 80-247-0198-7
Šolc, M.: Aplikácie niektorých nových zákonov v integrovaných manažérskych systémoch.
KDP, Košice : TU Košice, 2003
Šolc, M.: Posúdenie rizika v elektro-montážnej spoločnosti. 3.medzinárodná vedecká
konferencia, Košice 2007 „Bezpečnosť*Kvalita*Spoľahlivosť, str. 264-268. ISBN 978-
80-8073-828-0
STN OHSAS 18001:2009- Systémy manažérstva a ochrany zdravia pri práci - Požiadavky
Walker, E.B. – Maune, J.A.: Creating an Extraordinary Safety Culture. Professional Safety,
2000, no. 5, pp. 33-37
Zákon č.124/2006 Z. z., z 2. februára 2006 o bezpečnosti a ochrane zdravia pri práci a o
zmene a doplnení niektorých zákonov
Part 11
Technology Popularization



17
Open and Integral Innovation on
Tablet PC by Popularized Advanced
Media as Industrial Cradle
Makoto Takayama
1 Niigata University, Graduate School of Management of Technology, Japan
2 UCLA Medical School, USA
1. Introduction
When advanced media come close to us through tablet PCs such as the iPad, user-push innovations prevail in the user industries through the following steps:
1. Simultaneous processing of multiple information through reciprocal information exchange
2. Integration of multiple kinds of knowledge into conventional organizations/businesses
3. Intellectualization of users
4. Users push to innovate the use/utility of information
According to Shapiro (1999), the internet created a control revolution by relocating power from organizations to individuals. This resulted in open-modular innovation, which forced the industrial model to change from a centralized, vertically integrated system to a decentralized, horizontally specialized system. In the digital industry it is believed that competitiveness is restored by an open-modular system instead of a closed integral system.
In the case of the newly emerging innovation caused by cutting-edge advanced media, the opposite is true. Innovation is handled instantly by end-users in an open-integral manner on the tablet PC. Instant innovation is thus creating a power shift. This new type of power shift is carried by advanced media, since the new media have been given a role as a tool for self-transformational innovation. The power of the media moves to media users and changes the winners of business. This power shift has made it possible for everyone to handle media easily and to handle innovation instantly on the tablet PC.
The new trend of the innovators' power shift is provided by a new scheme of the innovation process model, namely open-integral innovation achieved by modularizing the power of each function through advanced media. As the most typical example, the penetration of the tablet PC is breaking the closed system open even in the medical industry, well known as the most typically closed business area, through the following steps:
1. Open modularization of each function by dividing specialized functions
2. Use of information for explanation and communication among specialists
3. Integral use of information for users
4. Open but integral use of information for everyone’s ordinary life assistance
In conclusion, the industrial structure is converted to an open-integral one through the open-modularizing power of the advanced media, on the basis of user-push innovation. Especially in the case of the medical industry, it comes to play a crucial role as an industrial cradle: industries of the next generation arise because the medical business becomes a platform to which related industries are strongly drawn.
This chapter also describes the mechanism of winning or losing in the newly born market. This theory can clarify why Apple succeeded in developing a platform by combining modules and why other firms failed. On the whole, winning or losing in the newly born market is predetermined before the competition starts.
2. Instant innovation on tablet PC
According to McLuhan (McLuhan and Fiore, 1967), "the medium is the message". McLuhan rejects the idea that people construct meanings and merely transfer these meanings through a medium. Since the beginning of the internet age, the myth has been believed that the internet is the most powerful medium yet invented. Although the internet has penetrated our everyday life, its utility is not practically improved in real life unless the way the internet is used changes. With the advent of the iPad in 2010, the ways of using the internet have expanded, and strangers separated by great distances can accordingly gather to engage one another in the same content for the possible improvement and practical application of various technologies not merely related to IT. For the first time in the internet era, this gives these media the allure of a new dimension of progress in our real life. A beyond-internet era has begun, in which technology progresses remarkably and life is enriched by the spread of media through tablet terminals such as the iPad. The phenomenon can be compared to online games on the internet, in which strangers separated by great distances gather in a world of virtual reality to engage one another in contests and battles of every description. This chapter shows that internet media have been drastically outdated by the new generation of smart devices such as the iPad since 2010.
2.1 Change of the role of the medium
2010 is said to mark the "first year of the iPad" (Nikkei Business, Jan. 18, 2011). The tablet PC is extremely easy to use, even for a baby or an elderly person. It has brought many advantages to everyday life: extension of human capability, resolution of problems caused by information asymmetry, and popularization of the use of cutting-edge high-tech media and high technology. According to Shapiro (1999), the internet caused a control revolution by relocating power from organizations to individuals. This was apparently caused by a power shift, since controllability of information was a source of power in organizations in the past. In fact, information is still controlled in dictatorial states. At that time the internet was simply a tool for distributing information from organizations to individuals; media were therefore thought to mediate information from one party to another. This situation is coming to an end. In the era of the tablet PC, the role of media has changed from intermediaries to creators. Regardless of whether those in power like it or not, media move fast and take the initiative in changing power. Media have changed their role from power holders to power dispersants.
2.2 Extension of human capability
The medium nowadays also plays a novel role for human perception, carrying not only passive messages but also proactive ones. Cutting-edge (advanced) media are penetrating everyday life rapidly, and the ruler of the medium is changing from the conventional majors to the public. The medium exercises much more influence, not only on the content of media but on everything, including human activity in everyday life. This is due to the expansion of the capability of the new medium itself, since the newly emerging cutting-edge media are expanding the content of the message beyond human perception. According to McLuhan (McLuhan and Carpenter, 1960; McLuhan, 1964; McLuhan, M. & McLuhan, E., 1988), the sociological role of the medium is the "extension of the human being". This theory centres on the idea of technology expanding the realm of human knowledge and experience. After 2010, the function of the advanced media has actually started exceeding the limits of human capability. The tablet PC has started expanding the realm of the message beyond human perception. These phenomena are symbolized by virtual reality (VR) human interfaces, for example VR simulation, augmented virtuality, augmented reality, ultra-realistic communication and so on. Augmented Reality (AR) displays in a general sense, within the context of a Reality-Virtuality (RV) continuum, encompass a large class of "Mixed Reality" (MR) displays, which also includes Augmented Virtuality (AV) (Milgram et al., 1994).
2.3 Countermeasure against the principal-agent problem caused by asymmetric information
The next issue of the change is how to remove the barrier to understanding information. Information is not distributed equally to everyone, even if the same information is transferred or given. No method of distributing information can avoid the problem of information asymmetry. Information asymmetry creates an imbalance of power in transactions, which can sometimes cause transactions to go awry for the inferior party in the market. In 2001, the Nobel Prize in Economics was awarded to George Akerlof, Michael Spence and Joseph E. Stiglitz for their analyses of markets with asymmetric information. In accordance with this hypothesis, the majors have kept a leading position in the market by creating an imbalance of power. The problems caused in such cases are adverse selection and moral hazard in the market or society. Information asymmetries most commonly cause principal-agent problems. A typical case is the power relationship between a patient with a fatal disease and the surgeon in the operating room. As a solution to this case, advanced media are actually disclosing or revealing what happens in the operating room.
2.4 Popularization of advanced technology through advanced media
By extending human capability, people can select and use the necessary information through advanced media. In other words, advanced media come to mediate the selection and use of information. This attenuates the influential power of the market players over innovation, since users can select what is best for them by themselves.
If cutting-edge high technology is popularized by advanced media, users can find and directly select the best way of using it. This means that the diffusion rate of cutting-edge high technology is greatly accelerated by popularized media. When advanced media are used
near us through tablet PCs such as the iPad, user-push innovations prevail in the user industries through the following steps:
1. Simultaneous processing of multiple information through reciprocal information exchange
2. Integration of multiple kinds of knowledge into conventional organizations/businesses
3. Intellectualization of users
4. Users push to innovate the use/utility of information
According to Cowhey and Aronson (2008), innovation in ICT fuels the growth of the global economy. The diffusion of internet, wireless and broadband technology, the growing modularity in the design of technologies, distributed computing infrastructures, and rapidly changing business models signal another shift. Thanks to the path-breaking action of the tablet PC, new technology emerging in the field of research is immediately developed on the tablet PC. Once an innovative application is tested and confirmed, the innovation ends up in widespread use and is instantly popularized.
Rogers (1962, 2003) explains how new ideas spread via communication channels over time. Such innovations are initially perceived as uncertain and even risky. To overcome this uncertainty, most people seek out others like themselves who have already adopted the new idea. Thus the diffusion process consists of a few individuals who first adopt an innovation and then spread the word among their circle of acquaintances. Such a diffusion process typically takes months or years. Since the internet era began in the 1990s, the use of new technology may have spread more rapidly than before, because the internet is changing the very nature of diffusion by decreasing the importance of physical distance between people. The internet has transformed the way of communicating and of adopting new ideas.
The tablet PC has changed the mode of diffusion of innovation and the process of adoption of technology. Advanced media have opened the window for experimental and direct use by users. Innovative technology is instantly adopted once its utility is recognized by users. Instant innovation can remove the gap between the experiment and the adoption of an innovation. Instant innovation greatly accelerates the frequency of feasibility studies and increases the number of trial-and-error iterations. The innovators are not only corporate actors or opinion leaders but also end-users.
3. Modularity in the healthcare business
The healthcare business requires expertise and know-how. For this reason, each type of expertise was originally divided into a specialized functional division or firm. To put it more concretely, a hospital has two primitively specialized functions: to diagnose and to treat patients. They are integrated into one synchronized activity by combining therapy and diagnosis. In this way, the healthcare business consists of parts modularized according to the diversification of specialties. These components are integrated at the hospital by unified activities for treating each patient. With the rapid penetration of advanced media, various barriers to integration are being rapidly overcome. This is due to the integrative function of the tablet PC, and especially to the rapid diffusion of the iPad among physicians.
3.1 Modularity of expertise as a source of competitive advantage
In today‘s information-rich environment, companies can no longer afford to rely entirely on
their own ideas to advance their business, nor can they restrict their innovations to a single
path to market. According to Henry W. Chesbrough (2003), the traditional vertical integration model of innovation has become obsolete. Open-modular innovation is believed to be the post-20th-century success model of firms, since it leverages internal and external sources of ideas and takes them to market through multiple paths.
Baldwin and Clark (2000) explained the merits of modularization as follows:
1. Expanding controllable capability with a minimum of complexity, by building complex products from smaller subsystems that can be designed independently yet function together as a whole.
2. Saving time and mutual adjustment, because modularity frees designers to experiment with different approaches as long as they obey the established design rules.
3. Managing uncertainty effectively.
Splitting complex products into modules increases the likelihood of innovation by combining products and services from the best of several prototypes (BOB: best of breed). This makes it possible to choose the most suitable option for many users; a complex problem is transformed into a simple multiple-choice question. In the 21st century, this control system has apparently acquired a competitive advantage over the 20th-century manufacturing and R&D model that prevailed among traditional big firms.
3.2 Modularity of the healthcare business through high-tech cutting-edge media
With the introduction of IT and the development of digitalization, work assignments have been sub-divided based on specialty in the computer, automotive, telecommunication and power industries. Even before common industry became modularized, each function of the healthcare business had already been specialized on a professional basis, from cleaning in a laundry to diagnosis in a laboratory and treatment in a consulting room. Table 1 shows the ratio of outsourcing at hospitals in Japan. More than 90% of laboratory tests, such as the examination of blood samples, were already outsourced in the 20th century. As a recent worldwide trend, collecting patient data or ordering drugs and lab tests with handheld devices can be very effective in reducing errors. The iPad has rapidly penetrated hospitals and come into general use for security and support.

Service | 1991 | 2009
Linen | 95.4% | 97.4%
Disposal of waste | 79.3% | 96.9%
Laboratory tests | 90.3% | 95.5%
Cleaning service | 70.2% | 81.7%
Food service operation | 19.8% | 62.3%
Administrative and clerical support | 23.1% | 31.8%
Sterilization and disinfection practice | 14.3% | 20.7%
Commodities management | 0% | 16.8%
Table 1. Trend of outsourcing at hospitals in Japan (Weekly Diamond, 2010)
The healthcare business consists of parts modularized for the purpose of diversifying specialties. These components are integrated at the hospital for the activity of treatment and promote the integration of the healthcare enterprise, as indicated by the initiative of the Radiological Society of North America. In contrast with this move towards a healthcare industrial standard, medical treatment in clinical practice did not open its system to the outside; rather, it protected itself against being opened and resisted the orders and restrictions issued openly by the health authorities. Furthermore, extremely high specialization has fixed the power of the majors in the market, owing to the lack of new entrants from other sectors, although a major loses market share in case a new
market is created by a product or business that competes ind