
ADVANCED MANUFACTURING

TECHNOLOGY (APS/CIM-1112)
RAPID PROTOTYPING & ARTIFICIAL INTELLIGENCE
M.TECH/COMPUTER INTEGRATED MANUFACTURING/I SEM
SAMRAT ASHOK TECHNOLOGICAL INSTITUTE, VIDISHA (M.P.)
Session 2014-15

UNIT 4: RAPID PROTOTYPING


Rapid prototyping (RP) is a family of fabrication methods to make engineering prototypes in
minimum possible lead times based on a computer-aided design (CAD) model of the item.
The historical development of RP and related technologies is presented in table 4.1.
Year of Inception    Technology
1770                 Mechanization
1946                 First computer
1952                 First Numerical Control (NC) machine tool
1960                 First commercial laser
1961                 First commercial robot
1963                 First interactive graphics system (early version of Computer Aided Design)
1988                 First commercial Rapid Prototyping system

Table 4.1: Historical development of Rapid Prototyping and related technologies


Need for Rapid Prototyping: The special need that motivates the variety of rapid prototyping
technologies arises because product designers would like to have a physical model of a new part
or product design rather than a computer model or line drawing.
A virtual prototype, which is a computer model of the part design on a CAD system, may not be
adequate for the designer to visualize the part. It certainly is not sufficient to conduct real
physical tests on the part, although it is possible to perform simulated tests by finite element
analysis or other methods. Using one of the available RP technologies, a solid physical part can
be created in a relatively short time. The designer can therefore visually examine and physically
feel the part and begin to perform tests and experiments to assess its merits and shortcomings.

Fundamentals of Rapid Prototyping: Rapid prototyping technologies can be divided into
two basic categories;
1. Material removal processes
2. Material addition processes
The material removal RP techniques involve machining, primarily milling and drilling, using a
dedicated CNC machine. To use CNC, a part program must be prepared from the CAD model.
The starting material is often a solid block of wax, which is very easy to machine, and the part
and chips can be melted and resolidified for reuse when the current prototype is no longer
needed. Other starting materials can also be used, such as wood, plastics, or metals (e.g., a
machinable grade of aluminum or brass). The CNC machines used for rapid prototyping are
often small, and the terms desktop milling or desktop machining are sometimes used for this
technology. Maximum starting block sizes in desktop machining are typically 180 mm (7 in) in
the x-direction, 150 mm (6 in) in the y-direction, and 150 mm (6 in) in the z-direction.
Material-addition RP technologies all work by adding layers of material one at a time to build
the solid part from bottom to top. Starting materials include;
1. Liquid monomers that are cured layer by layer into solid polymers
2. Powders that are aggregated and bonded layer by layer, and
3. Solid sheets that are laminated to create the solid part.
The common approach to prepare the control instructions (part program) in all of the current
material addition RP techniques involves the following steps;

1. Geometric Modeling: This consists of modeling the component on a CAD system to define its
enclosed volume. Solid modeling is the preferred technique because it provides a complete and
unambiguous mathematical representation of the geometry. For rapid prototyping, the important
issue is to distinguish the interior (mass) of the part from its exterior, and solid modeling
provides for this distinction.
2. Tessellation of the Geometric Model: In this step, the CAD model is converted into a format
that approximates its surfaces by triangles or polygons, with their vertices arranged to distinguish
the object's interior from its exterior. The common tessellation format used in rapid prototyping
is STL, which has become the de facto standard input format for nearly all RP systems.
3. Slicing of the Model into Layers: In this step, the model in STL file format is sliced into
closely spaced parallel horizontal layers. Conversion of a solid model into layers is illustrated in
Fig. 4.1. These layers are subsequently used by the RP system to construct the physical model.
By convention, the layers are formed in the x-y plane orientation, and the layering procedure
occurs in the z-axis direction. For each layer, a curing path is generated, called the STI file,
which is the path that will be followed by the RP system to cure (or otherwise solidify) the layer.
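Slicing is easiest to see on a single triangle: each layer plane z = const cuts a triangle in at most
one line segment, and the segments collected from all triangles form the layer contour. Below is
a minimal Python sketch of this idea; the function names and the example triangle are
illustrative only, not part of any RP vendor's software.

def slice_triangle(tri, z):
    # Return the (x, y) segment where the plane z = const cuts the triangle, or None.
    pts = []
    for i in range(3):
        (x1, y1, z1), (x2, y2, z2) = tri[i], tri[(i + 1) % 3]
        if (z1 - z) * (z2 - z) < 0:            # this edge crosses the slicing plane
            t = (z - z1) / (z2 - z1)           # interpolation parameter along the edge
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(pts) if len(pts) == 2 else None

def slice_model(triangles, z_min, z_max, layer_thickness):
    # Slice a triangle list into closely spaced parallel layers in the x-y plane.
    layers, z = [], z_min + layer_thickness / 2   # cut through the middle of each layer
    while z < z_max:
        segments = [s for s in (slice_triangle(t, z) for t in triangles) if s]
        layers.append((z, segments))
        z += layer_thickness
    return layers

# Example: one triangle, 0.1 mm layers (vertices lying exactly on a plane are ignored here)
print(slice_model([((0, 0, 0), (10, 0, 5), (0, 10, 5))], 0.0, 5.0, 0.1)[0])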

Fig. 4.1: Conversion of a solid model of an object into layers (only one layer is shown)

Fig. 4.2: RP process chain showing fundamental process steps

As the brief overview above indicates, there are several different technologies used for material
addition rapid prototyping. This heterogeneity has spawned several alternative names for rapid
prototyping, including layer manufacturing, direct CAD manufacturing, and solid freeform
fabrication. The term rapid prototyping and manufacturing (RPM) is also being used more
frequently to indicate that the RP technologies can be applied to make production parts and
production tooling, not just prototypes.

Fig. 4.3: Generalized illustration of data flow in RP

Classification of RP Processes: The professional literature in RP contains different ways of
classifying RP processes. However, one representation, based on the German standard of
production processes, classifies RP processes according to the state of aggregation of their
original material (Fig. 4.4).
1. Stereolithography: This was the first material addition RP technology, dating from about
1988. Stereolithography (STL) is a process for fabricating a solid plastic part out of a
photosensitive liquid polymer using a directed laser beam to solidify the polymer. Part
fabrication is accomplished as a series of layers, in which one layer is added onto the previous
layer to gradually build the desired three dimensional geometry. The Stereolithography apparatus
consists of;
1. A platform that can be moved vertically inside a vessel containing the photosensitive polymer
2. A laser whose beam can be controlled in the x-y direction.
At the start of the process, the platform is positioned vertically near the surface of the liquid
photopolymer, and a laser beam is directed through a curing path that comprises an area
corresponding to the base of the part. This and subsequent curing paths are defined by the STI
file. The action of the laser is to harden (cure) the photosensitive polymer where the beam strikes
the liquid, forming a solid layer of plastic that adheres to the platform. When the initial layer is

completed, the platform is lowered by a distance equal to the layer thickness, and a second layer
is formed on top of the first by the laser, and so on. Before each new layer is cured, a wiper blade
is passed over the viscous liquid resin to ensure that its level is the same throughout the surface.
Each layer consists of its own area shape, so that the succession of layers, one on top of the
previous, creates the solid part shape. Each layer is 0.076 to 0.50 mm (0.003 to 0.020 in) thick.
Thinner layers provide better resolution and allow more intricate part shapes, but processing time
is greater. Photopolymers are typically acrylic, although use of epoxy for STL has also been
started. The starting materials are liquid monomers. Scan speeds of STL lasers typically range
between 500 and 2500 mm/s.
The time to complete a single layer is given by the following equation;
Ti = Ai/(v D) + Tr
Where,
Ti = time to complete layer i (sec)
Ai = area of layer i (mm²)
v = average scanning speed of the laser beam at the surface (mm/sec)
D = diameter of the laser beam at the surface in mm (called the spot size, assumed circular)
Tr = repositioning time between layers (sec)
Once the Ti values have been determined for all layers, the build cycle time is the sum over all
layers (see the sketch below);
Tc = Σ Ti
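The following minimal Python sketch applies these two formulas; the layer areas, scan speed,
spot size, and repositioning time below are illustrative values only, not from the text.

def layer_time(area, scan_speed, spot_diameter, reposition_time):
    # Ti = Ai / (v * D) + Tr : time to cure one layer, in seconds
    return area / (scan_speed * spot_diameter) + reposition_time

# Example: 100 layers of 2500 mm^2 each, v = 1500 mm/s, D = 0.25 mm, Tr = 10 s
areas = [2500.0] * 100
Tc = sum(layer_time(A, 1500.0, 0.25, 10.0) for A in areas)   # Tc = sum of the Ti
print(f"Build cycle time: {Tc:.0f} s ({Tc / 3600:.2f} h)")   # about 1667 s (0.46 h)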
After all of the layers have been formed, the photopolymer is about 95% cured. The piece is
therefore baked in a fluorescent oven to completely solidify the polymer. Excess polymer is
removed with alcohol, and light sanding is sometimes used to improve smoothness and
appearance.

Fig. 4.4: Classification of RP Processes

Fig. 4.5: Stereolithography: (1) at the start of the process, in which the initial layer is added to
the platform; and (2) after several layers have been added so that the part geometry gradually
takes form.
2. Solid Ground Curing: Like Stereolithography, solid ground curing (SGC) works by curing a
photosensitive polymer layer by layer to create a solid model based on CAD geometric data.
Instead of using a scanning laser beam to accomplish the curing of a given layer, the entire layer
is exposed to an ultraviolet light source through a mask that is positioned above the surface of
the liquid polymer. The hardening process takes 2 to 3 seconds for each layer. SGC systems are
sold under the name Solider system by Cubital Ltd.
The step by step procedure in SGC is described as follows;
1. A mask is created on a glass plate by electrostatically charging a negative image of the layer
onto the surface. The imaging technology is basically the same as that used in photocopiers.
2. A thin flat layer of liquid photopolymer is distributed over the surface of the work platform.
3. The mask is positioned above the liquid polymer surface and exposed by a high powered
(e.g., 2000 W) ultraviolet lamp. The portions of the liquid polymer layer that are unprotected
by the mask are solidified in about 2 seconds. The shaded areas of the layer remain in the
liquid state.
4. The mask is removed; the glass plate is cleaned and made ready for a subsequent layer in
step 1. Meanwhile, the liquid polymer remaining on the surface is removed in a wiping and
vacuuming procedure.
5. The now-open areas of the layer are filled in with hot wax. When hardened, the wax acts to
support overhanging sections of the part.
6. When the wax has cooled and solidified, the polymer-wax surface is milled to form a flat
layer of specified thickness, ready to receive the next application of liquid photopolymer in
step 2.
The mask preparation step 1 for the next layer is performed simultaneously with the layer
fabrication steps 2 through 6, using two glass plates during alternating layers.
The sequence for each layer takes about 90 seconds. Throughput time to produce a part by SGC
is claimed to be about eight times faster than competing RP systems. The solid cubic form
created in SGC consists of solid polymer and wax. The wax provides support for fragile and

overhanging features of the part during fabrication, but can be melted away later to leave the
free-standing part. No post curing of the completed prototype model is required, as in
stereolithography.

Fig. 4.6: Solid ground curing process for each layer: (1) mask preparation, (2) applying liquid
photopolymer layer, (3) mask positioning and exposure of layer, (4) uncured polymer removed
from surface, (5) wax filling, (6) milling for flatness and thickness.
3. Droplet Deposition Manufacturing: These systems operate by melting the starting material
and shooting small droplets onto a previously formed layer. The liquid droplets cold weld to the
surface to form a new layer. The deposition of droplets for each new layer is controlled by a
moving x-y spray nozzle work head whose path is based on a cross section of a CAD geometric
model that has been sliced into layers. After each layer has been applied, the platform supporting
the part is lowered a certain distance corresponding to the layer thickness, in preparation for the
next layer. The term droplet deposition manufacturing (DDM) refers to the fact that small
particles of work material are deposited as projectile droplets from the work head nozzle.
An important criterion that must be satisfied by the starting material is that it be readily melted
and solidified. Work materials used in DDM include wax and thermoplastics. Metals with low
melting point, such as tin, zinc, lead, and aluminum, have also been tested.
4. Laminated Object Manufacturing: Laminated object manufacturing produces a solid
physical model by stacking layers of sheet stock that are each cut to an outline corresponding to
the cross-sectional shape of a CAD model that has been sliced into layers. The layers are bonded
one on top of the previous one before cutting. After cutting, the excess material in the layer
remains in place to support the part during building. Starting material in LOM can be virtually
any material in sheet stock form, such as paper, plastic, cellulose, metals, or fiber-reinforced

materials. Stock thickness is 0.05 to 0.50 mm (0.002 to 0.020 in). In LOM, the sheet material is
usually supplied with adhesive backing as rolls that are spooled between two reels, as in Fig. 4.7.
Otherwise, the LOM process must include an adhesive coating step for each layer.

Fig. 4.7 Laminated object manufacturing


The data preparation phase in LOM consists of slicing the geometric model using the STL file
for the given part. The slicing function is accomplished by LOMSlice™, the special software
used in laminated-object manufacturing. Slicing the STL model in LOM is performed after each
layer has been physically completed and the vertical height of the part has been measured.
The LOM process for each layer can be described as follows;
1. LOMSlice™ computes the cross-sectional perimeter of the STL model based on the
measured height of the physical part at the current layer of completion.
2. A laser beam is used to cut along the perimeter, as well as to crosshatch the exterior portions
of the sheet for subsequent removal. The laser is typically a 25 or 50 W CO2 laser. The
cutting trajectory is controlled by means of an x-y positioning system. The cutting depth is
controlled so that only the top layer is cut.
3. The platform holding the stack is lowered, and the sheet stock is advanced between supply
roll and take-up spool for the next layer. The platform is then raised to a height consistent
with the stock thickness and a heated roller moves across the new layer to bond it to the
previous layer. The height of the physical stack is measured in preparation for the next
slicing computation by LOMSlice™.
When all of the layers are completed, the new part is separated from the excess external material
using a hammer, putty knife, and wood carving tools. The part can then be sanded to smooth and
blend the layer edges. A sealing application is recommended, using a urethane, epoxy, or other
polymer spray to prevent moisture absorption and damage. LOM part sizes can be relatively
large among RP processes, with work volumes up to 800 mm × 500 mm × 550 mm (32 in × 20
in × 22 in). More common work volumes are 380 mm × 250 mm × 350 mm (15 in × 10 in × 14
in).
5. Fused Deposition Modeling: Fused-deposition modeling (FDM) is an RP process in which a
filament of wax or polymer is extruded onto the existing part surface from a work head to
complete each new layer. The work head is controlled in the x-y plane during each layer and then
moves up by a distance equal to one layer in the z-direction. The starting material is a solid
filament with typical diameter = 1.25 mm (0.050 in) fed from a spool into the work head that

heats the material to about 0.5°C (1°F) above its melting point before extruding it onto the part
surface. The extrudate is solidified and cold welded to the cooler part surface in about 0.1
second.
FDM was developed by Stratasys Inc., which sold its first machine in 1990. The starting data is a
CAD geometric model that is processed by Stratasys's software modules QuickSlice and
SupportWork™. QuickSlice is used to slice the model into layers, and SupportWork™ is used
to generate any support structures that are required during the build process. If supports are
needed, a dual extrusion head and a different material are used to create the supports. The second
material is designed to be readily separated from the primary modeling material. The slice (layer)
thickness can be set anywhere from 0.05 to 0.75 mm (0.002 to 0.030 in). About 400 mm of
filament material can be deposited per second by the extrusion work head in widths (called the
road width) that can be set between 0.25 and 2.5 mm (0.010 to 0.100 in). Starting materials are
wax and several polymers, including ABS, polyamide, polyethylene, and polypropylene. These
materials are nontoxic, allowing the FDM machine to be set up in an office environment.

Fig. 4.8 Fused Deposition Modeling


6. Selective Laser Sintering: Selective laser sintering (SLS) uses a moving laser beam to sinter
heat-fusible powders in areas corresponding to the CAD geometric model one layer at a time to
build the solid part. After each layer is completed, a new layer of loose powders is spread across
the surface using a counter-rotating roller. The powders are preheated to just below their melting
point to facilitate bonding and reduce distortion. Layer by layer, the powders are gradually
bonded into a solid mass that forms the three-dimensional part geometry. In areas not sintered by
the laser beam, the powders remain loose so they can be poured out of the completed part.
Meanwhile, they serve to support the solid regions of the part as fabrication proceeds. Layer
thickness is 0.075 to 0.50 mm (0.003 to 0.020 in).
It is a more versatile process than stereolithography in terms of possible work materials. Current
materials used in selective laser sintering include polyvinylchloride, polycarbonate, polyester,
polyurethane, ABS, nylon, and investment casting wax. These materials are less expensive than
the photosensitive resins used in stereolithography. They are also nontoxic and can be sintered
using low power (25 to 50 W) CO2 lasers. Metal and ceramic powders are also being used in
SLS.
7. 3-D Printing: Three-dimensional printing (3DP) builds the part in the usual layer-by-layer
fashion using an ink-jet printer to eject an adhesive bonding material onto successive layers of
powders. The binder is deposited in areas corresponding to the cross sections of the solid part, as
determined by slicing the CAD geometric model into layers. The binder holds the powders

together to form the solid part, while the unbonded powders remain loose to be removed later.
While the loose powders are in place during the build process, they provide support for
overhanging and fragile features of the part. When the build process is completed, the part is heat
treated to strengthen the bonding, followed by removal of the loose powders. To further
strengthen the part, a sintering step can be applied to bond the individual powders.
The part is built on a platform whose level is controlled by a piston. Let us describe the process
for one cross section with reference to Fig. 4.10;

Fig. 4.9 Selective Laser Sintering systems

Fig. 4.10 Three-dimensional printing: (1) powder layer is deposited, (2) ink-jet printing of areas
that will become the part, and (3) piston is lowered for next layer (key: v = motion)
1. A layer of powder is spread on the existing part-in-process.
2. An ink-jet printing head moves across the surface, ejecting droplets of binder on those
regions that are to become the solid part.
3. When the printing of the current layer is completed, the piston lowers the platform for the
next layer.
Starting materials in 3DP are powders of ceramic, metal, or cermet, and binders that are
polymeric or colloidal silica or silicon carbide. Typical layer thickness ranges from 0.10 to 0.18
mm (0.004 to 0.007 in). The ink-jet printing head moves across the layer at a speed of about 1.5
m/s (59 in/sec), with ejection of liquid binder determined during the sweep by raster scanning.
The sweep time, together with the spreading of the powders, permits a cycle time per layer of
about 2 seconds.

8. Vacuum Casting: In this process, a mixture of fine sand and urethane is molded over metal
dies and cured with amine vapor. The molten metal is drawn into the mold cavity through a
gating system from the bottom of the mold with the help of a vacuum pump. The pressure inside
the mold is usually one-third of the atmospheric pressure. In this process, the mold, in an
inverted position from the usual casting process, is lowered into the flask with the molten metal. Because
the mold cavity is filled under vacuum, the vacuum casting process is very suitable for thin
walled, complex shapes with uniform properties.

Fig. 4.11 Vacuum casting (a) Before and (b) after immersion of the mold into the molten metal.
One advantage of vacuum casting is that by releasing the pressure a short time after the mold is
filled, we can release the un-solidified metal back into the flask. This allows us to create hollow
castings. Since most of the heat is conducted away through the surface between the mold and the
metal, the portion of the metal closest to the mold surface always solidifies first; the
solid front travels inwards into the cavity. Thus, if the liquid is drained a very short time after the
filling, then we get a very thin walled hollow object. (See Fig. 4.12)

Fig. 4.12
9. Resin Injection Moulding: Resin injection moulding is a closed-mold, vacuum-assisted
process that employs a flexible solid counter tool used for the B-side surface compression. This
process yields excellent strength-to-weight characteristics, high glass-to-resin ratio and increased
laminate compression.
In this process, fiber preform or dry fiber reinforcement is packed into a mold cavity that has the
shape of the desired part. The mold is then closed and clamped.

Catalyzed, low viscosity resin is then pumped into the mold under pressure, displacing the air at
the edges, until the mold is filled.
After the fill cycle, the cure cycle starts during which the mold is heated, and the resin
polymerizes to become rigid plastic. Gel coats may be used to provide a high-quality, durable
finished product.
It is primarily used to mold components with large surface areas, complex shapes and smooth
finishes. This process is well-suited for mass production of 100 to 10,000 units/year of high-quality composite fiberglass or fiber-reinforced plastic parts. It is recommended for products that
require high strength-to-weight requirements. Tooling used in this process can be made from
various materials including aluminum, nickel shell, mild steel and polyester.

Applications of Rapid Prototyping: Applications of rapid prototyping can be classified into
three categories;
1. Design
2. Engineering analysis and planning
3. Tooling and manufacturing
1. Design: Benefits to design attributed to rapid prototyping include;
i. Reduced lead times to produce prototype components
ii. Improved ability to visualize the part geometry because of its physical existence
iii. Earlier detection and reduction of design errors
iv. Increased capability to compute mass properties of components and assemblies
2. Engineering Analysis and Planning: A few important applications in this category are as
follows;
i. Comparison of different shapes and styles to optimize aesthetic appeal of the part
ii. Analysis of fluid flow through different orifice designs in valves fabricated by RP
iii. Wind tunnel testing of different streamlines shapes using physical models created by RP
iv. Stress analysis of a physical model
v. Fabrication of preproduction parts by RP as an aid in process planning and tool design
vi. Combining medical imaging technologies, such as magnetic resonance imaging (MRI),
with RP to create models for doctors in planning surgical procedures or fabricating
prostheses or implants.
3. Tooling and Manufacturing: The trend in RP applications is toward its greater use in the
fabrication of production tooling and for actual manufacture of parts. When RP is adopted to
fabricate production tooling, the term rapid tool making (RTM) is often used.
RTM applications divide into two approaches; Indirect RTM method, in which a pattern is
created by RP and the pattern is used to fabricate the tool, and Direct RTM method, in which RP
is used to make the tool itself.
Examples of indirect RTM include;
i. Use of an RP fabricated part as the master in making a silicon rubber mold that is
subsequently used as a production mold
ii. RP patterns to make the sand molds in sand casting
iii. Fabrication of patterns of low-melting point materials (e.g., wax) in limited quantities for
investment casting
iv. Making electrodes for EDM
Examples of direct RTM include;
i. RP-fabricated mold cavity inserts that can be sprayed with metal to produce injection
molds for a limited quantity of production plastic parts
ii. 3-D printing to create a die geometry in metallic powders followed by sintering and
infiltration to complete the fabrication of the die

Examples of actual part production include


i. Small batch sizes of plastic parts that could not be economically injection molded
because of the high cost of the mold
ii. Parts with intricate internal geometries that could not be made using conventional
technologies without assembly
iii. One-of-a-kind parts such as bone replacements that must be made to correct size for each
user.

Fig. 4.13: Typical application areas of RP processes

Surface Roughness: Surface roughness is a component of surface texture. It is quantified by
the vertical deviations of a real surface from its ideal form.
Machine components which have undergone machining operation, when inspected under
magnification, will have some minute irregularities. The properties and performance of machine
components are affected by the degree of roughness of the various surfaces. The higher the
smoothness of the surface, the better is the fatigue strength and corrosion resistance.
The geometrical characteristics of a surface include,
1. Macro-deviations

2. Surface waviness
3. Micro-irregularities
The surface roughness is evaluated by the peak-to-valley height Rt and the mean roughness index Ra of the micro-irregularities.

Fig. 4.14: Surface roughness terminology

Surface Roughness Terminology: The surface roughness terminology, as shown in Fig. 4.14,
is described below;
1. Actual profile (Af): It is the profile of the actual surface obtained by finishing operation.
2. Reference profile (Rf): It is the profile to which the irregularities of the surface are referred to.
It passes through the highest point of the actual profile.
3. Datum profile (Df): It is the profile, parallel to the reference profile. It passes through the
lowest point B of the actual profile.
4. Mean Profile (Mf): It is that profile, within the sampling length chosen (L), such that the sum
of the material filled areas enclosed above it by the actual profile is equal to the sum of the
material-void areas enclosed below it by the profile.
5. Peak to valley height (Rt): It is the distance from the datum profile to the reference profile.
6. Mean roughness index (Ra): It is the arithmetic mean of the absolute values of the heights hi
between the actual and mean profiles. It is given by;
Ra = (1/L) ∫ |h| dx, integrated over the sampling length L

Where, L is the sampling length and h is the height of the actual profile measured from the mean profile.


7. Surface roughness number: The surface roughness number represents the average departure
of the surface from perfection over a prescribed sampling length, usually selected as 0.8 mm and
is expressed in microns. The measurements are usually made along a line running at right angles
to the general direction of tool marks on the surface. Surface roughness values are usually
expressed as the Ra value in microns.
Ra = (h1 + h2 + h3 + ... + hn)/n, where the hi are the absolute departures from the mean profile and n is the number of samples.
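As a quick worked example, the following Python sketch computes Ra from sampled profile
heights (the height values are illustrative):

def mean_roughness_Ra(heights):
    # Ra = (|h1| + |h2| + ... + |hn|) / n, heights measured from the mean profile
    return sum(abs(h) for h in heights) / len(heights)

# Five heights (in microns) sampled over a 0.8 mm length
print(mean_roughness_Ra([0.4, -0.3, 0.5, -0.6, 0.2]))   # -> 0.4 micron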
The surface roughness may be measured, using any one of the following instruments;
1. Straight edge
2. Surface gauge

3. Optical flat
4. Tool maker's microscope
5. Profilometer
6. Profilograph
7. Talysurf

Microfinishing Process: Microfinishing, also known as micromachining, superfinishing, and
short-stroke honing, is a metalworking process that improves surface finish and workpiece
geometry. This is achieved by removing just the thin amorphous surface layer left by the last
process with an abrasive stone or tape; this layer is usually about 1 μm in magnitude.
The microfinishing process was developed by the Chrysler Corporation in 1934.
In this process after a metal piece is ground to an initial finish, it is superfinished with a finer grit
solid abrasive. The abrasive is oscillated or rotated while the workpiece is rotated in the opposite
direction; these motions are what cause the cross-hatching. The geometry of the abrasive
depends on the geometry of the workpiece surface; a stone (rectangular shape) is for cylindrical
surfaces and cups and wheels are used for flat and spherical surfaces. A lubricant is used to
minimize heat production, which can alter the metallurgical properties, and to carry away the
swarf; kerosene is a common lubricant.
The abrasive cuts the surface of the workpiece in three phases. The first phase is when the
abrasive first contacts the workpiece surface: dull grains of the abrasive fracture and fall away
leaving a new sharp cutting surface. In the second phase the abrasive "self-dresses" while most
of the stock is being removed. Finally, the abrasive grains become dull as they work which
improves the surface geometry.
The average rotational speed of abrasive wheel and/or workpiece is 1 to 15 surface m/min, with
6 to 14 m/min preferred; this is much slower compared to grinding speeds around 1800 to 3500
m/min. The pressure applied to the abrasive is very light, usually between 0.02 to 0.07 MPa (3 to
10 psi), but can be as high as 2.06 MPa (299 psi). Honing is usually 3.4 to 6.9 MPa (490 to 1,000
psi) and grinding is between 13.7 to 137.3 MPa (1,990 to 19,910 psi). When a stone is used it is
oscillated at 200 to 1000 cycles per minute with an amplitude of 1 to 5 mm (0.039 to 0.197 in).
Superfinishing can give a surface finish of 0.01 μm.
Advantages of superfinishing include: increased part life, decreased wear, closer tolerances,
higher load-bearing surfaces, better sealing capabilities, and elimination of a break-in period.
The main disadvantage is that superfinishing requires grinding or a hard turning operation
beforehand, which increases cost. Superfinishing has a lower cutting efficiency because of
smaller chips and lower material removal rate. Superfinishing stones are softer and wear more
quickly.
Common applications of microfinishing include: steering rack components, transmission
components, fuel injector components, camshaft lobes, hydraulic cylinder rods, bearing races,
needle rollers, and sharpening stones and wheels.

UNIT 5: ARTIFICIAL INTELLIGENCE AND EXPERT SYSTEMS


Artificial intelligence (AI) is the study of how to make computers do things which, at the
moment, people do better. This definition is, of course, somewhat ephemeral because of its
reference to the current state of computer science. And it fails to include some areas of potentially
very large impact, namely problems that cannot now be solved well by either computers or
humans.
As AI migrates to the real world we do not seem to be satisfied with just a computer playing a
chess game. Instead we wish a robot would sit opposite to us as an opponent, visualize the real
board and make the right moves in this physical world. Such notions seem to push the definitions
of AI to a greater extent. The feeling of intelligence is a mirage; if you achieve it, it ceases to
make you feel so. As somebody has aptly put it, "AI is Artificial Intelligence till it is achieved;
after which the acronym reduces to Already Implemented."
The word intelligence in AI means the ability to acquire, understand and apply knowledge, or
the ability to exercise thought and reason.
Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is an
academic field of study which generally studies the goal of emulating human-like intelligence.
John McCarthy, who coined the term in 1955, defines it as "the science and engineering of
making intelligent machines".
The problem areas where AI is now flourishing most as a practical discipline are primarily the
domains that require only specialized expertise without the assistance of commonsense
knowledge. There are now thousands of programs called expert systems in day to day operation
throughout all areas of industry and government. Each of these systems attempts to solve part, or
perhaps all, of a practical, significant problem that previously required scarce human expertise.
Routine Tasks;
1. Perception: vision, speech
2. Natural language: understanding, generation, translation
3. Commonsense reasoning
4. Robot control
Formal Tasks;
1. Games: chess, backgammon, checkers, Go
2. Mathematics: geometry, logic, integral calculus, proving properties of programs
Expert Tasks;
1. Engineering: design, fault finding, manufacturing planning
2. Scientific analysis
3. Medical diagnosis
4. Financial analysis
Fig. 5.1: Some of the task domains of Artificial Intelligence

Fig. 5.2: Scope of Artificial Intelligence

Pattern Recognition: As the name suggests;
Pattern Recognition = Pattern + Recognition
Pattern: Pattern is a set of objects or phenomena or concepts where the elements of the set are
similar to one another in certain ways/aspects. The Patterns are described by certain quantities,
qualities, traits, notable features and so on.
Examples: humans, radar signals, insects, animals, sonar signals, fossil records, microorganism
signals, clouds, etc.
Humans have a pattern which is different from the pattern of animals. Each individual has a
pattern which is different from the patterns of others.
It is said that each thief has his own pattern. Some enter through windows, some through doors,
and so on. Some do only pick-pocketing, some steal cycles, some steal cars, and so on.
The body pattern of human beings has not changed for millions of years, but the patterns of
computers and other machines change continuously. Because of the fixed pattern of human
bodies, the work of medical doctors is easier compared to the work of engineers, who deal with
machines whose patterns continuously change.
Recognition: As the name suggests;
Recognition = Re + Cognition
Cognition: To become acquainted with, to come to know the act, or the process of knowing an
entity (the process of knowing).

Recognition: The knowledge or feeling that the present object has been met before (the process
of knowing again).
Pattern Recognition consists of recognizing a pattern using a machine (computer). It can be
defined in several ways.
Definition-1: It is a study of ideas and algorithms that provide computers with a perceptual
capability to put abstract objects, or patterns into categories in a simple and reliable way.
Definition-2: It is an ambitious endeavor of mechanization of the most fundamental function of
cognition.
Methodology of Pattern Recognitions: It consists of the following;
1. We observe patterns
2. We study the relationships between the various patterns.
3. We study the relationships between patterns and ourselves and thus arrive at situations.
4. We study the changes in situations and come to know about the events.
5. We study events and thus understand the law behind the events.
6. Using the law, we can predict future events.
Example:
Astrology/Palmistry: According to this methodology, it consists of the following;
1. We observe the different planets/lines on hand.
2. We study the relationship between the planets/lines.
3. We study the relations between the position of planets/lines and situations in life and arrive at
events.
4. We study the events and understand the law behind the events.
5. Using the law we can predict the future of a person.
Implication of Pattern Recognition: Pattern recognition implies the following three things;
1. The object has been cognized earlier, or the picture/description of the object has been
cognized earlier.
2. The earlier details of cognition are stored.
3. The object is encountered again, at which time it is to be recognized.
Coverage of Pattern Recognition: Pattern Recognition covers a wide spectrum of disciplines
such as;
1. Cybernetics
2. Computer Science
3. System Science
4. Communication Sciences
5. Electronics
6. Mathematics
7. Logic
8. Psychology
9. Physiology
10. Philosophy
Application of Pattern Recognition:
1. Medical diagnosis
2. Life form analysis
3. Sonar detection
4. Radar detection
5. Image processing
6. Process control
7. Information management systems
8. Aerial photo interpretation
9. Weather prediction
10. Sensing of life on remote planets
11. Behavior analysis
12. Character recognition
13. Speech and speaker recognition

Fig. 5.3: Components of a Typical Pattern recognition System


Control Strategies:
1. The first requirement of a good control strategy is that it causes motion. Suppose we
implemented the simple control strategy of starting each time at the top of the list of rules
and choosing the first applicable one. If we did that, we would never solve the problem.
2. The second requirement of a good control strategy is that it be systematic. Consider, for
example, choosing at random from among the applicable rules on each cycle. This strategy is
better than the first: it causes motion and it will lead to a solution eventually. But we are likely
to arrive at the same state several times during the process and to use many more steps than
are necessary. Because the control strategy is not systematic, we may explore a particular
useless sequence of operators several times before we finally find a solution. The requirement
that a control strategy be systematic corresponds to the need for global motion as well as for
local motion.

Heuristics:
1. To solve larger problems, domain-specific knowledge must be provided to improve the
search efficiency.
2. Heuristic: any advice that is often effective but is not always guaranteed to work.
3. Heuristic evaluation function h(n);
Estimates the cost of an optimal path between two states
Must be inexpensive to calculate
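A classic example of such a function is the Manhattan distance on a grid. The Python sketch
below is illustrative only and not tied to any specific problem in these notes.

def h(n, goal):
    # Estimated cost of an optimal path from node n to the goal on a 4-connected
    # grid: cheap to compute and never overestimates the true path length.
    return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

print(h((1, 2), (4, 6)))   # -> 7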

Heuristic Search Techniques: There are a number of methods used in heuristic search
techniques;
1. Depth-first search
2. Breadth-first search
3. Hill climbing
4. Generate-and-test
5. Best-first search
6. Problem reduction
7. Constraint satisfaction
8. Means-ends analysis
Depth-First Search Technique: Depth-first search works by taking a node, checking its neighbors,
expanding the first node it finds among the neighbors, checking whether that expanded node is our
destination, and, if not, continuing to explore more nodes.
Consider the following demonstration of finding a path between A and F;

Fig. 5.4: Depth First Search


1. Step 0: Let's start with our root (start) node:
We can use two lists to keep track of what we are doing - an Open list and a Closed List. An
Open list keeps track of what you need to do, and the Closed List keeps track of what you
have already done. Right now, we only have our starting point, node A. We haven't done
anything to it yet, so let's add it to our Open list.
Open List: A
Closed List: <empty>
2. Step 1: Now, let's explore the neighbors of our A node. To put another way, let's take the first
item from our Open list and explore its neighbors:

Node A's neighbors are the B and C nodes. Because we are now done with our A node, we
can remove it from our Open list and add it to our Closed List. You aren't done with this step
though. You now have two new nodes B and C that need exploring. Add those two nodes to
our Open list
Our current Open and Closed Lists contain the following data:
Open List: B, C
Closed List: A
3. Step 2: Our Open list contains two items. For depth first search and breadth first search, you
always explore the first item from our Open list. The first item in our Open list is the B node.
B is not our destination, so let's explore its neighbors:

4. Step 3: Because we have now expanded B, we are going to remove it from the Open list and
add it to the Closed List. Our new nodes are D and E, and we add these nodes to the
beginning of our Open list:
Open List: D, E, C
Closed List: A, B
Because D is at the beginning of our Open List, we expand it. D isn't our destination, and it
does not contain any neighbors. All you do in this step is remove D from our Open List and
add it to our Closed List:
Open List: E, C
Closed List: A, B, D
5. Step 4: We now expand the E node from our Open list. E is not our destination, so we
explore its neighbors and find out that it contains the neighbors F and G. Remember, F is our
target, but we don't stop here though. Despite F being on our path, we only end when we are
about to expand our target Node - F in this case

Our Open list will have the E node removed and the F and G nodes added. The removed E
node will be added to our Closed List:
Open List: F, G, C
Closed List: A, B, D, E
6. Step 5: We now expand the F node. Since it is our intended destination, we stop:

We remove F from our Open list and add it to our Closed List. Since we are at our
destination, there is no need to expand F in order to find its neighbors.
Our final Open and Closed Lists contain the following data:
Open List: G, C
Closed List: A, B, D, E, F
The final path taken by our depth-first search method is the final value of our Closed
List: A, B, D, E, F.
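The walkthrough above maps directly onto a few lines of Python. This is a minimal sketch using
the same Open and Closed lists; the graph dictionary encodes the example figure (A-B, A-C,
B-D, B-E, E-F, E-G).

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": [],
         "D": [], "E": ["F", "G"], "F": [], "G": []}

def depth_first_search(graph, start, goal):
    open_list, closed_list = [start], []
    while open_list:
        node = open_list.pop(0)        # always take the first item from the Open list
        closed_list.append(node)
        if node == goal:               # stop when we are about to expand the target
            return closed_list
        # new neighbours go to the FRONT of the Open list: depth-first behaviour
        new = [n for n in graph[node] if n not in closed_list and n not in open_list]
        open_list = new + open_list
    return None

print(depth_first_search(graph, "A", "F"))   # -> ['A', 'B', 'D', 'E', 'F']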
Generate and Test Strategy:
1. The generate-and-test strategy is the simplest of all the approaches. It consists of the
following steps;
i. Generate a possible solution. For some problems, this means generating a particular point
in the problem space. For others, it means generating a path from a start state.
ii. Test to see if this is actually a solution by comparing the chosen point or the end point of
the chosen path to the set of acceptable goal states.
iii. If a solution has been found, quit. Otherwise return to step i.
2. This procedure could lead to an eventual solution within a short period of time if done
systematically.
3. However, if the problem space is very large, reaching a solution may take a very long time.
4. The generate-and-test algorithm is a depth-first search procedure, since complete solutions
must be generated before they can be tested.
5. It can also operate by generating solutions randomly, but then there is no guarantee that a
solution will ever be found.
6. It is known as the British Museum algorithm, in reference to a method of finding an object
in the British Museum by wandering around.
7. For simple problems, exhaustive generate-and-test is often a reasonable technique (see the
sketch below).
8. For much harder problems, even heuristic generate-and-test is not a very effective technique
on its own.
9. When combined with other techniques that restrict the space in which to search even further,
the technique can be very effective.
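A minimal Python sketch of systematic generate-and-test; the toy problem (find two digits with
a given sum and product) is illustrative only.

from itertools import product

def generate_and_test(generate, test):
    # Systematically generate candidates; return the first one that passes the test.
    for candidate in generate():
        if test(candidate):
            return candidate
    return None

# Example: find digits a, b with a + b == 10 and a * b == 21
solution = generate_and_test(
    lambda: product(range(10), repeat=2),
    lambda c: c[0] + c[1] == 10 and c[0] * c[1] == 21)
print(solution)   # -> (3, 7)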
Hill Climbing:
1. In hill climbing the basic idea is to always head towards a state which is better than the
current one.
2. So, if you are at town A and you can get to town B and town C (and your target is town D)
then you should make a move IF town B or C appear nearer to town D than town A does.
3. The hill-climbing algorithm chooses as its next step the node that appears to place it closest
to the goal.
4. It derives its name from the analogy of a hiker being lost in the dark, halfway up a mountain.
Assuming that the hiker's camp is at the top of the mountain, even in the dark the hiker
knows that each step that goes up is a step in the right direction.
5. The simplest way to implement hill climbing is as follows:
i. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise
continue with the initial state as the current state.
ii. Loop until a solution is found or until there are no new operators left to be applied in the
current state:
a. Select an operator that has not yet been applied to the current state and apply it to
produce a new state.
b. Evaluate the new state.

Steepest Hill Climbing:
1. Consider all the moves from the current state and select the best one as the next state.
2. In steepest-ascent hill climbing you will always make your next state the best successor of
your current state, and will only make a move if that successor is better than your current
state.
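The following minimal Python sketch implements steepest-ascent hill climbing; the
one-dimensional objective function and the step size are illustrative only.

def steepest_ascent(start, successors, value):
    # Always move to the best successor, but only if it beats the current state.
    current = start
    while True:
        candidates = successors(current)
        best = max(candidates, key=value) if candidates else current
        if value(best) <= value(current):   # no better successor: goal or local maximum
            return current
        current = best

# Example: maximize f(x) = -(x - 5)^2 over the integers, stepping +/- 1
f = lambda x: -(x - 5) ** 2
print(steepest_ascent(0, lambda x: [x - 1, x + 1], f))   # -> 5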

Forward and Backward Reasoning: The object of a search procedure is to discover a path
through a problem space from an initial configuration to a goal state. There are actually two
directions in which such a search could proceed;
Forward, from the start states
Backward, from the goal states
The production system model of the search process provides an easy way of viewing forward
and backward reasoning as symmetric processes. Consider the problem of solving a particular
instance of the 8-puzzle. The rules to be used for solving the puzzle can be written as shown in
fig. 5.5. Using those rules we could attempt to solve the puzzle in one of two ways;

Fig. 5.5: A Sample of the rules for solving the 8-puzzle


1. Reason forward from the initial states: Begin building a tree of move sequences that might
be solutions by starting with the initial configuration(s) at the root of the tree. Generate the
next level of tree by finding all the rules whose left sides match the root node and using their
right sides to create the new configurations. Generate the next level by taking each node
generated at the previous level and applying to it all of the rules whose left sides match it.
Continue until a configuration that matches the goal state is generated.
2. Reason backward from the goal states: Begin building a tree of move sequences that might
be solutions by starting with the goal configuration(s) at the root of the tree. Generate the
next level of tree by finding all the rules whose right sides match the root node. These are all
the rules that, if only we could apply them, would generate the state we want. Use the left
sides of the rules to generate the nodes at this second level of the tree. Generate the next level

of tree by taking each node at the previous level and finding all the rules whose right sides
match it. Then use the corresponding left sides to generate the new nodes. Continue until a
node that matches the initial state is generated. This method of reasoning backward from the
desired final state is often called goal directed reasoning.
Notice that the same rules can be used both to reason forward from the initial state and to reason
backward from the goal state. To reason forward, the left sides are matched against the current
state and the right sides are used to generate new nodes until the goal is reached. To reason
backward, the right sides are matched against the current node and the left sides are used to
generate new nodes representing new goal states to be achieved. This continues until one of these
goal states is matched by an initial state.
Four factors influence the question of whether it is better to reason forward or backward;
1. Are there more possible start states or goal states? We would like to move from the smaller
set of states to the larger set of states.
2. In which direction is the branching factor (i.e. average no. of nodes that can be reached
directly from a single node) greater? We would like to proceed in the direction with the lower
branching factor.
3. Will the program be asked to justify its reasoning process to a user? If so, it is important to
proceed in the direction that corresponds more closely with the way the user will think.
4. What kind of event is going to trigger a problem-solving episode? If it is the arrival of a new
fact, forward reasoning makes sense. If it is a query to which a response is desired, backward
reasoning is more natural.
Attribute               Backward Chaining                 Forward Chaining
Also known as           Goal-driven                       Data-driven
Starts from             Possible conclusion               New data
Processing              Efficient                         Somewhat wasteful
Aims for                Necessary data                    Any conclusion(s)
Approach                Conservative/cautious             Opportunistic
Practical if            Number of possible final          Combinatorial explosion creates
                        answers is reasonable or a        an infinite number of possible
                        set of known alternatives         right answers
                        is available
Appropriate for         Diagnostic, prescription and      Planning, monitoring, control
                        debugging applications            and interpretation applications
Reasoning               Top-down reasoning                Bottom-up reasoning
Type of search          Depth-first search                Breadth-first search
Who determines search   Consequents determine search      Antecedents determine search
Flow                    Consequent to antecedent          Antecedent to consequent

Table 5.1: Comparative Study of Forward and Backward Chaining
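A minimal Python sketch of the data-driven (forward-chaining) side of this comparison; the
rules and facts below are made up for illustration, not taken from a real expert system.

def forward_chain(facts, rules):
    # Repeatedly fire rules whose antecedents are all known facts (data-driven),
    # adding consequents until no rule produces anything new.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)        # flow: antecedent to consequent
                changed = True
    return facts

rules = [({"has_motor", "has_spindle"}, "machine_tool"),
         ({"machine_tool", "computer_controlled"}, "cnc_machine")]
print(forward_chain({"has_motor", "has_spindle", "computer_controlled"}, rules))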

Search Algorithms:
1. Basic elements of problem definition (states and actions);
Initial state(s)
Set of possible actions (operators)
State space (the set of all states reachable from the initial state by any sequence of
actions; usually implicit)
Goal test (or an explicit set of possible goals)
Cost of the actions
2. Aim: to find a path from an initial state to a goal state (one that satisfies the goal test).
3. Search space;
Generated on the fly
Represented by a directed acyclic graph (search tree)
4. Some notations: node/arc, root, parent/children, branching factor, level, leaf, goal node,
path, cost.
5. Conflicts: search strategies are needed for decision making.
6. Search strategies can be grouped according to modifiability;
a) Irrevocable strategies/non-modifiable strategies;
No possibility of withdrawal/modification of a selected action
No possibility of going back on the path from the start node (to the goal node)
The algorithm stores information only about the actual node on the path (without any
earlier branch)
The applicable actions = the actions applicable to the actual node; selection, with local
knowledge, of the most promising child
Finding an (optimal) solution is not guaranteed
b) Tentative strategies/modifiable strategies;
Able to recognize the erroneous or improper application of an action
The algorithm can go back to an earlier state to try a new direction when it
reaches a stage which does not lead to a goal state, or
it does not seem promising to resume the search in that direction
7. Search strategies can be grouped according to the knowledge used;
a) Random search;
Goal achievement is not ensured in finite time
b) Blind search/uninformed search;
All of the paths are traversed in a systematic way
No information about the goodness of a path or node
The algorithm distinguishes a goal state from a non-goal state
c) Heuristic search/informed search;
Uses specific knowledge about the given problem (a heuristic)
Estimates the distance from a node to a solution

Game Playing:
1. Game Playing in Artificial Intelligence refers to techniques used in computer and video
games to produce the illusion of intelligence in the behavior of non-player characters (NPCs).
2. Hacks and cheats are acceptable and, in many cases, the computer abilities must be toned
down to give human players a sense of fairness e.g. racing and shooting.
3. Emphasis of game AI is on developing rational agents to match or exceed human
performance.
4. AI has continued to improve, with aims set on a player being unable to tell the difference
between computer and human players - remember the Turing test?
5. A game must feel natural;
Obey laws of the game
Characters aware of the environment
Path finding (A* algorithm)
Decision making
Planning
6. Game bookkeeping, scoring.
7. ~50% of game project time is spent on building AI.
8. Game playing is a search problem defined by:
Initial state
Successor function
Goal test
Path cost / utility / payoff function
Why are games relevant to AI?
Games are fun!
They have limited, well-defined rules
They provide advanced, existing test-beds for developing several ideas and techniques that
are useful elsewhere.
They are one of the few domains that allow us to build agents.
Studying games teaches us how to deal with other agents trying to foil our plans
Huge state spaces: games are highly complex! Usually, there is not enough time to work
out the perfect move, e.g. Go and Chess
Nice, clean environment with clear criteria for success
Game playing is considered an intelligent human activity.
AI has always been interested in abstract games.
Games present an ideal environment where hostile agents may compete.
The Illusion of Human Behaviour:
Game AI is about the illusion of human behaviour;
Smart, to a certain extent (creativity)
Non-repeating behaviour
Unpredictable but rational decisions
Emotional influences (irrationality, personality)
Body language to communicate emotions
Being integrated in the environment
Game AI needs various computer science disciplines;
Knowledge Based Systems
Machine Learning
Multi-agent Systems
Computer Graphics & Animation
Data Structures

Computer Games Types:
1. Strategy Games
Real-Time Strategy (RTS)
Turn-Based Strategy (TBS)
Helicopter view
2. Role-Playing Games (RPG)
Single-Player
Multi-Player (MMORPG)
3. Action Games
First-Person Shooters (FPS)
First-Person Sneakers
4. Sports Games
5. Simulations
6. Adventure Games
7. Puzzle Games
Minimax Algorithm:
1. General method for determining optimal move.
2. Generate complete game tree down to terminal states.
3. Compute utility of each node bottom up from leaves toward root.
4. At each MAX node, pick the move with maximum utility.
5. At each MIN node, pick the move with minimum utility (assumes opponent always acts
correctly to minimize utility).
6. When reach the root, optimal move is determined.
7. Can be performed using a depth-first search.
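Steps 2 to 7 condense into a short recursive Python sketch; the example tree (a MAX root over
three MIN nodes, with utilities at the leaves) is illustrative only.

def minimax(node, maximizing):
    # Leaves carry utilities; internal nodes are lists of children.
    if not isinstance(node, list):
        return node                                    # terminal state: its utility
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)  # MAX picks max, MIN picks min

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))   # MIN values are 3, 2, 2, so MAX chooses 3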

Fig. 5.6: Minimax Algorithm

Fig. 5.7: Minimax Computation


Determining Cutoff:
1. Search to a uniform depth (ply) d.
2. Use iterative deepening to continue search to deeper levels until time runs out (anytime
algorithm).
3. Could end in states that are very dynamic (not quiescent) in which evaluation could change
quickly, as in (d) below.
4. Continue quiescence search at dynamic states to improve utility estimate.
5. Horizon problem: Inevitable problems can be pushed over the search boundary.
6. Example: Delay inevitable queening move by pawn by exploring checking moves.

Fig. 5.8: Determining Cutoff

Structured Representation of Knowledge: Representing knowledge using a logical
formalism, like predicate logic, has several advantages. It can be combined with powerful
inference mechanisms like resolution, which makes reasoning with facts easy. But complex
structures of the world (objects and their relationships, events, sequences of events, etc.)
cannot be described easily using a logical formalism.
A good system for the representation of structured knowledge in a particular domain should
possess the following four properties;
1. Representational Adequacy: The ability to represent all kinds of knowledge that are needed
in that domain.
2. Inferential Adequacy: The ability to manipulate the represented structure and infer new
structures.
3. Inferential Efficiency: The ability to incorporate additional information into the knowledge
structure that will aid the inference mechanisms.
4. Acquisitional Efficiency: The ability to acquire new information easily, either by direct
insertion or by program control.
The techniques that have been developed in AI systems to accomplish these objectives fall under
two categories;
1. Declarative Methods: In these, knowledge is represented as a static collection of facts which
are manipulated by general procedures. Here the facts need to be stored only once, and they
can be used in any number of ways. Facts can be easily added to declarative systems without
changing the general procedures.
2. Procedural Methods: In these, knowledge is represented as procedures. Default reasoning and
probabilistic reasoning are examples of procedural methods. In these, heuristic knowledge of
how to do things efficiently can be easily represented.
In practice most of the knowledge representation employs a combination of both. Most of the
knowledge representation structures have been developed to handle programs that handle natural
language input. One of the reasons that knowledge structures are so important is that they
provide a way to represent information about commonly occurring patterns of things. Such
descriptions are sometimes called schema. One definition of schema is: Schema refers to an
active organization of the past reactions, or of past experience, which must always be supposed
to be operating in any well adapted organic response.
By using schemas, people as well as programs can exploit the fact that the real world is not
random. There are several types of schemas that have proved useful in AI programs. They
include;
1. Frames: Used to describe a collection of attributes that a given object possesses (e.g.
description of a chair).
2. Scripts: Used to describe common sequence of events (e.g. a restaurant scene).
3. Stereotypes: Used to describe characteristics of people.
4. Rule models: Used to describe common features shared among a set of rules in a production
system.
Frames and scripts are used very extensively in a variety of AI programs. Before selecting any
specific knowledge representation structure, the following issues have to be considered.

1. The basic properties of objects, if any, which are common to every problem domain must be
identified and handled appropriately.
2. The entire knowledge should be represented as a good set of primitives.
3. Mechanisms must be devised to access relevant parts in a large knowledge base.

Expert Systems in Manufacturing: In artificial intelligence, an expert system is a computer
system that emulates the decision-making ability of a human expert. Expert systems are designed
to solve complex problems by reasoning about knowledge, represented primarily as if-then rules
rather than through conventional procedural code.
An expert system is a computer program that draws on the expert advice of humans, coded as a
series of rules, and applies it to a specially structured knowledge base containing information
about a real or hypothetical situation.
The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert
systems were among the first truly successful forms of AI software.
Expert systems are;
Knowledge based systems
Part of the Artificial Intelligence field
Computer programs that contain some subject-specific knowledge of one or more human
experts
Systems that utilize reasoning capabilities and draw conclusions.
Major Components of an Expert System: Every expert system consists of four principal parts;
1. The rule base or knowledge base:
The knowledge base is the collection of facts and rules which describe all the knowledge
about the problem domain
Contains everything necessary for understanding, formulating and solving a problem.
Stores all relevant information, data, rules, cases, and relationships used by the expert
system.
2. Working storage:
Contains facts about a problem that are discovered during consultation with the expert
system.
System matches this information with knowledge contained in the knowledge base to
infer new facts.
The conclusions reached will enter the working memory.
3. The inference engine:
The inference engine is the part of the system that chooses which facts and rules to apply
when trying to solve the user's query.
It taps the knowledge base and working memory to derive new information and solve
problems.
The inference engine is a computer program designed to produce reasoning on rules.
It is based on logic.
4. User interface:
The user interface is the part of the system which takes in the user's query in a readable
form and passes it to the inference engine. It then displays the results to the user.
The user communicates with the expert system through the user interface.

It allows the user to query the system, supply information and receive advice.
The aims are to provide the same form of communication facilities provided by the
expert.
The code that controls the dialog between the user and the system.
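The four parts can be sketched in a few lines of Python. This is a toy illustration, not a real
expert system shell: the rules, the facts, and the car-diagnosis domain are all made up for the
example.

# Rule base / knowledge base: (conditions on facts) -> advice
knowledge_base = [
    ({"engine_cranks": False}, "check the battery"),
    ({"engine_cranks": True, "engine_starts": False}, "check the fuel supply"),
]

def inference_engine(working_storage, rule_base):
    # Match facts in working storage against rule antecedents; collect advice.
    return [advice for conditions, advice in rule_base
            if all(working_storage.get(k) == v for k, v in conditions.items())]

# "User interface": supply facts, then display the system's conclusions
working_storage = {"engine_cranks": True, "engine_starts": False}
for advice in inference_engine(working_storage, knowledge_base):
    print("Advice:", advice)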

Fig. 5.9: Structure of an Expert System


Issues                Human Expert            Expert System
Availability          Limited                 Always
Geographic location   Locally available       Anywhere
Durability            Depends on individual   Non-perishable
Performance           Variable                High
Speed                 Variable                High
Cost                  High                    Low
Learning ability      Variable/High           Low
Explanation           Variable                Exact

Table 5.2: Comparative Study of Human Expert and Expert System


Types of Expert System:
1. Rule-based Systems
Knowledge represented by series of rules
2. Frame-based Systems
Knowledge represented by frames
3. Hybrid Systems
Several approaches are combined, usually rules and frames
4. Model-based Systems
Models simulate structure and functions of systems

5. Off-the-shelf Systems
Readymade packages for general use
6. Custom-made Systems
Meet specific need
Advantages of Expert System: The various advantages of an expert system are as follows;
Quick availability
Reduce employee training costs
Reduce the time needed to solve problems.
Combine multiple human expert intelligences
Reduce the amount of human errors.
Never "forgets" to ask a question,
Ability to solve complex problems
Consistent answers for repetitive decisions, processes and tasks
Excellent Performance
Provide Explanation
Fast response
