
Int J Adv Manuf Technol (2009) 41:932–947

DOI 10.1007/s00170-008-1536-z

ORIGINAL ARTICLE

Automatic recognition of machining features using artificial neural networks
V. B. Sunil & S. S. Pande

Received: 2 January 2008 / Accepted: 18 April 2008 / Published online: 27 May 2008
© Springer-Verlag London Limited 2008

Abstract We report on the development of an intelligent system for recognizing prismatic part machining features from CAD models using an artificial neural network. A unique 12-node vector scheme has been proposed to represent machining feature families having variations in topology and geometry. The B-Rep CAD model in ACIS format is preprocessed to generate the feature representation vectors, which are then fed to the neural network for classification. The ANN-based feature-recognition (FR) system was trained with a large set of feature patterns and optimized for its performance. The system was able to efficiently recognize a wide range of complex machining features allowing variations in feature topology and geometry. The data of the recognized features was post-processed and linked to a feature-based CAPP system for CNC machining. The FR system provided seamless integration from CAD model to CNC programming.

Keywords Intelligent feature recognition . Artificial neural networks . CAD model

Nomenclature
n    Number of vertices shared by a face
ni   Node i of input vector
E    Number of edges
Es   Edge score
F    Number of faces
Fg   Face geometry score
Fs   Face score
V    Number of vertices
Vs   Vertex score

V. B. Sunil : S. S. Pande (*)
Computer Aided Manufacturing Laboratory,
Mechanical Engineering Department,
Indian Institute of Technology Bombay,
Powai, Mumbai 400076, India
e-mail: s.s.pande@iitb.ac.in

1 Introduction

Today, features technology has become the de-facto standard for product modeling in CAD/CAM. Two popular approaches have been pursued, viz. design by features and feature recognition (FR) from CAD models [1]. The design-by-features approach provides a very user-friendly environment to create a CAD model in terms of part functional features. It is, however, domain specific and application dependent.

The need for feature recognition is particularly felt when application-specific features are to be recognized from a CAD model depending upon the intended end application. Quite often the entire part CAD model may not have been constructed with features, or may have been created in a non-feature-based system, which necessitates feature recognition. During the transfer of the data for a CAD part model, usually only low-level information such as faces and edges is transferred, and the feature information is often lost in the target CAD system [2]. In such situations, feature recognition is required to recreate the design tree (feature tree) in the target system after data translation. In collaborative product design systems, product data sent across the limited bandwidth of the Internet often has to be in a compact form, such as a simplified B-Rep, polygonal representation [3] or mesh-based solid model representation [4]. Depending on the level of detail/application view required, high-level functional entities (features) need to be recognized from the product data at the server/client side to create the 'view-dependent' model for the client.

2 Literature review

Literature reveals that during the past two decades researchers have proposed and implemented various machining feature-recognition systems using rule-based [5], syntactic pattern recognition [6], graph-based [7], volume decomposition-based [8], hint-based [9] and hybrid approaches [10]. Comprehensive reviews of the various feature-recognition techniques have been reported in the literature [11, 12]. The above feature-recognition techniques are beset with problems, such as the inability to learn and generalize, lack of tolerance to errors in input, computational intensity and inflexibility to recognize new feature types, which restrict their practical applications. The reported FR techniques had limited success in handling different feature variations (variable topology, etc.). In the case of the subgraph-matching recognition technique, the sheer variation in geometry and/or topology resulted in an overwhelmingly large number of graphs to be matched to find a feature [13]. Graph grammars use a set of rules of subgraph matching, which is computationally expensive. Other techniques go by exhaustive rules and geometric reasoning to tackle the conflicts appearing in their feature-recognition process.

Researchers of late are focusing upon developing ANN-based feature-recognition techniques that are expected to overcome some of the limitations of the above feature-recognition techniques. The advantages of ANN-based feature recognition include the ability to learn and generalize feature patterns, the ability to tolerate noise in an input pattern, and the ability to recognize similar/unknown feature patterns once the network is trained. ANN-based systems are relatively faster because the process is limited to simple mathematical computations and does not use a search or logical rules. There is no need to predefine almost every instance of a feature and perform a sequence of logical checks. ANN, thus, offers great promise for the FR problem.

Prabhakar and Henderson [14] reported the use of a five-layer quasi-neural net for FR. An encoded part face adjacency matrix scheme was used as the input to the network. The system could not differentiate between features having the same topology but different geometry, such as a counterbore and a spot face. In their independent works, Hwang and Henderson [15] and Ozturk and Ozturk [16] proposed the use of an eight-element face score vector as input to the ANN-based FR system. A single-layer network configuration was used by Hwang and Henderson [15] while Ozturk and Ozturk [16] used a multiple-layer perceptron network. The method could recognize a very limited number of features. However, the uniqueness of the feature patterns was not guaranteed due to the lack of one-to-one correspondence between feature patterns and features. Gu et al. [17] used a multi-layer feed-forward neural network for FR. Attributed adjacency matrices (AAM) were used as input data for the network to train and recognize feature patterns. The system recognized only fixed-topology features such as step, slot, blind step, blind slot, pocket, inverted dovetail slot, and blind hole. Lankalapalli et al. [18] proposed a nine-element face score vector to represent the features and used the ART2 neural network architecture for FR. In a separate work, Onwubolu [19] used the same nine-element face score vector scheme proposed by Lankalapalli et al. [18] to represent features and used a multi-layer feed-forward back-propagation network. Both works were able to recognize a limited set of features with a fixed number of faces (fixed topology). Nezis and Vosniakos [20] represented the features using a binary vector of 20 elements that was computed by querying the layout of the adjacency matrix of the corresponding feature. A multi-layer feed-forward back-propagation network was used for FR. The basic limitation of the work was the heuristics used which, to a certain extent, pre-recognized the features. The system proposed by Zulkifli and Meeran [21] was limited to simple feature types having their cross section defined by four rectangular vertices and circular feature types defined by a scheme of eight vertices. Features were represented in matrix form using the pattern of edges and vertices forming the feature. Four separate multi-layer feed-forward networks were required for the recognition. Li et al. [22] proposed a hybrid method combining an ART2 network with feature hints and graph manipulations to recognize interacting machining features. A unified F-Loop was defined as a generic hint, and graph manipulation on this was carried out to generate potential features. The obtained potential features are represented as F-Loop graphs and given as input to the ART2 network for FR. It was limited to recognizing basic types of features with a fixed number of faces. Ding and Yue [23] presented a hybrid feature-recognition method that incorporated a design-by-features approach, the use of ANN techniques, and a heuristic algorithm. Two multi-layer feed-forward back-propagation networks were used, taking F-adjacency and V-adjacency matrices as inputs, respectively. The ANN method was able to recognize variable topology features, but was unable to differentiate between same-topology, different-geometry features. Chakraborty and Basu [24] proposed binary strings to represent features. The work was too feature specific, being limited to very simple slots/steps defined by four rectangular vertices and circular pockets defined by eight vertices. Lam and Wong [25] presented a hybrid system combining a neural network-based technique with graph-based and volume-decomposition techniques. Here the neural network was used as a function approximation tool rather than for recognizing features. Smith and Dagli [26] and Chen and Lee [27] described the use of ANN for 2D feature recognition. Jun et al. [28] proposed a neural network

system for extracting geometric features directly from a set of 3D scanned points.

The ANN-based feature-recognition systems discussed above have various limitations. Most of the systems target a limited set of simple features, such as rectangular pockets, blind/through steps and blind/through slots that can be defined by four rectangular vertices, or features with a fixed number of faces, and do not consider feature topology and geometry variations. Few systems handle variable topology features, and those that do, do not consider shape-based feature classification. Many systems use feature-specific representation schemes which have limitations such as ambiguity, non-uniqueness, and the need for more networks for recognition, thus increasing the time and effort in training and testing the networks. Some systems need heuristics to find candidate faces, effectively amounting to pre-recognition of certain features. A need thus exists to develop an efficient ANN-based FR system that can identify a large variety of machining features with varying topology and geometry.

In the present work, an intelligent feature-recognition system using a back-propagation neural network has been designed, implemented, and rigorously tested. Design and development issues of this system are discussed in the sections to follow.

The remainder of the paper is organized as follows. Section 3 presents the basic concepts of features and the feature taxonomy for prismatic machined parts handled in this research work. The approach for FR is presented in Sect. 4. Section 5 presents the developed ANN-based FR system in detail in terms of the algorithm and associated modules. Section 6 presents the results and discussions with various case studies. Conclusions from this research work are presented in Sect. 7.

3 Basic concepts

3.1 Definition of feature

A feature is a region of functional interest on a part. It can be defined as a set of connected faces having a certain characteristic combination of topology and geometry which suggests some activity of significance in the design-manufacturing domain. The topological characteristics involve entities like faces, edges, vertices, and their connectivity, while the geometrical characteristics involve edge type, face type, edge convexity, etc., and relationships between faces such as parallelism, perpendicularity, etc.

3.2 Feature variations

There are many topology- and/or geometry-related variations possible within a feature type (Fig. 1). Three important categories include:

– Features with variable topology (variable number of faces, Fig. 1a).
– Features with the same shape but different topology (Fig. 1b,c).
– Features with the same topology but different shape (Fig. 1d).

Such variations of topology and geometry within a feature type make the FR problem difficult. Compared to conventional FR techniques such as rule-based, graph-based, syntactic pattern recognition, etc., ANN offers great promise here due to its characteristics of learning and generalization for pattern recognition.

Fig. 1 Feature variations: (a) different topology, different shape (variable topology features: pocket with 8 faces, pocket with 12 faces); (b), (c) different topology, same shape (rectangular pocket with 4 faces vs. 5 faces; through hole with 1 face in the ACIS 7 representation vs. 2 faces in the ACIS 15 representation); and (d) same topology, different shape (rectangular pocket, obround pocket)

3.3 Feature taxonomy

The present work is targeted at the recognition of 2.5D prismatic non-interacting depression machining features belonging to six feature families, viz. pockets, passages, blind slots, through slots, blind steps, and through steps, with a variable number of faces. In addition, two fixed-topology features, blind holes and through holes, have also been addressed. Figure 2a–c shows the feature taxonomy along with the topological information of each feature, viz. faces, edges, vertices, and the connectivity between faces represented using a face adjacency graph (FAG).

The AAG/FAG-based FR approaches reported in the literature have problems in dealing with feature variations such as features having variable topology, features having the same shape but different topology, and features having the same topology but different shapes (Sect. 3.2). These feature variations lead to a large search space for the graph-based FR technique [13], as each feature variation leads to a new feature template graph to be matched. Research is also pursued in finding a meta-data structure to capture the kind of feature variation, viz. the same feature shape having different topology [12].

The present research work aims at capturing these feature variations using the learning and generalization capability of ANN. The ANN-based feature-recognition system developed in this work uniquely handles features having variable topology (like n-sided pockets, n-sided slots, etc.), features having the same shape but variable topology (like four-sided obround pocket, five-sided obround pocket, etc.), and also does shape-based feature classification (triangular, rectangular, obround, n-sided convex, n-sided concave features, etc.), which has not been extensively addressed in the literature. Basic characteristics of the features handled in the present work are listed below.

1. All the features can be geometrically defined by a bottom contour, sweep direction and extrusion height, similar to the machining feature definitions in STEP AP 224 [29].
2. The features can have both planar and cylindrical faces and are not restricted to planar polyhedral shapes.
3. A feature family is divided into shape-based features. For example, based on geometric shapes, the pocket feature family is divided into triangular, rectangular, and obround pocket shapes. For irregular shapes, the pocket family is divided into two sub-families, viz. n-sided convex and n-sided concave pockets, based on the convexity of the cross section of the pocket.
4. The blind hole is represented as two half-cylindrical faces and one bottom planar face, whereas the through hole does not have the bottom planar face (Fig. 2c).
5. In the case of blind/through hole features, only the simple blind/through hole is considered. Compound holes like counterbore and countersunk are not considered in the present work.
6. The features can have isothetic (non-orthogonal, inclined to coordinate axes) orientation on the face.

All the feature types with the various topology and geometry variations shown in Fig. 2a–c are recognized by the developed system.

4 Approach for feature recognition

The overall approach of the feature-recognition system followed in our work is enumerated below:

1. Input: The B-Rep models of both the part and its machining raw stock (in ACIS *.sat format) are used as the input to our system. The stock is not necessarily the convex hull of the part. Needless to state, a part face which is a convex hull face will also be a stock face, but a part face which is a stock face need not necessarily be a convex hull face. In the present work, the B-Rep model in ACIS format is used mainly because the ACIS kernel is widely used in industry for CAD model representation. Besides, it provides a rich set of APIs, which were very useful in preprocessing the input CAD model for creating feature FAGs (Sect. 5).
2. Preprocessing module: The various functional steps in this module are as under:
   – Construction of the part face adjacency graph (part FAG).
   – Part FAG decomposition into feature FAGs.
   – Creating a feature representation vector (FRV) for each feature FAG.
3. ANN feature recognizer: All the FRVs generated in the preprocessing module are presented to the ANN for feature classification.
4. Feature representation and application interface: For each feature classified by the ANN, the relevant feature parameters such as feature ID, feature type, shape, length, width, height, position, access directions, bottom contour, sweep direction, etc. are computed and stored to create a feature-based model representation in a native format. An application interface can translate this feature-based model file and link it to a downline application like a feature-based CAPP system for CNC machining.

Details of the various modules and the associated algorithms are discussed in the sections to follow.

Fig. 2 a–c Feature taxonomy and their face adjacency graphs (FAG); ANN output target values shown in brackets, F/E/V counts giving the topology range within each family.

(a)
Pocket feature family (from F = 4, E = 9, V = 6 up to F = 9, E = 24, V = 16): Triangular Pocket [0.10], Rectangular Pocket [0.12], Obround Pocket [0.14], N-Side Convex Pocket [0.16], N-Side Concave Pocket [0.18]
Passage feature family (from F = 3, E = 9, V = 6 up to F = 8, E = 24, V = 16): Triangular Passage [0.20], Rectangular Passage [0.22], Obround Passage [0.24], N-Side Convex Passage [0.26], N-Side Concave Passage [0.28]
Blind step feature family (from F = 2, E = 6, V = 5 up to F = 7, E = 21, V = 15): Chamfered Blind Step [0.30], Circular Blind Step [0.32], Rectangular Blind Step [0.34], N-Side Convex Blind Step [0.36], N-Side Concave Blind Step [0.38]

(b)
Through step feature family (from F = 2, E = 7, V = 6 up to F = 7, E = 22, V = 16): Rectangular Through Step [0.40], 2-Side Convex Through Step [0.42], 2-Side Concave Through Step [0.44], N-Side Convex Through Step [0.46], N-Side Concave Through Step [0.48]
Blind slot feature family (from F = 4, E = 11, V = 8 up to F = 8, E = 23, V = 16): Rectangular Blind Slot [0.50], Vertical Obround Blind Slot [0.52], Horizontal Obround Blind Slot [0.54], N-Side Convex Blind Slot [0.56], N-Side Concave Blind Slot [0.58]
Through slot feature family (from F = 1, E = 4, V = 4 up to F = 7, E = 22, V = 16): Circular Through Slot [0.60], V Through Slot [0.62], Rectangular Through Slot [0.64], N-Side Convex Through Slot [0.66], N-Side Concave Through Slot [0.68]
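Since the classifier has a single output neuron, the target values listed in Fig. 2 also suggest how a network output can be decoded back to a feature type. The nearest-target rule below is an assumption for illustration, not a rule stated by the authors, and only a subset of the 32 target values is shown:

```python
# Target values from Fig. 2 (subset shown for brevity).
TARGETS = {
    0.10: "Triangular Pocket",
    0.12: "Rectangular Pocket",
    0.14: "Obround Pocket",
    0.40: "Rectangular Through Step",
    0.64: "Rectangular Through Slot",
    0.70: "Simple Blind Hole",
    0.80: "Simple Through Hole",
}

def decode(y):
    # Map the single output-neuron value to the nearest target value.
    return TARGETS[min(TARGETS, key=lambda t: abs(t - y))]
```

For example, a raw output of 0.636 decodes to "Rectangular Through Slot" under this rule.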

Fig. 2 (continued) c Fixed-topology hole features: Simple Blind Hole (F = 3, E = 6, V = 4) [0.70] and Simple Through Hole (F = 2, E = 6, V = 4) [0.80]

5 Algorithm

5.1 Preprocessing of the part CAD model

The main aim of preprocessing the part CAD model is to generate feature representation vectors (FRVs) that can be fed to the ANN for feature recognition. The steps involved are enumerated below.

– Construction of the part face adjacency graph (part FAG): A part FAG is a representation of the entire part showing the face connectivity. In a FAG, a node corresponds to a face of the part and an arc represents the edge between two faces. For the given input part CAD model (in ACIS *.sat format), the part FAG is created using the ACIS API. The part FAG is represented in the computer in the form of a class containing pointers to objects of node and arc classes.
– Part FAG decomposition: In this step the part FAG is decomposed into feature FAGs using the following heuristic: a part face which is a stock face (i.e., a part face that is not machined out) is not a feature face and can be removed from the face adjacency graph. ACIS APIs/direct interfaces are used for this purpose. The steps involved are:
  – Identification of part faces that are stock faces. These are found by the boundary intersection of the input part and stock B-Rep (in ACIS *.sat format) models.
  – Deletion of the nodes belonging to the above-identified part stock faces, and the arcs incident to those nodes, from the part FAG.
  This procedure may result in several graphs or one single graph. These graphs are called feature face adjacency graphs (feature FAGs). The number of feature FAGs represents the number of likely depression features present on the part. The feature FAGs are then evaluated to get the feature representation vectors (FRVs), as discussed next.
– Creating a feature representation vector (FRV) for each feature FAG: The FRV is the heart of the ANN-based feature-recognition algorithm. The FRV represents the topology and geometry information of the set of connected faces in the feature FAG, encoded in the form of numbers. Most of the earlier FRV schemes could handle only a limited set of features with fixed topology and did not consider variable topology features and shape-based feature classification [17, 18, 21, 24]. In the present work, a new FRV scheme with 12 nodes has been proposed and developed that can handle variable topology features and also do shape-based feature classification. The details of the FRV are given in Sect. 5.2.1. For calculating the required topology and geometry information of the feature face set, the part B-Rep and part FAG are queried using the ACIS APIs and direct interfaces. For each feature FAG, the corresponding FRV is created and fed to the ANN for feature recognition and classification.

5.2 Feature recognition using ANN

This section deals with the issues related to using ANN for feature recognition, viz. design of the feature representation vector, selection of the artificial neural network, and optimization of the neural network by training and testing.

5.2.1 Feature representation vector scheme

The most important step in using ANN for feature recognition is to design a feature representation vector scheme that enables the ANN to classify the input feature patterns in an unambiguous manner. The FRV should represent the topology and geometry of the feature in a manner which can be fed to the ANN for classification. In this work, a unique 12-node feature representation vector has been designed (Table 1).

Table 1 Feature representation vector scheme

Node number   Description
1             Minimum face score value
2             Next minimum face score value
3-6           And so on (next distinct face score values)
7             Average face score value
8             Value of V + E − 5F
9             Number of 'Zero Transition' faces
10            Number of 'Four Transition' faces
11            Total number of faces
12            Number of external faces connected

Nodes 1 to 7, and 11, are used for feature shape and convexity classification, while nodes 8,

9, 10, and 12 are used for aiding feature family classification. The scheme is described in detail below.

Nodes 1 to 7 Nodes 1 to 7 represent values that are based on the face scores of the candidate faces in a chosen set. The face score is dependent upon the face geometry (planar, cylindrical face), edge geometry (convex/concave edge), and edge-vertex connectivity [15]. The face score is used as a measure of the face uniqueness or face complexity based upon the convexity or concavity of the surrounding region.

The following scores have been chosen for assignment to the edges and face geometry: convex edge (+0.5), concave edge (−0.5), plane face (0) and cylindrical face (−2.0). The numerical values chosen for the edge and face geometry scores reflect the geometric nature of the entities and hence the feature complexity. For example, a negative value for an edge is used to reflect the concavity of the edge. A more negative face score for a plane face would mean that it has a higher number of concave edges in its surroundings and would signify a depression feature. In short, the score values are chosen with a view to enabling wider separation between input feature patterns to aid classification. Numerical experiments were also carried out to see the sensitivity of the score value given to the cylindrical face. Different values, viz. −2, −1.5, −1 and −0.7, were tried. The networks showed good performance with a cylindrical face value of −2.

The vertex score is calculated by Vs = Es1 + Es2 + Es3, where Vs is the vertex score and Es1, Es2 and Es3 are the scores of the three edges that are incident on the vertex. The face score is given by

Fs = (Σ_{j=1..n} Vsj) / n + Fg        (1)

where n is the number of vertices shared by the face and Fg is the face geometry score value for the constituent face.

Figure 3 shows an example of a rectangular through slot. A typical face (face A) is shown separately with all its constituent vertices and edges. The concave and convex edges are shown by '0' and '1', respectively. The group of candidate faces obtained after processing the CAD part model will be faces A, B, and C. The FRV is generated for this set of candidate faces.

Fig. 3 Face score calculation and transition face for rectangular through slot

Using the above scheme, the vertex score of all four vertices of face A is 0.5 and, since face A is a planar face, the face geometry score value (Fg) is 0. The face score for face A = (Vs1 + Vs2 + Vs3 + Vs4)/4 + Fg = 0.5 + 0 = 0.5. Similarly, the face score for both faces B and C is 1.0, and the average face score value of A, B, and C is 0.833.

In nodes 1 to 6, only distinct face score values are stored, in ascending order. If the number of distinct face score values is less than 6, the remaining elements are given the value 1.5. It is assumed that a feature will not have more than six distinct face score values. This assumption was checked and found to be true for a large variety of features. If a feature has more than six distinct face score values, then only the first six in ascending order are kept. Node 7 stores the average face score. So nodes 1 to 7 for the rectangular through slot will be given the values [0.5, 1.0, 1.5, 1.5, 1.5, 1.5, 0.833, ...].

Node 8 Recognizing variable topology feature families is a complex task in feature recognition. To recognize a feature family, a topologically invariant attribute is required which can provide a differentiating criterion (contrast) between feature families. Node 8 is used to store such a topological attribute, viz. V + E − 5F, where V, E and F are the numbers of vertices, edges and faces in the set of connected candidate faces. This attribute has been derived from the Euler formula. Euler's formula gives a topologically invariant attribute for 3D polyhedral solids and is given as

V − E + F = 2, for a closed solid
          = 1, for one end open
          = 0, for both ends open

With Euler's expression, it can be seen that the passage feature family satisfies V − E + F = 0, whereas all other feature families (pocket, blind slot, through slot, blind step and through step) satisfy V − E + F = 1. Even though Euler's expression is topologically invariant within a feature family, it doesn't give contrast between feature families.

Table 2 Variation of number of faces (F), edges (E) and vertices (V)

Number of   Pocket   Passage   Blind step   Through step   Blind slot   Through slot
F           4 + x    3 + x     2 + x        2 + x          4 + x        1 + x
E           9 + 3x   9 + 3x    6 + 3x       7 + 3x         11 + 3x      4 + 3x
V           6 + 2x   6 + 2x    5 + 2x       6 + 2x         8 + 2x       4 + 2x
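The scoring scheme and the node 8 attribute can be checked with a short sketch. The edge assignment for face A's vertices (two convex edges and one concave edge each) is an assumption consistent with the stated vertex score of 0.5:

```python
EDGE_SCORE = {"convex": 0.5, "concave": -0.5}
FACE_GEOM_SCORE = {"planar": 0.0, "cylindrical": -2.0}

def vertex_score(edge_types):
    # Vs = Es1 + Es2 + Es3: scores of the three edges incident on the vertex.
    return sum(EDGE_SCORE[e] for e in edge_types)

def face_score(vertex_scores, geometry):
    # Eq. (1): Fs = (sum of vertex scores) / n + Fg.
    return sum(vertex_scores) / len(vertex_scores) + FACE_GEOM_SCORE[geometry]

def nodes_1_to_7(face_scores):
    # Nodes 1-6: distinct face scores in ascending order, padded with 1.5;
    # node 7: average face score of the candidate set.
    distinct = sorted(set(face_scores))[:6]
    distinct += [1.5] * (6 - len(distinct))
    return distinct + [round(sum(face_scores) / len(face_scores), 3)]

def node8(F, E, V):
    # Node 8 attribute; invariant within a family given Table 2's progression.
    return V + E - 5 * F

# Face A of the rectangular through slot: each vertex assumed to touch two
# convex edges and one concave edge, giving Vs = 0.5 per vertex.
fs_a = face_score([vertex_score(["convex", "convex", "concave"])] * 4, "planar")
frv_1_7 = nodes_1_to_7([fs_a, 1.0, 1.0])   # faces A, B, C

# Table 2 base terms (x = 0); adding x faces adds 3x edges and 2x vertices,
# so node8 is checked here at x = 2 for every family.
families = {"pocket": (4, 9, 6), "passage": (3, 9, 6), "blind step": (2, 6, 5),
            "through step": (2, 7, 6), "blind slot": (4, 11, 8),
            "through slot": (1, 4, 4)}
node8_values = {k: node8(F + 2, E + 6, V + 4) for k, (F, E, V) in families.items()}
```

Running this reproduces the stated values: face A scores 0.5, nodes 1 to 7 come out as [0.5, 1.0, 1.5, 1.5, 1.5, 1.5, 0.833], and node 8 evaluates to −5, 0, 1, 3, −1 and 3 for the six families regardless of x.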

On examining the variation in the numbers of faces (F), edges (E) and vertices (V) in each feature family, it was observed that they follow an arithmetic progression (AP) (Table 2).

It is seen that the addition of a face to a feature effectively results in the addition of three edges and two vertices. It is also seen that for each feature family the set of first terms in the AP for F, E and V is different; they however have the same common differences of 1, 3, and 2 for F, E and V, respectively. Observing this pattern, the relationship between F, E and V for all the feature families can be expressed by V + E − 5F. For the various feature families, this expression evaluates as follows (value in brackets): pocket (−5), passage (0), blind step (1), through step (3), blind slot (−1) and through slot (3).

Node 8 will, thus, aid the ANN in classifying the feature families except through slot and through step, which share the value 3. Other attributes, namely the number of four-transition faces (Node 10) and the number of external faces (Node 12), discussed further below, help in their unambiguous classification.

Node 9 This node stores information on the number of 'Zero Transition Faces'. The number of transitions is defined as the sum of all transitions from convex edge to concave edge, or vice versa, during a traversal along the boundary of a face [30]. The pocket feature family has one 'Zero Transition Face', which is the bottom face of the pocket (Fig. 4). The bottom face of the pocket has all concave edges, so there is zero transition. Other feature families don't have a 'Zero Transition Face' (except the circular through slot), and hence this attribute can aid the ANN in classifying the pocket feature family.

Fig. 4 Zero transition face in pocket

Node 10 This node stores the number of 'Four Transition Faces'. For the rectangular through slot (Fig. 3), only the bottom face A has two non-adjacent convex edges (v1–v2 and v3–v4) and two non-adjacent concave edges (v2–v3 and v4–v1). It will thus have four transitions (concave to convex) while traversing its boundary. The number of 'Four Transition Faces' would therefore be one. This is true for all through slots except the circular and V through slots. Other feature families either don't have a 'Four Transition Face' or have more than one (viz. passage).

Node 11 This node stores the total number of faces in the set of connected candidate faces that can form a feature (obtained after preprocessing of the CAD part model, Sect. 5.1). This attribute can aid the ANN in classifying sub-feature types within the feature families.

Node 12 This node stores the number of external faces attached to the set of connected candidate faces. Only those external faces are counted that lie on different surface geometries. The rectangular through slot feature shown in Fig. 3 has four external faces (E, F, G and D) connected to the candidate faces (A, B and C). But D and E lie on the same

Fig. 5 ANN architecture 12–N1–N2–1

surface geometry and are considered as one single external face. So, the total number of external faces will be three. A through step, from the geometry point of view, maps onto two corners of the part and has four external faces. The information in Node 12 will help the ANN in resolving the conflict between the through slot and through step feature families.

In summary, the 12-node input vector for the rectangular through slot feature shown in Fig. 3 will be [0.5, 1.0, 1.5, 1.5, 1.5, 1.5, 0.833, 3, 0, 1, 3, 3].

… be determined by any recipe [31]. It is a result of numerical experimentation. In this work, architectures with one and two hidden layers were considered. For both these architectures, the number of neurons in the hidden layers was varied to obtain the best learning and generalization performance. The output layer was chosen to have one neuron, which signifies the feature type. The various target values associated with each feature type are shown in brackets below each feature (Fig. 2a–c).

Node characteristics and learning rule Biases and weights are initialized randomly. They are updated by the learning rule after each epoch of the training. Log sigmoid, tan sigmoid, and pure linear transfer functions, most commonly used in BPNN, were tried out in different combinations; for example, using log sigmoid in all layers, or using tan sigmoid in the hidden layers and log sigmoid in the output layer, etc. It was found that using tan sigmoid in the hidden layers and log sigmoid in the output layer resulted in faster convergence and yielded better results. The gradient descent algorithm was used to train the neural network.

5.2.3 Training and testing of the network

Table 3 Parameters considered for training and testing of the network

Dataset
  Training set: 65x10 normalized input-target pairs
  Testing set: 14 normalized input-target pairs
Network architectures and network parameters
  Network architecture: 12–16–1, 12–24–1, 12–30–1, 12–24–16–1, 12–24–24–1
  Transfer function per layer: Log Sigmoid, Tan Sigmoid, Pure Linear
  BP algorithm: Gradient descent
Training parameters
  Learning rate: 0.4, 0.6, 0.8, 0.99
  Number of epochs: 60,000, 120,000, 180,000
Performance measures
  Training error: Max. training error; number of training set features not recognized
  Testing error: Max. testing error; number of testing set features not recognized

The primary aim of training and testing the network was to find a suitable network architecture and the network parameters of the multi-layer feed-forward back-propagation network (BPNN) that will enable learning all the input feature
In comparison to the FRV/matrix scheme in literature unknown feature patterns. As a result, numerical experi-
[14–24], it is felt that this 12-node feature RV will enable ments were carried out by varying network architecture and
unique and unambiguous representation of variable and network parameters like number of hidden layers, number
fixed topology features and do shape-based feature classi-
fication.

5.2.2 Selection of artificial neural network

In this work, a multi-layer feed-forward back-propagation


network (BPNN) has been chosen because it is known to
provide good learning and generalization capabilities for
pattern association and pattern recognition problems [31].
The details of network architecture, node characteristics,
and the learning rule are discussed below.

BPNN architecture Multi-layer feed-forward architectures


12-N-1 and 12-N1-N2−1 were selected for the task of
feature recognition. Figure 5 shows a 12-N1-N2−1 archi-
tecture, where N1 and N2 represents number of neurons in
hidden layer 1 and 2, respectively.
The input to the network is a 12-node vector that was
described in detail in Sect. 5.2.1. The number of hidden Fig. 6 Training curve of network 12–24T–24T–1L with lr = 0.6 and
layers as well as the number of neurons in each layer cannot epochs = 180,000
Further, the network was tested for its generalization capability by presenting it with unknown (unseen) feature patterns. Training and testing were done using the ANN Toolbox of MATLAB™. The important steps are discussed below.

Fig. 7 Case study 1 part – NIST

Table 4 List of features recognized – case study 1 part

Feature type  |  Number of instances
Rectangular pocket  |  7
Obround pocket  |  2
N-side convex pocket  |  3
N-side concave pocket  |  36
N-side concave blind step  |  3
Rectangular blind slot  |  3
N-side convex blind slot  |  2
N-side concave blind slot  |  9
N-side concave through slot  |  1
Blind hole  |  1
Total no. of features recognized  |  67

A) Preparation of Dataset (Training and Testing Set)

The initial dataset comprised 79 pairs of different input vectors (FRVs) corresponding to different features and their associated target values. Of these 79 pairs, 65 were used as the training set and the remaining 14 as the testing set. The training set was populated by replicating each of the 65 input-target pairs ten times. Normalization of the input vectors and target values was done to obtain good convergence. The order of the normalized input-target pairs in the training set was randomized before presenting it to the ANN.

Normalization of Input Vector The input vectors were normalized to the parameter range 0.05 to 0.95. Of the input vector nodes n1 to n12, the node values n1 to n7 are real numbers and the node values n8 to n12 are integers.

Let
min = minimum value in the set of all input vector nodes n1 to n7 present in the dataset,
max = maximum value in the set of all input vector nodes n1 to n7 present in the dataset.

Then the normalized value for nodes n1 to n12 is given by

Normalized value = ((ni − min) / (max − min)) × 0.9 + 0.05, for 1 ≤ i ≤ 7   (2)

= ((ni/5 + 1) / 2) × 0.9 + 0.05, for i = 8   (3)

= 0.05 if ni = 0; 0.95 if ni = 1; 0.50 if ni > 1, for i = 9, 10   (4)
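The node-wise normalization of Eqs. (2)–(6) can be sketched as follows (the function name is ours; `lo` and `hi` stand for the dataset-wide min and max defined above):

```python
def normalize_frv(frv, lo, hi):
    """Map a 12-node FRV into the range [0.05, 0.95], node by node."""
    out = []
    for i, n in enumerate(frv, start=1):
        if i <= 7:                                   # Eq. (2): real-valued nodes
            v = (n - lo) / (hi - lo) * 0.9 + 0.05
        elif i == 8:                                 # Eq. (3): V + E - 5F score
            v = (n / 5 + 1) / 2 * 0.9 + 0.05
        elif i in (9, 10):                           # Eq. (4)
            v = 0.05 if n == 0 else 0.95 if n == 1 else 0.50
        elif i == 11:                                # Eq. (5): counts of 6+ map to 0.95
            v = {1: 0.05, 2: 0.25, 3: 0.45, 4: 0.65, 5: 0.85}.get(n, 0.95)
        else:                                        # Eq. (6), i == 12
            v = 0.95 if n == 4 else 0.05
        out.append(v)
    return out
```

Applied to the sample through slot vector [0.5, 1.0, 1.5, 1.5, 1.5, 1.5, 0.833, 3, 0, 1, 3, 3] with the dataset min and max taken (hypothetically) as 0.5 and 1.5, node n1 maps to 0.05 and the integer nodes n8–n12 map to 0.77, 0.05, 0.95, 0.45 and 0.05.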
Fig. 8 Case study 2 part – GSSL

Table 5 List of features recognized – case study 2 part

Feature type  |  Number of instances
Rectangular pocket  |  10
N-side concave pocket  |  17
N-side concave blind step  |  8
Rectangular blind slot  |  7
Blind hole  |  12
Total no. of features recognized  |  54

= 0.05 if ni = 1; 0.25 if ni = 2; 0.45 if ni = 3; 0.65 if ni = 4; 0.85 if ni = 5; 0.95 if ni ≥ 6, for i = 11   (5)

= 0.05 if ni ≠ 4; 0.95 if ni = 4, for i = 12   (6)

Fixing Target Value The target values for the different features are shown in Fig. 2a–c. The target values [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7 and 0.8] were chosen such that they correspond to the feature families of pocket, passage, blind step, through step, blind slot, through slot, blind hole and through hole, respectively. Within a feature family, each sub-feature was given a target value incremented by 0.02. Since the target values are already in the range of 0 to 1, there is no need to normalize them further.

B) Training and Testing Methodology

Table 3 shows the various parameters considered for training and testing the network. The methodology used is given below:

– Select a network architecture. Some of the network architectures (single and two hidden layer networks) considered for training and testing are given in Table 3.
– Train the network. The network was trained by varying training parameters like the number of epochs and the learning rate.
– Test the network. Testing of the trained network was carried out in two phases. First, it was tested with known input vectors (the training set) to check the number of training set features not recognized. In the second phase, it was tested with unknown input vectors (the testing set) to check the number of testing set features not recognized.

The error value is computed as the absolute numerical difference between the expected target value of a feature and the simulated output from the ANN. The minimum separation between two target values is 0.02 (Fig. 2a–c), so a particular feature is recognized if its absolute error value is less than 0.01. If the absolute error value is greater than 0.01, the feature has been wrongly classified by the network into a different feature type and is said to be 'not recognized'.
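The recognition decision described above can be sketched as follows (the target table here is illustrative only; the actual per-feature targets and labels are those listed in Fig. 2a–c):

```python
# Illustrative targets: family bases are 0.1-0.8 and sub-features are
# offset by 0.02; the labels below are hypothetical examples.
TARGETS = {
    0.50: "rectangular blind slot",
    0.52: "N-side convex blind slot",
    0.70: "blind hole",
}

def classify(output, targets=TARGETS, tol=0.01):
    """Return (label, recognized): the nearest target's label, and True
    only if the absolute error to that target is below tol."""
    nearest = min(targets, key=lambda t: abs(output - t))
    return targets[nearest], abs(output - nearest) < tol
```

For instance, a simulated output of 0.704 is recognized as the 0.70 feature, while an output of 0.60 lies too far from every target and is reported as 'not recognized'.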
Going by this approach, the 'number of features not recognized' in the training or testing set is found. The training and testing results are presented in the next section.

Fig. 9 Case study 3 part

Table 6 List of features recognized – case study 3 part

Feature type  |  Number of instances
Rectangular pocket  |  5
Rectangular blind slot  |  4
Blind hole  |  4
Total no. of features recognized  |  13

C) Training and Testing Results

Different network architectures with single and two hidden layers and the network parameters of Table 3 were trained and tested for learning and generalization capability. It was found that the networks with a single hidden layer (12–16–1, 12–24–1, 12–30–1) were not able to recognize all the features, and increasing the number of hidden layer neurons from 24 to 30 did not improve their performance. Single hidden layer networks showed good learning capability with a learning rate of 0.6. The networks with two hidden layers (12–24–16–1, 12–24–24–1) were then trained and tested with a learning rate of 0.6 while varying the number of epochs. It was found that the network 12–24–24–1 became fully trained and recognized all the features in the training set. It had the tan sigmoid transfer function in the hidden layers and the log sigmoid transfer function in the output layer, and was trained with a learning rate of 0.6 for 180,000 epochs. The training time was approximately 30 min on a Pentium IV PC. The network also showed good generalization capability by recognizing all the features in the testing set (unseen patterns). Figure 6 shows its training curve.

A large number of numerical experiments were thus carried out to arrive at the optimum BPNN configuration, as under:

Network type: Feed-forward back-propagation
No. of layers: 3 (two hidden and one output layer)
Hidden layer 1: 24 neurons, TANSIG transfer function
Hidden layer 2: 24 neurons, TANSIG transfer function
Output layer: 1 neuron, LOGSIG transfer function
Training function: traingd (gradient descent)
Learning function: learngd (gradient descent)
Performance function: mse (mean square error)
Learning rate: 0.6
Number of epochs: 180,000
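The optimum configuration above can be sketched in plain NumPy as a minimal illustration of a 12–24–24–1 feed-forward network with tan sigmoid hidden layers, a log sigmoid output, and batch gradient descent on mean square error. The original work used the MATLAB™ Neural Network Toolbox (traingd/learngd), so the names, initialization, and learning rate below are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(layers=(12, 24, 24, 1)):
    """Small random weights and zero biases for each layer pair."""
    return [[rng.standard_normal((m, n)) * 0.1, np.zeros(n)]
            for m, n in zip(layers[:-1], layers[1:])]

def logsig(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(params, x):
    """Activations of every layer: tanh hidden layers, logsig output."""
    acts = [x]
    for k, (W, b) in enumerate(params):
        z = acts[-1] @ W + b
        acts.append(np.tanh(z) if k < len(params) - 1 else logsig(z))
    return acts

def gd_step(params, x, t, lr=0.05):
    """One batch gradient-descent step on mean square error."""
    acts = forward(params, x)
    y = acts[-1]
    delta = (y - t) * y * (1.0 - y)                    # logsig derivative
    for k in range(len(params) - 1, -1, -1):
        W, b = params[k]
        grad_W = acts[k].T @ delta
        grad_b = delta.sum(axis=0)
        if k > 0:
            delta = (delta @ W.T) * (1.0 - acts[k] ** 2)   # tanh derivative
        params[k][0] = W - lr * grad_W
        params[k][1] = b - lr * grad_b
    return float(np.mean((y - t) ** 2))
```

Training amounts to repeating `gd_step` over the normalized input-target pairs; the paper ran 180,000 epochs at a learning rate of 0.6 in MATLAB™, whereas this sketch uses a smaller rate suited to its toy initialization.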
5.3 Feature representation and application interface

For each feature classified by the ANN, the feature parameters, like the feature type, shape, dimensions (length, width, height, diameter), position, access directions, etc., and the geometry details, such as the bottom contour, sweep direction, extrusion height, etc., are computed using ACIS APIs/Direct Interfaces. These parameters are stored in an object-oriented programming (OOP) style to obtain a feature-based part model. The feature data is finally written into a text file with a native feature-based format similar to STEP. Figure 10a shows a typical feature-based model file format generated by the current system for a typical part shown in Fig. 9.

(a) Feature file from the ANN-based feature recognition system:

FeatureID: 0
FeatureType: Block
FeatureSubType: Rectangular
----
----
FeatureID: 1
FeatureType: Blind Hole
FeatureSubType: None
Position: 67.500 57.500 90.000
Dimension: Length Width Height
Value: 5.000 5.000 10.000
TAD: 0.000 0.000 1.000
GEOMETRY
CONTOUR
----
END CONTOUR
Sweep_Direction: 0.000 0.000 1.000
Extrusion_Height: 10.000
END GEOMETRY
----
---
FeatureID: 5
FeatureType: Blind Slot
FeatureSubType: Rectangular
Position: 35.000 0.000 90.000
Dimension: Length Width Height
Value: 30.000 20.000 10.000
TAD: 0.000 -1.000 0.000
TAD: 0.000 0.000 1.000
GEOMETRY
CONTOUR
----
END CONTOUR
Sweep_Direction: 0.000 0.000 1.000
Extrusion_Height: 10.000
END GEOMETRY
----
---
FeatureID: 13
FeatureType: Pocket
FeatureSubType: Rectangular
Position: 70.000 70.000 90.000
Dimension: Length Width Height
Value: 20.000 20.000 10.000
TAD: 0.000 0.000 1.000
GEOMETRY
CONTOUR
---
END CONTOUR
Sweep_Direction: 0.000 0.000 1.000
Extrusion_Height: 10.000
END GEOMETRY

(b) WebNC feature file (*.prt) generated by the translator:

PART
PART_ID 1
MATERIAL Aluminium
UNITS mm
FEATURE Gross
Type Extrusion
SubType Block
ID 0
Name Block
---
---
END FEATURE
FEATURE Local
Type Pocket
SubType Contour
ID 1
Name SimpleHole
Reference FEATURE_ID 0 FEATURE_FACE_ID 5
Location 70 60 100
Orientation X 1.000000 0.000000 0.000000 Y 0.000000 1.000000 0.000000 Z 0.000000 0.000000 1.000000
INFORMATION
FeatureID 1
FEATURE_TYPE:Hole;Simple
Dimension:Radius;Depth;Taper
Value:2.5;10;0
ACCESS_DIRECTIONS:0.000000,0.000000,1.000000
END INFORMATION
END FEATURE
-----
-----
ID 5
Name RectangularSlot
Reference FEATURE_ID 0 FEATURE_FACE_ID 5
---
INFORMATION
FeatureID 5
FEATURE_TYPE:Slot;Rectangular
Dimension:Height;Width;Depth
Value:20;30;10
ACCESS_DIRECTIONS:0.000000,0.000000,1.000000;0.000000,-1.000000,0.000000
END INFORMATION
END FEATURE
----
----
FEATURE Local
Type Pocket
SubType Contour
ID 13
Name RectangularPocket
Reference FEATURE_ID 0 FEATURE_FACE_ID 5
---
INFORMATION
FeatureID 13
Dimension:Length;Width;Depth;Cornerrad;Taper
Value:20;20;10;0;0
ACCESS_DIRECTIONS:0.000000,0.000000,1.000000
END INFORMATION
END FEATURE
END PART
Fig. 10 a FR Feature text file; b WebNC PRT file
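As a sketch of the translator's first stage (the parsing logic and function name are ours, not from the paper), the key-value records of the feature file in Fig. 10a can be read as:

```python
def parse_fr_features(text):
    """Collect 'Key: value' records delimited by FeatureID lines.
    Geometry sub-blocks are skipped in this sketch, and a repeated key
    (e.g. a second TAD line) simply overwrites the first."""
    features, current = [], None
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("FeatureID:"):
            current = {"FeatureID": int(line.split(":", 1)[1])}
            features.append(current)
        elif current is not None and ":" in line:
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
    return features

sample = """FeatureID: 1
FeatureType: Blind Hole
FeatureSubType: None
Position: 67.500 57.500 90.000
TAD: 0.000 0.000 1.000"""
```

Here `parse_fr_features(sample)[0]["FeatureType"]` evaluates to `'Blind Hole'`; a full translator would then map such records onto WebNC FEATURE/INFORMATION blocks as in Fig. 10b.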
This feature-based model file can be used for linking to various downstream applications, like CAPP, by developing suitable post-processors. This is explained further in case study 3, Sect. 6.

6 Results and discussions

The feature-recognition system has been implemented in VC++ using the ACIS™ kernel [32, 33] and the MATLAB™ Engine, and runs on a Pentium IV PC. The input to the system is the SAT files of the part and stock solid models. The part and stock SAT files are either taken from standard repositories like NIST [34] or are modeled in the SolidWorks™ CAD modeling software and exported to the SAT file format. The Neural Network Toolbox of MATLAB™ was used for training and testing the network. The trained network is seamlessly integrated with our system using the MATLAB™ engine APIs.

Various case study CAD models having complex machining features with fixed and variable topology were used to test the accuracy and performance of the developed ANN-based FR system. The dataset comprised both known (seen) and unknown (unseen) feature patterns. The dataset was progressively enhanced by adding the unrecognized feature patterns to the training set and selected recognized feature patterns to the testing set. The enhanced dataset comprised 95 training pairs and 20 testing pairs. With the enhanced training set, the network 12–24–24–1 was trained again with the same training parameters. It became fully trained, recognizing all the training features, and was also able to recognize unseen features. Typical case studies are reported here.

Case study 1 Figure 7 shows a part taken from the NIST repository [34], which has been modified to filter out the interacting features and the fillets. Fillet deletion is usually performed as the first step in design feature recognition, since it simplifies the model for the recognition of volumetric features [35]. The left pane of the system (Fig. 7) shows the tree of recognized features. The part had 67 non-interacting machining features, shown in Table 4. All the features were correctly recognized by the ANN-based FR system in approximately 5 s.

Case study 2 Figure 8 shows a part taken from the GSSL Manufacturing View demo version software [36]. Our FR system correctly recognized all the 54 non-interacting machining features on the part (Table 5) in about 4 s.

Case study 3 This case study primarily focused on the post-processing of the extracted feature data (feature file) by our system and its integration with WebNC, a Web-based CAPP software for feature-based CNC programming indigenously developed at the CAM Lab, IIT Bombay (http://webnc.cam.iitb.ac.in). WebNC uses a feature-based design philosophy to model a part and saves the part in its native *.PRT feature file format. The machining features supported by WebNC are quite limited compared to the feature taxonomy recognized in this work (Fig. 2a–c).

In the current work, an application interface was developed to demonstrate the feasibility of integrating the developed feature-recognition system to drive an end application (WebNC).

Fig. 11 Process-planning sheet generated by WebNC
A data translator was written to convert the feature model file generated by the feature-recognition system to the WebNC feature file format (*.PRT). Figure 9 shows a typical part modeled in SolidWorks™ and saved in the ACIS file format (*.SAT). It has 13 non-interacting machining features, shown in Table 6. All were correctly recognized by our ANN-based FR system (Fig. 9). Figure 10 shows the feature file generated by the FR system and the translated (*.PRT) file for WebNC. Figure 11 shows the process-planning sheet generated by WebNC, and Fig. 12 shows the simulation of the automatically generated CNC code by WebNC in a commercial simulator, CNC Train™ from CNC Simulation Systems Ltd., UK.

Fig. 12 Simulation of CNC code generated by WebNC in CNCTrain™

7 Conclusions

This work proposed and implemented an intelligent system for recognizing prismatic part machining features from B-Rep CAD models using an artificial neural network. The 12-node feature representation vector (FRV) scheme designed in this work was able to uniquely represent a large variety of feature families (such as pockets, passages, blind slots, through slots, blind steps, and through steps) having variations in topology and geometry in an unambiguous manner, and to further do shape-based feature classification. In comparison to the earlier ANN-based FR systems for recognizing different feature types, the 12-node FRV scheme developed in this work enabled a single BPNN to be sufficient for the task of intelligent feature recognition.

Extensive training and testing of various network architectures was carried out by varying the number of hidden layers, the number of neurons per hidden layer, the transfer function per layer, and the training parameters. The two hidden layer network architecture 12–24–24–1, having the tan sigmoid transfer function for the hidden layers and the log sigmoid transfer function for the output layer, was found to be the optimum network configuration. It showed very good generalization capability by recognizing all the seen features and many unseen feature patterns. The system was tested on a number of standard benchmark components and was found to be robust, consistent, and computationally fast.

The features recognized by the developed FR system were post-processed to create a feature-based model in a native format. This feature-based model file was translated and linked to a feature-based CAPP system for CNC machining, successfully demonstrating seamless integration from a CAD model (ACIS format) to feature-based automatic CNC code generation.

The FR system has to be retrained and retested if any new feature template is added to the training set. Future work will include extending the feature representation vector scheme to enhance the capabilities of the FR system for recognizing interacting features with a wider variety of topology and geometry variations.

References

1. Shah JJ, Mantyla M (1995) Parametric and feature-based CAD/CAM. Wiley, New York
2. Fu MW, Ong SK, Lu WF, Lee IBH, Nee AYC (2003) An approach to identify design and manufacturing features from a data exchanged part model. Computer-Aided Design 35(11):979–993. DOI 10.1016/S0010-4485(02)00160-4
3. Shyamsundar N, Gadh R (2001) Internet-based collaborative product design with assembly features and virtual design spaces. Computer-Aided Design 33:637–651. DOI 10.1016/S0010-4485(01)00069-0
4. Wu D, Sharma R (2005) A framework for fast 3D solid model exchange in integrated design environment. Comput Ind 56(3):289–304. DOI 10.1016/j.compind.2004.11.003
5. Henderson M, Anderson D (1984) Computer recognition and extraction of form features: a CAD/CAM link. Comput Ind 5(5):329–339. DOI 10.1016/0166-3615(84)90056-3
6. Prabhu BS, Pande SS (1999) Automatic extraction of manufacturable features from CADD models using syntactic pattern recognition techniques. Int J Prod Res 37(6):1259–1281. DOI 10.1080/002075499191247
7. Venkataraman S, Sohoni M, Kulkarni VS (2001) A graph-based framework for feature recognition. Proceedings of the Sixth ACM Symposium on Solid Modeling and Applications, Ann Arbor, Michigan, United States, pp 194–205
8. Woo Y (2003) Fast cell-based decomposition and applications to solid modeling. Computer-Aided Design 35:969–977. DOI 10.1016/S0010-4485(02)00144-6
9. Vandenbrande J, Requicha A (1993) Spatial reasoning for the automatic recognition of machinable features in solid models. IEEE Trans Pattern Anal Mach Intell 15(12):1269–1285. DOI 10.1109/34.250845
10. Gao S, Shah JJ (1998) Automatic recognition of interacting machining features based on minimal condition subgraph. Computer-Aided Design 30(9):727–739. DOI 10.1016/S0010-4485(98)00033-5
11. Shah JJ, Anderson D, Kim YS, Joshi S (2001) A discourse on geometric feature recognition from CAD models. J Comput Inf Sci Eng 1(1):41–51. DOI 10.1115/1.1345522
12. Babic B, Nesic N, Miljkovic Z (2007) A review of automated feature recognition with rule-based pattern recognition. Comput Ind, article in press, available online 25 Oct 2007
13. Gadh R, Prinz FB (1992) Recognition of geometric forms using the differential depth filter. Computer-Aided Design 24(11):583–598. DOI 10.1016/0010-4485(92)90070-Q
14. Prabhakar S, Henderson MR (1992) Automatic form-feature recognition using neural-network-based techniques on boundary representations of solid models. Computer-Aided Design 24(7):381–393. DOI 10.1016/0010-4485(92)90064-H
15. Hwang JL, Henderson MR (1992) Applying the perceptron to 3D feature recognition. J Des Manuf 2(4):187–198
16. Ozturk N, Ozturk F (2001) Neural network-based non-standard feature recognition to integrate CAD and CAM. Comput Ind 45:123–135. DOI 10.1016/S0166-3615(01)00090-2
17. Gu Z, Zhang YF, Nee AYC (1995) Generic form feature recognition and operation selection using connectionist modelling. J Intell Manuf 6:263–273. DOI 10.1007/BF00128649
18. Lankalapalli K, Chaterjee S, Chang TC (1997) Feature recognition using ART2: a self-organizing neural network. J Intell Manuf 8(3):203–214. DOI 10.1023/A:1018521207901
19. Onwubolu GC (1999) Manufacturing features recognition using backpropagation neural networks. J Intell Manuf 10(3–4):289–299. DOI 10.1023/A:1008904109029
20. Nezis K, Vosniakos G (1997) Recognizing 2-1/2D shape features using a neural network and heuristics. Computer-Aided Design 29(7):523–539. DOI 10.1016/S0010-4485(97)00003-1
21. Zulkifli AH, Meeran S (1999) Feature patterns in recognizing non-interacting and interacting primitive, circular and slanting features using a neural network. Int J Prod Res 37(13):3063–3100. DOI 10.1080/002075499190428
22. Li WD, Ong SK, Nee AYC (2003) A hybrid method for recognizing interacting machining features. Int J Prod Res 41(9):1887–1908. DOI 10.1080/0020754031000123868
23. Ding L, Yue Y (2004) Novel ANN-based feature recognition incorporating design by features. Comput Ind 55(2):197–222. DOI 10.1016/j.compind.2004.02.002
24. Chakraborty S, Basu A (2006) Retrieval of machining information from feature patterns using artificial neural networks. Int J Adv Manuf Technol 27(7–8):781–787. DOI 10.1007/s00170-004-2254-9
25. Lam SM, Wong TN (2000) Recognition of machining features – a hybrid approach. Int J Prod Res 38(17):4301–4316. DOI 10.1080/00207540050205109
26. Smith AE, Dagli CH (1994) Manufacturing feature identification for intelligent design. In: Dagli CH, Kusiak A (eds) Intelligent systems in design and manufacturing. ASME Press, New York, pp 213–230
27. Chen YH, Lee HM (1998) A neural network system for two-dimensional feature recognition. Int J Comput Integr Manuf 11(2):111–117. DOI 10.1080/095119298130859
28. Jun Y, Raja V, Park S (2001) Geometric feature recognition for reverse engineering using neural networks. Int J Adv Manuf Technol 17:462–470. DOI 10.1007/s001700170164
29. International Standards Organization (2000) ISO 10303 – Part 224, Mechanical product definition for process planning using machining features. ISO, Geneva, Switzerland
30. Nalluri RSRP (1994) Form feature generating model for feature technology. PhD Thesis, Department of Mechanical Engineering, Indian Institute of Science, Bangalore, India
31. Haykin S (2005) Neural networks: a comprehensive foundation. Pearson Education Inc, Singapore
32. Corney J, Lim T (2001) 3D modeling with ACIS. Saxe-Coburg Publications, UK
33. Spatial Technology Inc. (2004) ACIS 3D Geometric Modeler, Version 15.0
34. National Design Repository, Drexel University, http://edge.cs.drexel.edu/repository/-frameset.html
35. Venkataraman S, Sohoni M (2002) Removal of blends from boundary representation models. Proceedings of the Seventh ACM Symposium on Solid Modeling and Applications, Saarbrücken, Germany, pp 83–94
36. Geometric Software Solutions Co. Ltd (2006) Manufacturing View Library, demo version, http://feature.geometricsoftware.com/Manu_view_library.asp
