SMiRT 13 Post Conference Seminar No. 13

N. F. de Almeida Neto, ELETROPAULO, Sao Paulo, Brazil
J. M. A. Anson, ELETROPAULO, Sao Paulo, Brazil
P. Bonissone, GE Schenectady, USA
S. Fukuda, Tokyo Metropolitan Inst. of Technology, Japan
Co-organizers: S. Gehl, EPRI Palo Alto, USA
A. S. Jovanovic, MPA Stuttgart, Germany
A. C. Lucia, EC JRC Ispra, Italy
S. Yoshimura, RACE, Univ. of Tokyo, Japan


Editors: A.S. Jovanovic, A.C. Lucia

São Paulo, Brazil, August 21-23, 1995


1997 EUR 17669 EN


13th International Conference SMiRT
Post Conference Seminar No. 13
São Paulo, Brazil, August 21-23, 1995



Neither the European Commission nor any person
acting on behalf of the Commission is responsible for the use which
might be made of the following information

EUR 17669 EN
© ECSC-EC-EAEC Brussels · Luxembourg, 1997
Printed in Italy

Co-Organizers of the Seminar:
Dr. A. S. Jovanovic (Chairman)
MPA Stuttgart, University of Stuttgart
Pfaffenwaldring 32
70569 Stuttgart, Germany
Tel.: +49-711-685 3007
Fax: +49-711-685 3053

Dr. A. C. Lucia
JRC - Joint Research Centre Ispra
21020 Ispra (Va), Italy
Tel.: +39-332-789 155
Fax: +39-332-789 156

Dr. S. Gehl
EPRI - Electric Power Research Inst.
3412 Hillview Ave., P.O. Box 10412
Palo Alto, CA 94303, U.S.A.
Tel.: +1-415-855 2770
Fax: +1-415-855 8759

Dr. P. Bonissone
General Electric Company
K1-SC32A, P.O. Box 8, Schenectady
NY 12301, U.S.A.
Tel.: +1-518-387 5155
Fax: +1-518-387 6845

Prof. S. Fukuda
Tokyo Metropolitan Inst. of Technology
6-6 Asahigaoka, Hino,
Tokyo 191, Japan
Tel.: +81-425-83 5111 ext. 3605
Fax: +81-425-83 5119

Prof. S. Yoshimura
RACE, University of Tokyo
7-3-1 Hongo, Bunkyo
Tokyo 113, Japan
Tel.: +81-3-3812 2111 ext. 6960
Fax +81-3-5800 6876

Local organization:
Mr. Nicolau F. de Almeida Neto
Mr. José M. A. Anson
ELETROPAULO Eletricidade de São Paulo S.A.
Dep. Usina Termoeléctrica Piratininga
Av. Nossa Senhora do Sabará 5312
04447-011 São Paulo SP, Brazil
Tel.: +55-11-563 0682
Fax: +55-11-563 0052

Management Office:
Mr. Karl Lieven
MIT GmbH, Promenade 9
52076 Aachen, Germany
Tel.: +49-2408-945812
Fax: +49-2408-94582
Commerzbank AG, Aachen,
Postfach 270, Theaterstr. 21-23,
52062 Aachen, Germany
Konto Nr. 12 12 588 01 - Kennung SMIRT, BLZ 390 400 13

Supporting organizations:
CEC - JRC, Ispra, Italy
EPRI, Palo Alto, USA
MPA, Stuttgart, Germany


In 1990 the organizers of SMiRT-11 in Tokyo (SMiRT - Structural Mechanics in Reactor Technology) proposed to organize a new post conference seminar on EXPERT SYSTEMS AND AI. The proposal reacted to the obviously growing interest in the application of all kinds of "knowledge-based" software tools in the areas relevant for SMiRT and for power plant and structural engineering in general. Starting from the positive experience of this seminar, the following one was organized in the framework of the SMiRT-12 Conference in Constance, Germany, in August 1993. The proceedings presented here belong to the third seminar of the kind, organized in 1995 in São Paulo, Brazil, within SMiRT-13.
When compared to the first seminar, the number of papers and participants significantly increased, as well as the generic interest in the overall issue. The level (the number and quality of papers) achieved at the second seminar (in Constance) has been approximately maintained. The trend obvious in Constance is present also in São Paulo: most of the papers are nowadays linked to practical problems, not just "general" or "in principle" solutions. On the other hand, there is an obvious trend to encompass the areas outside the main domains of SMiRT, i.e. a trend to tackle not only the problems relevant solely to nuclear power plants, but also those from e.g. fossil-fired power plants and/or process plants. This has been reflected also in the title of the seminar, which has been slightly changed for the third seminar, being now APPLICATIONS OF INTELLIGENT SOFTWARE SYSTEMS.
The change of title mentioned above reflects also the shift from the "conventional" (rule-based) expert systems and/or knowledge-based systems (KBSs) to the systems developed nowadays, which all tend to be more or less integrated with other tools and are therefore probably better described by the term Intelligent Software Systems.
The seminar and the proceedings have been structured in a "progressive" way: both start with tutorial-like lectures giving an introduction and describing the state of the art in two important emerging enabling technologies: industrial-scale fuzzy systems and data mining ("extracting knowledge from data"). They continue by presenting contributions giving an idea and/or illustration of "what is going on" in the area of intelligent software systems, covering Western and Eastern Europe, South America, the USA and Japan. This review is followed by a series of papers presenting single systems and/or projects and their results. In the final part of the seminar and of the proceedings, the end-users have been asked to express their opinion on the usability and usefulness of KBSs, as well as on the problems they have been facing.
This 1995 Post Conference SMiRT Seminar and its co-organizers have received support and help from various institutions, companies and persons. On behalf of all the co-organizers, the editors want to gratefully acknowledge this support here, especially the help provided by the host of this seminar, the electric utility company ELETROPAULO from São Paulo, Brazil. Our special thanks go also to all the contributing authors: without their research, and their willingness to prepare and present these papers, the seminar would never have taken place. The same thanks go to the members of the end-users' panel: it is their precious advice and opinion that must guide the work of researchers in the area of intelligent software systems. Special thanks go to Dr. Poloni, for his precious help in the preparation of the overall seminar, and to Mr. José Anson for his marvelous mastering of the local organization of the seminar.

The editors
Stuttgart, Ispra, May 1996


Table of Contents

Chapter 1: Enabling and emerging technologies: knowledge-based systems, object-oriented programming, object-oriented databases, neural networks 1
M. Poloni: Extraction of knowledge from data: practical aspects in mechanical engineering 3

Chapter 2: Review of some major SMiRT- and KBS-related research programs 29
S. Yoshimura, G. Yagawa: Intelligent approaches for automated design of practical structures 31
S. Fukuda: Intelligent NDI data base for pressure vessels 45
… Townsend: Advances in Damage Assessment and Life Management of Elevated Temperature Plant - an ERA Perspective (extended abstract) 63
… Upadhyaya, Wu Yan, P. Lee, … Henry: An Automated Diagnostic Expert System for Eddy Current Analysis Using Applied Artificial Intelligence Techniques 67
G. Lambert-Torres, … Alves da Silva: Applications of intelligent systems to substation protection and control 81
… Jovanovic, … Behravesh, … Ribeiro, … Iraci: The critical role of materials data storage and evaluation systems within intelligent monitoring and diagnostic of power plants 93
… Gruden: Technology awareness dissemination in Eastern Europe with intelligent computer systems for remaining power plant life assessment - EU project TINCA 109

Chapter 3: Special course on Intelligent Software Systems and Remaining Life Management 117
… Brear, … Townsend: Modern remanent life assessment methods: degradation, crack growth … 119

… Jovanovic, … Poloni, … Psomas: Intelligent software systems for inspection planning - The SP249 project 155
P. Auerkari: Theoretical and practical basis of advanced inspection planning involving both engineering and non-engineering factors 173
… Jovanovic, … Friemann: Intelligent software systems for remaining life assessment - The BE5935 project 183
… Cane, … Townsend: The PLUS system for optimized O&M of power and process plant 203

Chapter 4: Specific applications 227
… Rivelli: Technical and economical feasibility study of the electron beam process for SO2 and NOx removal from combustion flue gases in Brazil 229
… Pagano: Implementation of an integrated environment for adaptive neural control 245
… Weber: Advanced analysis of material properties using DataEngine 259
… Miyagi: Fuzzy logic application for group technology 271

Chapter 5: End-Users' Acceptance of Intelligent Software Systems 291
… Jorge: Artificial Neural Networks Applied to Protection of Power Plant 293
… Kautz: Consequences of current failures for quality assurance 301
… Nievola: Improvement on welding technology by an expert system application 309
… Narikawa: Integrated and Intelligent CAE Systems for Nuclear Power Plant 319
… Kautz: SP249 End-Users response/acceptance of KBS's: What is required, what is available, what has to be done 339

Alphabetical list of authors 355
(Only one author per paper, for contacting purposes)



Extraction of Knowledge from Data: Practical Aspects in Mechanical Engineering

M. Poloni
MPA Stuttgart
Pfaffenwaldring 32, 70569 Stuttgart, Germany
Fax: +49 711 685 3053
e-mail: poloni@mpa.uni-stuttgart.de

Abstract

A good engineering approach should be capable of making use of all the available information effectively. For many practical problems, an important portion of information comes from human experts. Usually, the expert information is not precise and is represented by fuzzy terms. In addition to the expert information, another important portion of information is numerical information, which is collected from various sensors and instruments or obtained according to physical models. In power and process plants, up to now, mainly case studies relative to particular situations (failures, anomalies) were available, usually in paper form. In case of failures, detailed case studies can be created, documenting the state of the plant before and after the situation of interest. Increasingly these industries are automating the activity of data logging, creating huge databases of plant operation data. Hence the need of Data Mining techniques, that allow knowledge extraction from these huge collections of data, which is currently a very hot topic for the power and process industry. The idea illustrated in this tutorial is an integrated approach, a sort of "Computer Assisted Data Mining", that makes use of methods typical both of the "classic" KBSs and of numerical processing to maximise system performances. The use of pattern recognition techniques (clustering/classification) allows an "intelligent extraction" of knowledge from the database. An "Advisor", a KBS-based module, can, at this point, guide the user towards the use of the most suitable learning/reasoning method (Machine Learning, Neural Networks, Case Based Reasoning) to reach the Diagnosis/Prediction desired. Currently at MPA Stuttgart such an architecture is under development; different projects are contributing to build up the general system.

1. Introduction

In recent years, expert systems technology has been extensively developed [1, 2, 3] to be applicable to a number of support systems addressing tasks such as integrity analysis and residual life assessment of critical components. These experiences have demonstrated that the development of an expert system can become rather complex, especially when the exploited knowledge consists mainly of a large collection of cases, which formally represent similar problems connected with their known solutions. In similar situations, several applications recently delivered in different domains (diagnosis, engineering design, risk analysis, manufacturing quality control) have been based on an alternative approach to expert systems, namely case-based reasoning (CBR) [6]. The main feature of this approach is the capability of solving problems through a direct comparison with available cases, where each case relates to a specific problem and its own identified solution. Advantages with respect to a conventional expert system are: the knowledge acquisition process is greatly simplified (the knowledge base is structured as an encoded set of already solved problems); the system is more robust, because it succeeds in adapting an available case to propose a solution for a given problem; in addition, the explanation supporting the proposed solution is quite expressive, as it contains references to analogous cases.

A good engineering approach should be capable of making use of all the available information effectively. Usually, a portion of information comes from human experts; the expert information is not precise and is represented by fuzzy terms. In addition to the expert information, another important portion of information is numerical information, which is collected from various sensors and instruments or obtained according to physical models. At the same time, some approaches coming from new fields like fuzzy logic (FL) [21], neural networks (NN) [7] and machine learning (ML) [8] have shown their effectiveness in the management of uncertainty and the possibility of interpolation of system behaviour from sample data for complex problems. This has been demonstrated by a number of applications, both at academic and industrial level (for more details see e.g. [5, 9, 10]).

In power and process plants, up to now, mainly case studies relative to particular situations (e.g. failures) were available, and usually in paper form. In case of failures, detailed case studies can be created, documenting the state of the plant before and after the situation of interest. Increasingly these industries are automating this activity of data logging, creating huge databases of plant operation data. As a consequence of the availability of large databases, extraction or learning of "models" or "relations" from archive data is a very important topic in engineering. Knowledge extraction from databases is performed by means of the so-called "Data Mining" [13]: with this name is characterised the use of machine learning techniques when the environment is described through a database. For these purposes a number of different techniques are used, among them: neural networks (NN), case-based reasoning (CBR), fuzzy and statistical data analysis (DA), and data mining systems like ID3, AQ15 or CN2. Each of these techniques has its own benefits and drawbacks; the use of the right technique depends on a number of factors, namely the type of analysis to be performed, the type of data available, and the use that will be made of the results. Normally an expert in the field of data mining can suggest an effective solution only together with a domain expert.

Moreover, one of the main obstacles in applying data mining to databases is the size of the database. In fact, the size of the database has consequences for the cost of validation of the induced models and for the size of the search space. With the growing dimensions of the current databases (orders of magnitude of several megabytes are normal practice), serious problems could be encountered. A solution for the first of these problems is the application of database optimisation techniques: instead of using the entire database, only a subset, provided by the user, will be used for the initial search phase; during the search process this set will be incrementally extended with data from the database, using (e.g.) incremental browsing optimisation techniques. Solutions for the second of the above cited problems, that is, the size of the search space, should include the use of effective search strategies and heuristics, understandable representations for the knowledge, and the use and integration of domain knowledge. A very valuable source of heuristic information is the end-user, normally an expert in the application domain; an important part of the work has then to be focused on designing user interaction during the search process, since it should help the field expert to exploit his/her domain knowledge.

These drawbacks bring the need of an intelligent approach. Such an approach should not only provide the way to compare and validate the results coming from the techniques available, but also give advice on how to appropriately use these techniques. The idea illustrated in this paper and outlined in Figure 1 is a sort of "Computer Assisted Data Mining", that makes use of methods typical both for "classic" KBSs and for numerical processing, to maximise the performances of the data mining system. An "Advisor", a KBS-based module, can, at this point, guide the user towards the use of the most suitable learning/reasoning method (Machine Learning, Neural Networks, Case Based Reasoning) to reach the Diagnosis/Prediction desired. The use of pattern recognition techniques (clustering/classification) allows an "intelligent extraction" of knowledge from the database (NN/fuzzy) and may allow the discovery of relationships that would remain otherwise hidden. In this way the analysis can profit from all the advantages of using a data mining system without being affected by the cited drawbacks. Such a system, when operational, is also likely to train current experts and engineering personnel to identify and manage failure cases well beyond their usual range of experience and/or training. Currently at MPA Stuttgart such an architecture is under development; different projects are contributing to build up the structure shown in Figure 1 [18, 4].

Figure 1: Integrated system structure
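The "Advisor" described in the text can be illustrated with a minimal sketch. This is not the MPA Stuttgart implementation; the rules, task names and parameters below are invented purely to show the KBS-style mapping from task properties to a suggested learning/reasoning method.

```python
# Hypothetical sketch of the "Advisor" idea: a few KBS-style rules that map
# properties of the analysis task to a suggested learning/reasoning method.
# Rule contents are invented for illustration, not taken from the paper.

def advise(task, labelled_data, solved_cases_available):
    """Suggest a learning/reasoning method for a data mining task."""
    if solved_cases_available and task == "diagnosis":
        # CBR reuses an encoded set of already solved problems
        return "case-based reasoning"
    if labelled_data and task in ("diagnosis", "prediction"):
        # supervised interpolation of system behaviour from sample data
        return "neural network"
    if not labelled_data:
        # no labels available: search for structure in the data
        return "clustering"
    return "machine learning (rule induction)"

print(advise("prediction", labelled_data=True, solved_cases_available=False))
```

A real advisor would of course also weigh factors named in the text, such as the size of the database and of the search space.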

In the following, a general introduction to some basic concepts of Data Mining is given together with a number of practical examples. The current state of development of different systems at MPA Stuttgart is then summarised. The object field is mainly the operation support in power and process plants; such support is in terms of (e.g.) metallic materials properties, damage assessment and inspection scheduling.

2. Pattern recognition in data mining

Pattern recognition can be described in general as a subdiscipline of data analysis. Some definitions from the literature are: "a field concerned with machine recognition of meaningful regularities in noisy or complex environments" or "the search for structure in data". The fundamental steps to develop a pattern recognition system can be described as:

1. Data are collected from humans and sensors.
2. Humans nominate either features or pairwise relationships that hopefully capture basic relationships between the apparently important variables of a process.
3. A search for underlying structure in the data is conducted. Its outcome may provide a basis for hypothesizing relationships between variables governing and governed by the process, providing clues and insights into the model and the process it represents.
4. Hypotheses are formalised by characterising the process with equations, rules, or perhaps algorithms; a model of the system is proposed. The system classifies, predicts, estimates and/or controls the process and its subprocesses. The model is "trained" with labelled training data (the model is parametrised by providing it with examples of correct instances). The model is tested and compared with other models of the same process for things such as relative sensitivity to perturbations of its inputs and parameters, error rate performances, and theoretical aspects of the model: continuity and stability, linearity, storage and speed.
5. A system that implements the model is built, tested and placed in service.

It is possible to summarise the concepts that will be exposed in the following way:
• Process description: a (in our case numerical) description of the system or system behaviour in terms of data vectors, relational data, object data
• Feature analysis: the search for structure in data items or observations xk ∈ X
• Cluster analysis: the search for structure in data sets X ⊂ S (data space)
• Classification: the search for structure in data spaces or populations S

The purpose of cluster analysis is to place objects into groups or clusters suggested by the data, not defined a priori, such that objects in a given cluster tend to be similar to each other in some sense, and objects in different clusters tend to be dissimilar. Cluster analysis can also be used for summarising data rather than for finding "natural" or "real" clusters; this use of clustering is sometimes called dissection. This type of data analysis is particularly indicated when unknown relations in data sets are present and there is the need to reveal the possible structures behind the data set. Any generalisation about cluster analysis must be vague, because a vast number of clustering methods have been developed in several different fields, with different definitions of clusters and of similarity among objects. The possibility to use the model obtained to characterise new data allows for its use as an assessment tool also in presence of uncertainties.

2.1 Process description

The first choice faced by the Pattern Recognition System (PRS) designer concerns the way the process of interest will be represented for study. Generally speaking, the usual situation is that the process is governed by individual objects, by their relationships with each other, or both. The most familiar choice is the representation of objects within the process by a set of numerical data [21]: each object observed in the process (some kind of physical entity) has a vector of numerical features as representation. Two data structures are used in numerical PRSs: object data vectors (features, pattern vectors) and pairwise relational data (similarities, proximities). It may happen that, instead of an object data set X as described above, we have access to a set of numerical relationships between pairs of objects. Relational data are to be found in many applications and systems, perhaps hiding in different semantic guises. Object data for pattern recognition can be labeled, in which case the identity of each vector as belonging to one of several classes is known, or they can be unlabeled, so that we have only the vectors themselves and, perhaps, some idea about the classes of objects they represent.

A graphic representation of the different steps and their interconnections is reported in Figure 2.

Figure 2: General structure of a pattern recognition system (adapted from [21]): process description (feature nomination by humans and sensors; X = numerical object data, R = pair-relation data), feature analysis (pre-processing, extraction, 2-D display), classifier design (identification, classification, estimation, prediction, assessment, control) and cluster analysis (exploration, validity, 2-D display)
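The two data structures just named can be shown in a short, self-contained sketch. The feature values are invented; the relational data are derived from the object data as a matrix of pairwise Euclidean distances.

```python
# Illustration (with invented values) of the two data structures used in
# numerical PRSs: object data vectors and pairwise relational data.
import math

# Object data: each observed object is a vector of numerical features.
X = [
    [1.0, 2.0],   # object 1
    [1.1, 1.9],   # object 2
    [5.0, 6.0],   # object 3
]

def euclidean(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# Relational data derived from the object data: R[j][k] = distance between
# object j and object k (a proximity matrix).
R = [[euclidean(a, b) for b in X] for a in X]

print(round(R[0][1], 3))  # small value: objects 1 and 2 are similar
print(round(R[0][2], 3))  # large value: objects 1 and 3 are dissimilar
```

In practice relational data need not be derived from feature vectors at all; they may be given directly, e.g. as expert-assigned similarities between cases.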

2.2 Feature analysis

Feature analysis refers to a collection of methods that are used to explore and improve raw data, that is, the data that are nominated and collected during process description. Experimental data carry information either about the process generating them or the phenomena they represent; we search for a manner (structure) in which this information can be organised, so that relationships between the variables in the process can be identified. Every object has an associated feature vector (the vector of its co-ordinates in the feature space); usually the data items are represented not by their entire set of characteristics (variables), but only by a number of them (forming the feature space).

Pre-processing includes operations such as scaling, normalisation, smoothing and various other "clean-up" techniques. The utility of data for more complex downstream processing tasks, such as clustering and classifier design, is clearly affected by pre-processing operations, so this step in the design of a PRS is always important and should be given careful attention.

Feature selection, 2-D display and extraction techniques for object data can be cast in a single framework: any function fE: R^p -> R^q, where p > q, is a feature extractor when applied to the set X ⊂ R^p of feature vectors. The new features are the image of X under fE, say Y = fE[X]. The basic idea is that the feature space can be compressed by eliminating, via selection or transformation, redundant (dependent) and not important (for the problem at hand) features. Feature selection is done by taking fE to be a projection onto some coordinate subspace of R^p, that is, choosing a subset of the original measured features. Extraction techniques can be divided into analytic (closed form for fE) versus algorithmic, and linear versus non-linear. If p >> q, the time and space complexity of algorithms that use the transformed data are obviously reduced in the process.

When q = 2, the transformed data set Y can be displayed as a scatter diagram for visual inspection; the visual representation of p-dimensional data in a viewing plane is a way to explore structures in, and get ideas about, the measured data. Methods for 2-D display fall into two general categories: scatterplots and pictorial displays. All of the methods itemised above can be used to produce 2-D scatterplots by taking q = 2. The other class of display techniques uses analytic or algorithmic transformations of the data that result in Y being some sort of pictorial representation of X; included in this category are, for example, Kohonen's Feature Maps [21].

2.3 Clustering

The clustering problem for the data set X is the identification of an "optimal" partition of X, one that groups together objects or object data vectors which share some well defined (mathematical) similarity. It is hoped, and implicitly believed, of course, that an optimal mathematical grouping is in some sense an accurate portrayal of natural groupings in the physical process from whence the object data are derived. With but few exceptions this problem area assumes the data to be object data. Clustering methods are used in pattern recognition to determine structures in data sets: the data space is divided in subspaces, where different subsets of the data set are more likely to belong. Objective function based clustering methods are a particular class of clustering methods in which a criterion function is iteratively minimised until a global or local minimum is reached. The number of clusters c is assumed to be known; otherwise, its value becomes part of the clustering problem.
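Objective-function clustering in its crisp form can be sketched in a few lines. The data and initial prototypes below are invented, and the loop is the classical hard c-means (k-means) iteration, given here only to make the "assign, then update, until the criterion settles" idea concrete.

```python
# A small, invented illustration of crisp objective-function clustering:
# the classical hard c-means loop, alternating assignment of each object
# to its nearest prototype with a prototype (mean) update.
import math

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def hard_c_means(X, V, iterations=10):
    """Iteratively reduce the within-cluster sum of squared distances."""
    for _ in range(iterations):
        clusters = [[] for _ in V]
        for x in X:  # assignment step: each object belongs to one cluster only
            nearest = min(range(len(V)), key=lambda j: dist2(x, V[j]))
            clusters[nearest].append(x)
        for j, members in enumerate(clusters):  # update step: new prototypes
            if members:
                V[j] = [sum(col) / len(members) for col in zip(*members)]
    return V

X = [[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]]
V = hard_c_means(X, V=[[0.0, 0.0], [1.0, 1.0]])
print(V)  # two prototypes, one per group of similar objects
```

With c unknown, as noted above, the number of prototypes itself becomes part of the clustering problem.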

Clustering methods can be either hard (crisp) or fuzzy (based on fuzzy set theory), depending on whether each feature vector characterising an object belongs exclusively to one cluster or to all clusters to different degrees. Classical (crisp) clustering algorithms generate partitions such that each object is assigned to exactly one cluster. Often, however, objects cannot adequately be assigned to strictly one cluster (because they are located between clusters); in these cases fuzzy clustering methods provide a much more adequate tool for representing real-data structures.

Our task is then to divide n objects x ∈ X, characterised by p indicators (variables of different types), into c, 2 ≤ c ≤ n, categorically homogeneous subsets called clusters. The objects belonging to any one of the clusters should be similar and the objects of different clusters as dissimilar as possible. The number of clusters c, however, is normally not known in advance. A crucial point for successful analysis is the selection of the right set of features (variables); this choice should be representative for the physical process that generated the data, to enable us to construct realistic clusters. Before applying any clustering procedure it is very important to select the mathematical properties of the data set (for example distance, connectivity, intensity) and the way they should be used in order to identify clusters. Unfortunately these questions have to be answered for each different data set, since there are no universally optimal cluster criteria; the cluster criterion adopted can heavily influence the results, leading to wrong interpretations when care is not taken in its choice.

2.4 Classification

Let us assume that the important problem of feature extraction has been solved. A classifier is a device, or algorithm, by which the data space is partitioned into c decision regions, which in turn are often a basis for decision theory. Classification attempts to discover associations between subclasses of a population; in many cases the activity of classification is complementary to that of clustering. Once the clustering of the data set has been performed and the clusters detected, it is possible to use these clusters to classify new incoming data items. On the basis of the criterion used to cluster the data, the similarity of the new item is evaluated, providing an indication to which region (cluster) of system behaviour it belongs.

2.5 Comparison with conventional analysis

To build a bridge between the usual data analysis techniques used in material properties characterisation (namely regression analysis) and pattern recognition ones, a simplified comparison is given in Figure 3. Using regression analysis, generally a global model over the data set interval of definition is obtained, with its confidence intervals (e.g. 95%). Using pattern recognition techniques (e.g. cluster analysis), local models can be reached. These models may possibly better approximate the material behaviour, where non-stochastic uncertainties of different types can be present. Moreover, the obtained models can be used to automatically "classify" new data items: while a regression model can only be calculated in the new "point", the classification reports the typicality of the new item with respect to the data-based model previously built.

When mixed data are available it is possible to fit switching regression models: simultaneous estimates for the parameters of c regression models, together with a fuzzy c-partitioning of the data, are feasible; this is in fact the essence of the idea introduced originally in [11, 12]. Fuzzy clusters can also give rise to "local" regression models. The overall model is then structured into a series of "if-then" statements: the conditional part of the statement includes linguistic labels of the input variables, while the action part contains a linear (or more generally non-linear) numerical relationship between input and output variables. Usually the clustering method applies to the formation of the conditional part.
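The classification-by-clusters idea can be sketched as follows. The prototypes and the region names are invented for illustration: a new data item is assigned to the region of system behaviour whose cluster prototype it is closest to.

```python
# Invented illustration of classifying a new data item against previously
# detected clusters: assign it to the nearest cluster prototype.
import math

# Hypothetical prototypes of two regions of system behaviour,
# e.g. obtained earlier by cluster analysis of operation data.
prototypes = {
    "low-load operation": [0.2, 0.1],
    "high-load operation": [5.0, 5.0],
}

def classify(x):
    """Report the region (cluster) of system behaviour x belongs to."""
    return min(prototypes, key=lambda name: math.dist(x, prototypes[name]))

print(classify([0.3, 0.0]))   # near the first prototype
print(classify([4.8, 5.3]))   # near the second prototype
```

A fuzzy classifier would instead report graded memberships in all regions, i.e. the typicality of the new item, rather than a single crisp label.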

Classification is often a basis for decision theory. Often the clustering method applies to the formation of the conditional part of such local models: the conditional part involves the input variables, while the action part contains a linear (or, more generally, non-linear) numerical relationship between the input and output variables. The overall model is then structured into a series of "if-then" statements.

Figure 3: Comparison between the use of pattern recognition and regression analysis (data analysis through neural network theory and fuzzy set theory leads to pattern recognition, classification and local models; regression analysis leads to a global model; both feed the assessment).

3. Pattern recognition techniques: theory and practice

In the following, a number of different techniques are presented that enable the realisation of pattern recognition systems and that have demonstrated, in some cases, advantages over the conventional ones. The focus will be on fuzzy and neural techniques. Short introductions on what a fuzzy set or a neural network is will be given; for exhaustive expositions the reader should refer to the cited literature.

3.1 Fuzzy clustering (objective function-based)

Fuzzy sets were introduced by Zadeh in 1965 to represent and manipulate data and information that possess non-statistical uncertainty. Bezdek, in his famous book [16], says: "Since their inception, fuzzy sets have advanced in a wide variety of disciplines: control theory, topology, linguistics. Nowhere, however, has their impact been more profound or natural as in pattern recognition."

What is a fuzzy set? People new to the field often wonder what the "set" is physically. In conventional set theory, any set of actual, real objects, say X, is completely equivalent to, and isomorphically described by, a crisp membership function such as mF. A common method to describe such a set is through a characteristic function, assuming value 1 if the object belongs to the set and 0 otherwise. There is, however, no set-theoretic equivalent of "real objects" corresponding to mF: fuzzy sets are always functions, from some universe of objects onto [0, 1]. A fuzzy set can be described as an extension of the classical one: its characteristic (now called membership) function mF can assume values no longer only in the set {0, 1} but in the whole interval [0, 1]. The membership function is the basic idea in fuzzy set theory: its values measure the degrees to which objects satisfy imprecisely defined properties. This feature allows for the description of uncertain properties of an object, simply by giving a membership value less than 1.
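The series of "if-then" statements with numerical action parts mentioned above (a Takagi-Sugeno-type structure) can be sketched as follows; the membership functions and the local linear models are illustrative assumptions:

```python
# Sketch of an "if-then" fuzzy model with numerical consequents: each rule
# pairs a fuzzy condition on the input with a local linear model, and the
# output is the membership-weighted average of the local models.
# All numbers here are illustrative assumptions, not values from the paper.

def ramp_down(x, a, b):  # 1 before a, 0 after b, linear in between
    return max(0.0, min(1.0, (b - x) / (b - a)))

def ramp_up(x, a, b):    # 0 before a, 1 after b, linear in between
    return max(0.0, min(1.0, (x - a) / (b - a)))

RULES = [  # (membership function, local linear model)
    (lambda x: ramp_down(x, 2.0, 6.0), lambda x: 1.0 * x + 0.0),   # IF x is SMALL
    (lambda x: ramp_up(x, 2.0, 6.0),   lambda x: -0.5 * x + 6.0),  # IF x is BIG
]

def ts_output(x):
    w = [mu(x) for mu, _ in RULES]
    return sum(wi * f(x) for wi, (_, f) in zip(w, RULES)) / sum(w)
```

Inside each fuzzy region the model behaves like its local linear law; in the overlap region the two laws are blended smoothly, which is what makes such a rule base a piecewise-smooth "local model" of the data.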

Bearing in mind the incompleteness of the analogy between real objects and membership functions, all of set theory has been "enlarged" to accommodate the extended definition: set operations like conjunction, disjunction and so on. For a detailed exposition of fuzzy set theory see [25].

An example is reported in Figure 4, where two fuzzy sets characterising the variable distance are shown. While it is clear that a distance bigger than 60 km is definitely BIG (at least in the sense of the definition of distance in this particular case), and a distance of 25 km is definitely MEDIUM, what about something in between? A gradual membership in the two sets is given: for example, a 50 km distance will have a 0.8 membership in the BIG set and a 0.2 membership in the MEDIUM set.

Figure 4: Fuzzy sets, example (the fuzzy sets MEDIUM and BIG over the variable distance, in km).

Fuzzy theory is therefore supposed to be a useful mathematical description of non-statistical uncertainty: it seems more reasonable to invest efforts in a new statement of the problem which can handle fuzzy information than to constrain the fuzzy problem into probabilistic frameworks. The question could now be: where to use fuzzy pattern recognition? The main lines can be summarised as follows:
=> insufficient information for implementing classical methods;
=> the expert's uncertainty about the exact membership of the object;
=> inherent characteristics of the objects which can be conveniently represented in terms of fuzzy sets;
=> the opportunity to include and process expert opinion in decision making and to handle partial inconsistency.
The replacement of statistical entities with fuzzy ones should nevertheless be done very carefully.

Clustering methods can be either hard (crisp) or fuzzy, depending on whether each feature vector (the vector of the co-ordinates in the data space) characterising an object belongs exclusively to one cluster or to all clusters to different degrees. The best-known algorithm, from which many variants derive, is the fuzzy c-means algorithm (FCM). The number of clusters C is derived from the domain knowledge on the data, or is a test value that can be changed on the basis of the results obtained. The exponent m takes the value 1 if a crisp clustering is to be performed, and growing values for a fuzzier characterisation of the clusters. The C-partition is a matrix where the membership values of all data items in each cluster are stored; the initialisation of this matrix can be done randomly or using a priori information, if available. Table 1 summarises the different steps performed in a typical clustering session.
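The two fuzzy sets of Figure 4 can be sketched as membership functions over distance. The 25 km and 60 km breakpoints follow the text, but the linear shape of the transition is an assumption; with it, a 50 km distance comes out roughly 0.3 MEDIUM / 0.7 BIG rather than the 0.2 / 0.8 quoted above, which depends on the exact shapes drawn in Figure 4:

```python
# Assumed piecewise-linear approximations of the Figure 4 fuzzy sets.
# Breakpoints: MEDIUM is certain up to 25 km, BIG is certain from 60 km.

def mu_medium(d_km):
    if d_km <= 25.0:
        return 1.0
    if d_km >= 60.0:
        return 0.0
    return (60.0 - d_km) / 35.0  # linear transition (assumption)

def mu_big(d_km):
    return 1.0 - mu_medium(d_km)
```

Any distance in the transition zone belongs gradually to both sets, which is the point of the example.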

A matrix norm to evaluate the distance of each data item from the clusters has to be fixed, and this choice gives rise to different characterisations of the clustering algorithm. If use is made of a Euclidean distance (as in the FCM algorithm), the cluster prototypes will be points (also known as cluster centres) and the algorithm will search for spherical clusters; different kinds of prototypes and distances yield different shapes for the searched clusters: hyperellipsoids, lines, planes, hyperspherical shells. The iterative process proceeds until the fuzzy partition no longer changes significantly, i.e. until the norm of the difference between the last two iterations falls below a pre-determined threshold.

Table 1: Objective function-based clustering procedure

• Fix the number of clusters C, 2 ≤ C ≤ n; fix m, m ∈ [1, ∞)
• Initialise the fuzzy C-partition U
• REPEAT
  update the parameters of each cluster prototype;
  update the partition matrix U
• UNTIL ||ΔU|| < ε

A typical example where the application of a fuzzy clustering algorithm is of use is that of the "butterfly" data set, shown in Figure 5. The two triangular regions have a common point in x8.

Figure 5: the butterfly data set (points x1, ..., x14; the two wings share the central point x8).

Clustering these points by a crisp objective-function algorithm might yield the picture shown in Figure 6, in which "1" indicates membership in the left-hand cluster and "0" membership in the right-hand cluster. Even though the butterfly is symmetric, the clusters in Figure 6 are not, because the point x8 "between" the clusters has to be (fully) assigned to either cluster 1 or cluster 2. Applying a fuzzy clustering algorithm, a membership of 0.5 in both clusters results for this point, which seems more appropriate.

3.1.1 Possibilistic clustering of hardness measurement data

Fuzzy clustering algorithms do not always estimate the parameters of the prototypes accurately. The main source of this problem is the probabilistic constraint used in fuzzy clustering, which states that the memberships of a data point across all clusters must sum to one. Within the framework of possibility theory this constraint can be relaxed [23]. This approach permits a noise point, far from all the prototypes of the clusters found, to be given a low membership in every cluster, and also characterises better those points which, by their characteristics, belong to more than one cluster. In the FCM algorithm, in fact, the membership of a point in a class is a relative number.
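The objective function-based procedure of Table 1, in its fuzzy c-means form, can be sketched as follows; the random initialisation, stopping threshold and test data are illustrative assumptions:

```python
import math, random

def fcm(points, c, m=2.0, eps=1e-4, max_iter=100, seed=0):
    """Minimal fuzzy c-means following the Table 1 procedure: initialise the
    fuzzy C-partition U, then alternate prototype and partition updates until
    the partition stops changing (||dU|| < eps)."""
    rng = random.Random(seed)
    n, dim = len(points), len(points[0])
    # Random fuzzy partition, each row normalised to sum to 1 over the c clusters.
    U = []
    for _ in range(n):
        row = [rng.random() for _ in range(c)]
        s = sum(row)
        U.append([u / s for u in row])
    for _ in range(max_iter):
        # Update prototypes: weighted means with weights u^m.
        centres = []
        for j in range(c):
            w = [U[i][j] ** m for i in range(n)]
            sw = sum(w)
            centres.append(tuple(sum(w[i] * p[k] for i, p in enumerate(points)) / sw
                                 for k in range(dim)))
        # Update the partition from the distances to the new prototypes.
        newU = []
        for p in points:
            d = [max(math.dist(p, ctr), 1e-12) for ctr in centres]
            newU.append([1.0 / sum((d[j] / d[l]) ** (2.0 / (m - 1.0)) for l in range(c))
                         for j in range(c)])
        delta = max(abs(newU[i][j] - U[i][j]) for i in range(n) for j in range(c))
        U = newU
        if delta < eps:
            break
    return centres, U
```

On two well-separated groups of points the returned prototypes settle near the group centres, and a point midway between them receives memberships close to 0.5 in both clusters, as in the butterfly example.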

Figure 6 and Figure 7: crisp and fuzzy clustering of the butterfly data set (the centres of cluster 1 and cluster 2 are marked).

Due to this fact, the membership of a point in a class depends on the memberships of the point in all the other classes, and thus indirectly on the total number of classes itself; the membership values therefore cannot distinguish between a moderately atypical member and an extremely atypical member. Noise points, which are often quite distant from the primary clusters, can drastically influence the estimate of the class prototypes, and hence the final partition.

Hardness-based temperature estimation

A set of experimental data has been extracted from [24] regarding the determination of hardness properties for two different ferritic steels, namely 2¼Cr1Mo and 1Cr½Mo. Hardness measures are used to estimate the temperature and, by means of the temperature, the remaining lifetime. The expression of the Sherby-Dorn parameter is

P = log t - C/T

where t is the time in hours and T is the temperature in kelvin. In the following, hardness will be indicated with H, while the Sherby-Dorn parameter will be indicated with P. Two derived expressions will be used in the paper:

T = C / (log t - P)    (1)
t = 10^(P + C/T)       (2)

Problems have been encountered in determining the material constant C, ultimately assumed to have a value of 10270 in the normalised condition.

The material under consideration is a 1Cr½Mo steel with the following composition (wt%, as legible in the source):

C     Si    S      P      Mn    Ni    Cr    Mo     V     W
0.27  0.58  0.014  0.069  0.45  0.75  0.05  <0.05  0.05

As reported in [24], this material does not exhibit a standard behaviour. The initial hardness is equal to 115, and a significant number of measurements lie above this threshold, as is clear from looking at Figure 8; this kind of behaviour is detected for temperatures above 625 °C and for short exposure times. The remaining data points show a progressive softening of the material with time, although there exists a region where the values maintain a stable level (up to values of P of about -8.5) and then start to decrease relatively rapidly. Such a behaviour requires a different characterisation of the hardness-dependent variable.
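Equations (1) and (2), with the material constant C = 10270 assumed above, can be applied directly once P has been estimated; a minimal sketch (the numerical values in the usage are arbitrary test inputs):

```python
import math

# Sherby-Dorn parameter and its two derived expressions, eqs. (1) and (2):
# log is base 10, t is in hours, T is in kelvin, C = 10270 as assumed in
# the text for the normalised condition.
C_CONST = 10270.0

def sherby_dorn(t_hours, T_kelvin):
    """P = log t - C/T."""
    return math.log10(t_hours) - C_CONST / T_kelvin

def temperature_from_p(t_hours, p):
    """Eq. (1): T = C / (log t - P)."""
    return C_CONST / (math.log10(t_hours) - p)

def time_from_p(T_kelvin, p):
    """Eq. (2): t = 10^(P + C/T)."""
    return 10.0 ** (p + C_CONST / T_kelvin)
```

The three functions are mutually consistent: estimating P from a (t, T) pair and inverting through either (1) or (2) recovers the original value.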

The application of a clustering method provides a way to find out the possible structure of these regions from the experimental data. In this case the possibilistic clustering approach has been used: two clusters have been assumed and the algorithm suggested in [22] employed, initialising the procedure with the FCM algorithm. The clustering method can effectively detect the two regions and a clear threshold between them. An example of approximation is given using two second-order regression models, one per cluster; the result is shown in Figure 9. As a test comparison, and to provide a way of evaluating the performance of the obtained system, an approximation of the data is reported in Figure 8 together with its 95% confidence region: it is easy to see how the Gaussian fit can hardly deal with the hardness values above the initial one.

Figure 8: Gaussian fit for the steel B data (hardness versus the parameter P, with the 95% confidence region). Figure 9: steel B regression models (second-order models for cluster 1 and cluster 2; the initial hardness is marked).

3.2 Fuzzy approximation of inspection intervals

In this example the relationship between the damage class, derived from metallographic replicas, and the expired life as inputs and the remaining life as output is approximated by means of a rule-based system. In the case of relationships that are difficult to model, due to the lack of analytical insight into the mechanisms of system behaviour or to their high non-linearity, such an approximation can be of great advantage.

3.2.1 Adaptive fuzzy systems to build a rule-based classifier

In many data processing problems (control, signal processing, data analysis) the information concerning design, evaluation, realisation, etc. can be classified into two main types: numerical information (e.g. sensor measurements) and linguistic information (expert opinion). If both kinds of resources are to be used, there is a need to integrate them in a common framework. The solution is the realisation of a fuzzy rule base where both types of information are integrated; the rules are automatically built from the experimental data. In [26] a general approach to solve this problem is proposed: the adaptive fuzzy system estimates fuzzy rules from sample data, which reduces to patch or cluster estimation in the data space. It realises a graph cover with local averaging and in this way ties natural language and expert rules to state-space geometry; a fuzzy system is unique in that it can tie vague definitions to the mathematics of curves. The "fuzziness", or multivalence, of the sets comes into play when output sets overlap. It is possible to prove that the generated fuzzy system is a universal approximator from a compact set Q ⊂ R^n to R, i.e. it can approximate any real continuous function defined on Q to any accuracy. The procedure consists of a five-step algorithm, realised through the actions described in Table 2.

Table 2: Rule generation algorithm

Step 1 - Division of the input and output spaces into fuzzy regions: to each variable a domain interval is assigned, where the variable values are expected to lie. Each domain interval is then divided into regions, each characterised by a linguistic label and a membership function (fuzzy set).

Step 2 - Generation of fuzzy rules from the given data pairs: for each variable (input or output) value, the region (linguistic label) with the maximum membership value is derived and a new rule of the following type is created (assuming a two-input, one-output system): IF x1 is BIG and x2 is SMALL THEN y is MEDIUM, where BIG, SMALL and MEDIUM are defined as described in Step 1.

Step 3 - Assignment of a degree to each rule: the membership values of each variable, plus any a priori information about the data pairs, contribute to assigning a degree to each rule. This is done to resolve conflicts between rules that share an antecedent but reach different conclusions: only the rule with the highest degree is kept. A kind of filtering action is thus performed; such a procedure enables collecting only the most significant part of the data, retaining only the possible "mean behaviour" of the system.

Step 4 - Creation of a combined rule base: all the rules coming from the data pairs and from the linguistic descriptions ("expert rules") are integrated in the final rule base.

Step 5 - Determination of a mapping based on the combined rule base: a suitable defuzzification strategy allows the obtained rule base to be used to infer conclusions from new input data coming from the system.

The Neubauer approach [28] schedules reinspections on the basis of the damage class alone; a description of the Neubauer classes of damage is reported in Table 3. The approach proposed by EPRI in [29] takes into consideration not only the damage class, as in the Neubauer approach, but also the service life: in this way a functional relation between the next inspection time, on one side, and the life expended and the damage class, on the other, is established. In the illustrated case the relation is linear and is based on approximations considering a worst-case treatment of the experimental data plus some safety factors; as a consequence, longer inter-inspection intervals are obtained (see Figure 10).

3.2.2 Problem statement and data available

In [27] a set of tests has been performed, at temperatures in the range 535-635 °C and using both uniaxial and biaxial (torsion) stress states, on two casts of 1Cr½Mo steel and one cast of 2¼Cr1Mo. All three casts were heat treated in order to obtain coarse-grained bainitic microstructures representative of the coarse-grained HAZ of header welds. The damage classification at different life fractions has been estimated: surface replicas have been taken in order to apply the damage classification rule. This set of experimental data has been used to check the performance of the new approach and to have a test-bed for prediction methods: a test of the results has been performed against those reported in the cited work. The comparison is particularly meaningful because the cited studies were the basis of the Californian Electric Power Research Institute (EPRI) approach for inspection scheduling, namely the A-parameter method [27].
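Steps 1 to 3 of the Table 2 rule-generation procedure can be sketched as follows; the triangular region layout and the sample data pairs are illustrative assumptions:

```python
# Sketch of Table 2, steps 1-3: fuzzy regions over a common normalised
# domain, one rule per data pair, conflicts resolved by rule degree.
# The region breakpoints and data are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Step 1: each variable gets labelled fuzzy regions over its domain interval.
REGIONS = {"SMALL": (-0.5, 0.0, 0.5), "MEDIUM": (0.0, 0.5, 1.0), "BIG": (0.5, 1.0, 1.5)}

def best_label(x):
    return max(REGIONS, key=lambda lab: tri(x, *REGIONS[lab]))

def generate_rules(data_pairs):
    """Steps 2-3: one rule per (x1, x2, y) pair; when two rules share the
    antecedent, keep only the one with the highest degree (product of the
    memberships)."""
    rule_base = {}
    for x1, x2, y in data_pairs:
        labels = tuple(best_label(v) for v in (x1, x2))
        out = best_label(y)
        degree = (tri(x1, *REGIONS[labels[0]]) * tri(x2, *REGIONS[labels[1]])
                  * tri(y, *REGIONS[out]))
        if labels not in rule_base or degree > rule_base[labels][1]:
            rule_base[labels] = (out, degree)
    return rule_base
```

Two data pairs that map to the same antecedent collapse into the single rule of highest degree, which is the filtering action described in Step 3.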

The work illustrated in this section tries to formulate a generic relation of this kind, approximated by means of a fuzzy system.

Analysis method. To find a relation between the life spent, the damage rating and the remaining life, an adaptive fuzzy system has been used, as previously described. The rule-based system has been designed to accommodate a qualitative input, in terms of damage class, and a crisp input, the life expended; the output consists of a prediction of the expired life fraction of the component. The membership functions have been manually tuned on the basis of the indications coming from the experimental data and the material behaviour; the fuzzy sets describing the expired life and the damage class are reported in Figure 11. The results are limited to the 1Cr½Mo steel, because for the other material too few experimental data were reported. While the reliability of such an approximation clearly depends on the quality of the input data, as for every assessment method based on experimental results, the fuzzy representation of the variables provides a tolerance against the possible uncertainties, e.g. in the determination of the damage class.

Table 3: Description of the damage classes (from [27]); the actions take no consideration of the service life.

Undamaged: no creep damage detected.
A, Isolated: isolated cavities are observed; it is not possible to deduce the direction of the maximum principal stress from the damage seen. Action: none.
B, Oriented: cavities are observed; a clear alignment of damaged boundaries can be seen, indicating the axis of the maximum principal stress. Action: re-inspection in 1.5-3 years.
C, Microcracked: cavities are observed on boundaries normal to the maximum principal stress, often with multiple cavities on the same boundary; some boundaries have separated, due to the interlinkage of the cavities on them, to form microcracks. Action: repair or replacement within 6 months.
D, Macrocracked: in addition to the cavities and microcracks being observed, microcracks have joined together and widened to form macrocracks many grain boundaries long. Action: immediate repair.

3.2.3 Results

A software module has been programmed to implement and test the described analysis method. This module, after the necessary tuning and validation, will become one of the tools belonging to a knowledge-based system on material properties under development at MPA Stuttgart. In Table 4 the input data and the results are reported for a subset of cases, together with a different estimate proposed in the original report [27]. It is possible to see that the proposed approach gives good predictions and, moreover, is always conservative.
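The two scheduling philosophies compared here can be sketched side by side. The class-based actions follow Table 3; the life-dependent interval mimicking the EPRI scheme of Figure 10 is an illustrative assumption (the base intervals and the 40-year design life are invented numbers, not values from the paper):

```python
# Neubauer-style actions (Table 3) versus an EPRI-style interval that also
# accounts for the service life expended.  The base intervals and design
# life in epri_style_interval are illustrative assumptions.

NEUBAUER_ACTION = {  # damage class -> action, no consideration of service life
    "A": "no action",
    "B": "re-inspect in 1.5-3 years",
    "C": "repair or replace within 6 months",
    "D": "immediate repair",
}

def epri_style_interval(damage_class, life_expended_years,
                        base={"A": 30.0, "B": 10.0, "C": 3.0, "D": 0.0},
                        design_life=40.0):
    """Next-inspection interval shrinking linearly with the expended life
    (a worst-case linear relation plus safety factors, as described above)."""
    frac_left = max(0.0, 1.0 - life_expended_years / design_life)
    return base[damage_class] * frac_left
```

The point of the comparison is that a class-A component early in life gets a long interval, while the same class late in life does not, which the class-only table cannot express.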

Figure 10: EPRI reinspection scheme (adapted from [3]); reinspection interval (years) versus service life expended (years), for the EPRI and the Neubauer criteria.

Table 4: Data set and predictions

Damage class  Time expired (h)  Remaining time, real value (h)  Best-estimate approach (see [27]) (h)  Rule-based (h)
A             2008              6621                            5588                                   5980
B             4017              4612                            4720                                   4454
C             6195              2434                            3755                                   2349
D             7712              918                             2377                                   56
D             8629              0                               308                                    0

3.2.4 Remarks

The preliminary results obtained by evaluating the performance of the elaborated fuzzy models are encouraging; in order to bring them to really support industrial applications, however, further investigations are needed to assess the conservativeness and the generality of the procedures. On the other side, the most relevant practical limitation of the presented results regards the data. The data necessary for the type of analysis presented were available mainly for the 1Cr½Mo steel, and the authors are not aware of similar sets of data for other materials (e.g. 12%Cr steel). The behaviour of these materials is different and, hence, the heuristic values extracted would be different; further research in this direction is thus necessary.

3.3 Neural networks in data mining

Artificial neural networks are massively parallel interconnections of simple neurons that function as a collective system. Neural nets are designed in an attempt to mimic the human brain, in order to emulate human performance and thereby function intelligently; it has been observed that many problems in pattern recognition are solved more easily by humans than by computers, perhaps because of the basic architecture and functioning mechanism of the human brain. These networks may be broadly categorised into two types:
• those that learn adaptively, updating their connection weights during training;

• those whose parameters are time-invariant, i.e. whose weights are fixed initially, so that no updating occurs.

For the purposes of this paper a network of the first kind will be considered. The multilayer perceptron (MLP) consists of multiple layers of simple, two-state, sigmoid processing elements (nodes), or neurons, that interact using weighted connections (see Figure 12). After a lowermost input layer there are usually any number of intermediate, or hidden, layers, followed by an output layer at the top. There are no interconnections within a layer, while all neurons in a layer are fully connected to the neurons in the adjacent layers. The weights measure the degree of correlation between the activity levels of the neurons that they connect.

♦ Multilayer perceptron using backpropagation of error

During training, each pattern of the training set is used in succession to clamp the input and output layers of the network. An external input vector is supplied to the network by clamping it at the nodes of the input layer; for conventional classification problems, the appropriate output node is clamped to state 1 while the others are clamped to state 0. This is the desired output supplied by the teacher. The training procedure has to determine the internal parameters of the hidden units, based on its knowledge of the inputs and the desired outputs, so that in the process of learning the error may be minimised. Training hence consists of searching a very large parameter space and is therefore usually rather slow. A sequence of a forward and a backward pass constitutes a cycle, and such a cycle through the entire training set is termed a sweep. After a number of sweeps through the training data, the network is supposed to have discovered (learned) the relationship between the input and output vectors of the training samples. The worthiness of a network lies in its inferencing, or generalisation, capabilities over test sets: "Connectionist learning procedures are suitable in domains with several graded features that collectively contribute to the solution of a problem: during training, a network may discover important underlying regularities in the task domain" [15].
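The forward pass, backward pass and weight update just described can be sketched in a minimal one-hidden-layer perceptron; the layer sizes, learning rate and the exclusive-OR toy data in the usage are illustrative choices, not the settings used later in the paper:

```python
import math, random

# Minimal multilayer perceptron with one hidden layer, trained by
# backpropagation of error.  Sizes and learning rate are illustrative.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class MLP:
    def __init__(self, n_in, n_hid, seed=0):
        rng = random.Random(seed)
        # Each row of w1 holds the weights of one hidden unit plus its bias.
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
        self.w2 = [rng.uniform(-1, 1) for _ in range(n_hid + 1)]

    def forward(self, x):
        self.h = [sigmoid(w[-1] + sum(wi * xi for wi, xi in zip(w, x))) for w in self.w1]
        self.o = sigmoid(self.w2[-1] + sum(wi * hi for wi, hi in zip(self.w2, self.h)))
        return self.o

    def train_pattern(self, x, target, lr=0.5):
        o = self.forward(x)                       # forward pass
        delta_o = (target - o) * o * (1.0 - o)    # output-layer error signal
        for j, hj in enumerate(self.h):           # backward pass: hidden layer
            delta_h = delta_o * self.w2[j] * hj * (1.0 - hj)
            for i, xi in enumerate(x):
                self.w1[j][i] += lr * delta_h * xi
            self.w1[j][-1] += lr * delta_h        # bias update
        for j, hj in enumerate(self.h):           # output-layer weights
            self.w2[j] += lr * delta_o * hj
        self.w2[-1] += lr * delta_o
        return (target - o) ** 2
```

One call of `train_pattern` per pattern, over the whole training set, is one sweep; repeated sweeps drive the squared error down.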

In the testing phase the neural net is expected to utilise the information encoded in its connection weights to assign the correct output labels to the test vectors, which are now clamped only at the input layer. The number of units in the output layer H corresponds to the number of output classes. It should be noted that the optimal number of hidden layers, and the number of units in each of such layers, are mostly empirical in nature.

Figure 12: Multilayer perceptron structure (input layer 0, hidden layers h and h+1, output layer H; adapted from [15]).

MLP models using backpropagation have been applied to the exclusive-OR problem, to recognising familiar shapes in novel positions, to discovering semantic features, to recognising written text and speech, and to the identification of sonar targets; a more detailed description, with some application examples, can be found in [15]. Among the variety of neural network architectures, the hierarchical (multilayer) networks with a supervised learning algorithm have been applied to various engineering problems [19-20]. Attractive features of these networks in industrial applications can be summarised as follows:
a) the automatic construction of non-linear mapping functions from multiple input data to multiple output data is possible;
b) the trained network attains a capability of "generalisation", a kind of interpolation, such that a properly trained network estimates appropriate output data even for input data sets not belonging to the training patterns;
c) the trained network operates quickly in an application process.

♦ Disadvantages

There are some disadvantages in using neural networks (NN) for data mining. One of the objectives of data mining is to generate knowledge in a form suitable for verification or interpretation by humans; the knowledge generated by a NN, however, is not explicitly represented in the form of rules or conceptual patterns, but implicitly, in the network itself, as a vast number of weights. Moreover, learning processes in NN are very slow compared, for example, to symbolic learning systems: benchmarks available in the literature regarding the ID3 learning algorithm [13] show NN being outperformed in learning speed by a factor of 500 to 10,000,

but this mainly concerns single-layer networks, which model simple, linear functions. There has been some research on transforming the knowledge encoded in a trained network into a format better suited for human reading. Moreover, it is difficult to incorporate any domain knowledge or user interaction in the learning process; hence NN perform best in areas where no additional information is available, which is generally not the case in data mining.

3.3.1 An example: prediction of creep-induced failure

In this section, analyses of case studies on structural component failures in power plants using hierarchical (multilayer) neural networks are described. Using selected test data about case studies stored in the structural failure database of a knowledge-based system, the network is trained to predict possible failure mechanisms, i.e. creep-, overheating (OH)- or overstressing (OS)-induced failure, inferring operating conditions and some other information. An analysis module for case studies using the neural network has also been developed and successfully implemented in the knowledge-based system.

3.3.1.1 Selection process of 36 case studies

The network is trained to predict the possible occurrence of creep-, overheating (OH)- or overstressing (OS)-induced failure. Overheating-induced failure is defined as "failure caused by higher temperature beyond the calculated operating temperature due to different reasons", while overstressing-induced failure is defined as "failure caused by higher stress beyond the calculated stress due to different reasons". The failure causes were identified through careful observation of the change of the material micro-structure in each case study. At first, 41 case studies, which contain complete information except for the number of start-ups and shutdowns, are selected out of 72 case studies; 36 of them are then selected and utilised to train the network. It should be noted that, because of the shortage of available case studies, an appropriate selection of the case studies and of the input parameters to be used for the network training was required to attain high accuracy. A larger collection of case studies will resolve such problems and will improve the accuracy of the analyses. Moreover, the analysis method presented here can also be applied to the prediction of erosion-, corrosion- and fatigue-induced failures, if case studies and related information are available.

3.3.1.2 Network architecture and input/output data

An ordinary three-layer network employing the back-propagation algorithm with the momentum method is trained in such a way that it can predict the possible failure mechanisms. The operating conditions and other parameters (component dimensions, material properties and others) are given to the input units of the network, while the Yes/No value (1 or 0) regarding the occurrence of creep-induced failure is shown to the network as the teacher signal. Through some preliminary tests, the network parameters are determined as follows: learning rate α = 0.9, momentum factor m = 0.3, number of hidden units = 10, constant of the sigmoid function U0 = 1, range of initial weights from -1 to 1. The network training is stopped when the estimation capability for both the training and the test patterns reaches an almost steady state; the total number of training iterations roughly ranges from 5,000 to 10,000. Several combinations of input parameters are examined. Three typical combinations are shown below:

Combination 1: T/Tc, σ/σc and Log10H.
Combination 2: T/Tc, σ/σc, Log10H, T, P, d, t and the material classes.
Combination 3: Log10H, T, P, d, t and the material classes.

The derived parameters are defined as follows:
a) the operating temperature divided by the creep temperature after the operating hours (T/Tc);
b) the stress factor divided by the creep rupture stress at the operating temperature after the operating hours (σ/σc), where the stress factor σ, a nominal hoop stress, is simply defined as σ = 0.5 P (d - t) / t, using the operating pressure P (MPa), the pipe diameter d (mm) and the pipe thickness t (mm);
c) the logarithmic operating hours (Log10H), with H in hours;
d) the material classes: carbon steel (CA), low-alloy steel (LA), high-alloy steel (HA) or austenitic steel (AU).
Among those parameters, T (°C), P, d, t and a material name (or a material class) are included in the original case format [17], while the material properties are taken from a material database of the knowledge-based system. All input parameters are normalised into a unit range of (0, 1), parameter by parameter, before being given to the network.

Besides learning the training patterns, the network also has to attain a capability of "generalisation", i.e. a capability of estimation for the test patterns. This last feature is not easy to achieve in the present problem, because of the small number of available case studies, and it is carefully examined for all 36 cases with the following procedure: when we have 36 cases, we give 35 cases to the network as training patterns, repeatedly taking one case as the test pattern, and we test whether the network predicts or classifies the one subtracted test pattern correctly.

3.3.1.3 Results

The results obtained using the three combinations of input parameters are shown in Table 5; in each bracket, the first number denotes the number of successful predictions and the second the number of cases.

Table 5: Results of the prediction of creep-induced failure using the neural network

Input parameters                                   Output           Nr. of      Learning patterns:           Test patterns:
                                                                    iterations  all cases / creep cases      all cases / creep cases
T/Tc, σ/σc, Log10H                                 creep/not creep  10,000      89% (32/36) / 100% (19/19)   78% (28/36) / 84% (16/19)
T/Tc, σ/σc, Log10H, T, d, P, t, material classes   creep/not creep   5,000      97% (35/36) / 100% (19/19)   86% (31/36) / 95% (18/19)
Log10H, T, d, P, t, material classes               creep/not creep   5,000      94% (34/36) /  95% (18/19)   67% (24/36) / 79% (15/19)

It can be clearly seen from the table that the training patterns of the case studies are easily learned by the network.
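The leave-one-out check just described can be sketched as follows; the trivial majority-vote classifier standing in for the trained network is an assumption, used only to keep the example self-contained:

```python
# Sketch of the leave-one-out procedure: with n cases, train on n-1 and
# test on the held-out case, repeating so that every case is the test
# pattern exactly once.  The majority-vote "model" is a stand-in for any
# trainable classifier (an assumption for illustration).

def majority_train(cases):
    labels = [lab for _, lab in cases]
    return max(set(labels), key=labels.count)

def leave_one_out(cases, train=majority_train, predict=lambda model, x: model):
    hits = 0
    for i in range(len(cases)):
        held_x, held_y = cases[i]
        model = train(cases[:i] + cases[i + 1:])
        if predict(model, held_x) == held_y:
            hits += 1
    return hits / len(cases)
```

Swapping `train` and `predict` for the network's training and forward pass gives exactly the 36-fold check used for Table 5.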

All of the 36 cases are taken as the test pattern in turn. In the table, a success is counted when the output is greater than 0.6 for the correct answer of 1, or smaller than 0.4 for the correct answer of 0. It is clear from the table that the neural network successfully predicts the occurrence of creep-induced failure for all the combinations of input parameters. Among them, combination 2 gives the best result, i.e. fewer iterations and higher accuracy. It is also clearly seen that combination 3, which excludes T/Tc and σ/σc, results in a less accurate prediction: these two parameters seem to play an important role in the prediction.

4. Development of an intelligent data mining system

In the framework of two BRITE projects (BE 5936 ORACLE and BE 5245 C-FAT), parts of the general structure are under development. In the following sections an overview of the two systems is given.

4.1 FRACTAL

Automatic Data Assessment (ADA) is a tool developed for supporting the analysis of present cases in a case-studies database by means of neural networks. For the FRACTAL system (BRITE project BE 5936 ORACLE) the problem of stress corrosion cracking of turbine rotors and disks was selected for the qualification of the automatic data assessment [3]. A typical ADA session can be summarised in the following points:

• The end-user fills in the parameters of his present case in the database, using the available forms. By the definition of mandatory fields a minimum of data is guaranteed.
• By clicking on the menu item "Query relevant cases" a default query definition is provided by the system, which can be changed by the end-user. This leads to a subset of relevant cases relating to the present case.
• The starting point is a pre-defined master matrix of input and output values for a specific corrosion problem (e.g. SCC at turbine parts), previously defined in collaboration with corrosion experts for several fields, to assure a correct problem-analysis structure from the point of view of corrosion engineering. This matrix is now compared with the existing data in the subset; the matrices of inputs and outputs will be a selected number of influence parameters and objectives. If possible, the system proposes suitable matrices for the neural network analysis, assigned to ranks which allow the highest possible number of data records in the queried subset to be used. The user selects a matrix by rank and is enabled to remove some of the input and output parameters.
• By clicking on the menu item "Automatic Assessment" a check for a possible neural network analysis begins: the system checks whether the parameter combination of the present case allows an interpolation by the use of neural networks; otherwise the neural network analysis procedure is cancelled. The criteria are:
  • at least 1 output parameter is available;
  • more than 50% of the queried relevant data sets are used for the neural network analysis;
  • at least 3 times more data sets than input parameters are available.
  The network will then be automatically configured and trained.
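The three admissibility criteria above can be sketched as a single check; the function name and the argument layout are assumptions, not part of the FRACTAL interface:

```python
# Sketch of the "Automatic Assessment" admissibility check described above.
# Names and argument structure are illustrative assumptions.

def nn_analysis_allowed(n_outputs, n_used_records, n_relevant_records, n_inputs):
    """True when all three criteria hold: at least one output parameter,
    more than 50% of the queried relevant data sets usable, and at least
    three times more data sets than input parameters."""
    return (n_outputs >= 1
            and n_used_records > 0.5 * n_relevant_records
            and n_used_records >= 3 * n_inputs)
```

If any criterion fails, the neural network analysis is cancelled, as described above.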

• The system will create a suitable network architecture (criteria: accuracy of prediction in a combined training and test prediction phase, learning rate, etc.). The neural networks used in FRACTAL are automatically created according
  • to the specific problem to be analysed ("Master Matrix"),
  • to the available data in the present case to be assessed,
  • to the available data in the relevant case studies in the database, and
  • to the user's selection.
Because it cannot be expected from the end-user, whose expertise is about the problem itself, to have experience in these settings, and because there is nevertheless a number of neural network parameters to be set up to receive reliable results from the neural networks, an automatic evaluation and optimisation of these parameters is performed by the system.
• The neural network prediction for the present case will be done and displayed together with the number of case studies, the used input parameters and the list of all possible relevant influence parameters from the master matrix, serving as a basis for the further detailed analysis.

Figure 13: Two screenshots of the FRACTAL wizard

4.2 Expert Miner

Database mining is one of the fast growing technologies of the last years, and probably of the next years, in the field of data processing. As shown in the previous sections, the application of different techniques can bring to the formulation of useful engineering models. The analysis illustrated in each of the three examples, however, was performed by an expert in the particular technique. As it is possible to realise from the gigantic flow of literature existing on data mining, the non-expert has to deal with a number of methods and techniques, which are all already available as software packages (see e.g. [14]) but not equally effective in different cases. The need to provide to everyone the capacity to explore his own data sets without being a professional analyst and, on the other side, to support the professional analyst in his everyday job, leads to a solution based on intelligent software advising the choice of the right mining method on the basis of the data available and the required results. This module, which is under development in the BRITE project BE 5245 C-FAT [4], will enable the user, a trained technician in his/her own field (in the case of C-FAT a metallurgist), to perform data mining tasks using advanced techniques. The system will support the user through a KBS that analyses the user requests in terms of input-output data, resulting model requirements and data available, returning an advice on which method or technique is more suitable to solve the current problem.
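What "automatically configured and trained" could look like can be sketched from scratch: several candidate hidden-layer sizes are trained by plain backpropagation and the most accurate network is kept. The candidate sizes, the toy data and the accuracy-only selection rule are illustrative assumptions; FRACTAL's actual procedure also weighs, e.g., iteration counts and the learning rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(X, y, n_hidden, epochs=2000, lr=0.5):
    """One-hidden-layer MLP trained with plain backpropagation (sigmoid units)."""
    n_in = X.shape[1]
    W1 = rng.normal(0, 1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 1, (n_hidden, 1));    b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)
        out = sig(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)      # error gradient at output layer
        d_h = (d_out @ W2.T) * h * (1 - h)       # backpropagated to hidden layer
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    return lambda X_: sig(sig(X_ @ W1 + b1) @ W2 + b2)

def auto_configure(X_train, y_train, X_test, y_test, candidates=(2, 4, 8)):
    """Try several hidden-layer sizes and keep the most accurate network."""
    best = None
    for n_hidden in candidates:
        net = train_mlp(X_train, y_train, n_hidden)
        acc = np.mean((net(X_test) > 0.5) == (y_test > 0.5))
        if best is None or acc > best[0]:
            best = (acc, n_hidden, net)
    return best

# toy "failure / no-failure" data: XOR-like, so a hidden layer is required
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])
acc, n_hidden, net = auto_configure(X, y, X, y)
print(acc, n_hidden)
```

The held-out evaluation step is the analogue of the "combined training and test prediction phase" named in the architecture criteria.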

No one can assure that a different technique could not bring a better result. Currently a research effort is starting to provide some indexes of performance to assess which approach should be used in the analysis of the data available; tests on small data samples could give a first advice in this respect. To bring these new methods into industrial practice, a not-so-steep learning curve should be provided directly to the plant engineer. This can be achieved by realising intelligent systems that can support an end-user in managing these powerful but not always easy-to-use tools. It is a matter of fact that such an innovative system, if not supported by ease of use, will not even be considered in a normally conservative environment like that of the power and process industry. Nonetheless there is a growing industrial interest in such systems, because the possible economic advantages can be quantified in several millions of dollars each year in some cases. The authors consider the friendliness of the user interface an important issue for the system to be accepted in industrial practice: the use of appropriate tools will enhance the user interface, making it more intuitive and effective. An example is the presence of intelligent flowcharts, i.e. flowcharts which can interact with the user, asking for input and proposing different (pre-programmed) action paths (see Figure 15).

This first step in the direction of a new advanced system is illustrated in Figure 14. The Advisor KBS will include the rules that are usually applied in choosing a search method, in terms of, e.g., the relationships to be searched, the methods' applicability in terms of their minimal requirements to operate, and their known drawbacks with particular data sets or in realising particular tasks. If, for example, assessed results are not available, it is not possible to apply supervised learning, but unsupervised learning (e.g. clustering) techniques should be used.

Figure 14: The Advisor, acting as an intelligent interface between the user, the set of methods (e.g. ID3/CN2, CBR, NN) and the database (intelligent queries), assessing the applicability and effectiveness of the methods on the data available and returning the extracted models

The end-user will formulate the objectives of his analysis. The system, acting as an intelligent interface between the user and the set of methods and the database, will check which data are available. On the basis of the data and of the type of task to be accomplished, the applicable methods will be chosen and their effectiveness in the particular case evaluated. Interaction with the user is also important in the Advisor behaviour: until now the plant engineer, the domain expert, had no possibility to perform the analysis personally, but was only indirectly active. The feature selection activity, e.g., is a very critical one. The Advisor, on the basis of a first search in the database, will propose a set of features; the user can, at this point, select some of them, furnishing some bits of domain knowledge. Moreover, intelligent querying and incremental browsing optimisation techniques are going to be integrated in this part of the module.
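A few Advisor-style rules can be sketched as plain IF-THEN logic. The rules, categories and the record-count threshold below are invented for illustration; they are not the actual C-FAT rule base:

```python
def advise(task, has_labels, n_records):
    """A few illustrative Advisor-style rules for choosing a mining method."""
    if task == "grouping" or not has_labels:
        # no assessed results available -> supervised learning is impossible
        return "unsupervised: clustering (e.g. fuzzy c-means)"
    if task == "classification":
        if n_records < 50:
            return "case-based reasoning (too few records to train reliably)"
        return "supervised: decision tree or neural network"
    if task == "regression":
        return "supervised: neural network"
    return "no applicable method; refine the query"

print(advise("classification", has_labels=False, n_records=200))
# -> unsupervised: clustering (e.g. fuzzy c-means)
```

The first rule is exactly the example given in the text: without assessed results there are no labels, so only unsupervised techniques remain applicable.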

Figure 15: Expert Miner knowledge-based flowchart screenshot (extraction of relevant data sets; formulation of the initial feature set; gathering/generating of labeled and unlabeled data; feature selection; cluster analysis; classifier design; evaluation of classification accuracy)

4.3 Technical and economical evaluation of the integrated system

Recurring failures, i.e. repeated failures with similar root causes and damage/fracture mechanisms, represent more than 90% of all fatigue and creep related failures in power and process plants. The particular case histories targeted for the present project proposal concern those related to components exposed to fatigue and/or creep loading conditions. For the target components a failure can represent huge direct and/or indirect losses. For example, a recent steam pipe failure in a German power plant involved over 5 MECU in replacement costs, over 3 MECU in costs for investigation and analysis, and over 10 MECU in indirect costs due to loss of production.

The traditional means of ensuring plant safety and availability by efficient diagnosis is based on learning from prior experience with operational problems, including service failures. This knowledge is now usually stored as large collections of case histories in paper form, mainly used for archiving purposes. Due to the lack of suitable means of managing large quantities of information, it is today not possible to utilise failure cases efficiently, even in-house, for solving present failure analysis problems. In the absence of an appropriate means to manage the bulk of failure cases, the knowledge has been mainly transferred through failure analysis experts (as personnel training or as item-specific reports), or in publications on failure cases and analysis. These vehicles of transfer are limited in scope and content, and are also inefficient for solving immediate specific problems. For example, the most extensive books on industrial failure analysis, such as the Metals Handbook (Volume on Failure Analysis and Prevention), or technical articles on the subject in relevant engineering journals, on average only contain references to a few hundred failure cases, with only sketchy background information.

Furthermore, as power and process plants are becoming increasingly automated, they are operated with smaller core personnel for a given plant capacity, despite this still being about 4 times larger in Europe than in the USA. This trend threatens to seriously reduce the available resources for trouble shooting and failure prevention. Consequently, the end-users will need additional fast and cost effective support tools to improve the effectiveness of failure prevention and diagnosis, and to allow the storage, utilisation and effective management of company-specific in-house experience. Through the application of a suitable data mining system, the following economical benefits can be foreseen (the evaluations are for a medium size European utility with 3,000 to 4,000 MWe of installed thermal capacity):

• improved failure prevention and life management, as well as reduced loss of production, unexpected operational deviations and consequent loss in efficiency, in European power and process plants, to save 1% of the related cost, or about 5 MECU per year;
• a reduction of maintenance costs in extensively automated plants with reduced O&M personnel, to save 1 to 4% of the related cost, or 1 to 4 MECU per year.

The benefit to the environment is anticipated through a reduction of the emissions from sub-optimally operating plants, which increase emissions per MWh. For example, a typical fossil power plant every year produces for each MWe some 50 tons of ash and slag, 2.50 tons of SOx equivalent, 10 tons of NOx, about 1000 tons of CO2, and up to 30,000 GJ of waste heat. Assuming that 1% of these emissions are avoided by more optimal operation and maintenance, this amounts to 1,500 tons of ash and slag, 75 tons of SOx (or desulphurisation by-products), 300 tons of NOx, 30,000 tons of CO2, and 900,000 GJ of waste heat avoided per year.

5. Conclusions

The most important result of the research described in the paper is probably the "proof" that the applied advanced data mining methods (despite being based on pure numerics) are capable to "discover" and describe (analytically) the complex qualitative interrelationships among the material parameters relevant for the life assessment of high temperature power plant components. The architecture of a new integrated system for data mining in databases has been outlined. Its objectives are not general, but mainly related to application in power and process plants for problems of creep, fatigue and corrosion of metallic materials. Some parts of the proposed architecture have already been realised, or are in various phases of design/implementation. The increasing availability of large databases of material properties and of failure case histories, together with the effectiveness of the preliminary results obtained, makes the described system very attractive not only as an applied research tool but also as a significant engineering tool in the industrial environment.

6. References

[1] Jovanovic, A., Lucia, A. C.: The SP249 Project and the SP249 Knowledge-Based System as Steps Towards the de facto Standardisation of Power Plant Component Life Assessment Practice in Europe. In: Knowledge-Based (Expert) System Applications in Power Plant and Structural Engineering (Proc. SMiRT Post Conference Seminar Nr. 13), EUR 15408 EN, Joint Research Centre of the European Commission, 1994.
[2] Jovanovic, A.: Multi-Utility Projects ESR-VGB and ESR-International: Integrated Knowledge-Based Systems for the Remaining Life and Damage Assessment. 20. MPA-Seminar, 6. und 7. Oktober 1994, Stuttgart, Germany, pp. 459-464.
[3] Schäfer, P., Vancoille, M., Bogaerts, W.: FRACTAL - An Intelligent Software System for Failure Analysis of Metallic Components Susceptible to Corrosion Related Cracking. 20th MPA-Seminar, October 6-7, 1994, Stuttgart, Germany.
[4] Holdsworth, S.R. (1994) BRITE-EURAM C-FAT Project BE 5245: KBS-aided Prediction of Crack Initiation and Early Crack Growth Behaviour Under Complex Creep-Fatigue Loading Conditions. 20th MPA-Seminar, October 6-7, 1994, Stuttgart, Germany, pp. 235-243.
[5] Psomas, S., Moustakis, V., Stavrakakis, G., Ellingsen, S., Jovanovic, A., Brear, J.: Application of machine learning methodologies for extraction of expert knowledge out of the structural failure database. In: Jovanovic, A. S., Lucia, A. C., Fukuda, S. (Eds.), Knowledge-Based (Expert) System Applications in Power Plant and Structural Engineering, EUR 15408 EN, JRC, October 1994.
[6] Aamodt, A., Plaza, E.: Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches. AICOM, Vol. 7, Nr. 1, March 1994.
[7] Wasserman, P.D. (1989) Neural Computing - Theory and Practice. Van Nostrand Reinhold, New York.

[8] Quinlan, J.R. (1993) C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, San Mateo, California.
[9] Psomas, S., Stavrakakis, G., Ellingsen, S., Jovanovic, A.: Prediction of Possible Failure Mechanism in Power Plant Components using Neural Networks and Structural Failure Database. In: Knowledge-based (Expert) System Applications in Power Plant and Structural Engineering (Proceedings of SMiRT Post Conference Seminar Nr. 13, Constance, Germany, Aug. 1993), EUR 15408 EN, JRC.
[10] Poloni, M., et al. (1994) Fuzzy analysis of material properties data: Application to high temperature components in power plants. Proceedings of the 1994 European Simulation Multiconference, Barcelona, Spain, June 1994, pp. 480-485.
[11] Takagi, T., Sugeno, M. (1985) Fuzzy Identification of Systems and Its Applications to Modeling and Control. IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-15, No. 1, pp. 116-132.
[12] Sugeno, M., Tanaka, K. (1991) Successive identification of a fuzzy model and its application to prediction of a complex system. Fuzzy Sets and Systems, Vol. 42, pp. 315-334.
[13] Holsheimer, M., Siebes, A. (1994) Data Mining: the search for knowledge in databases. Centrum voor Wiskunde en Informatica (CWI), Amsterdam, The Netherlands, Report CS-R9406.
[14] MIT GmbH (1995) DataEngine User Manual. Aachen, Germany.
[15] Pal, S.K., Mitra, S. (1992) Multilayer perceptron, fuzzy sets, and classification. IEEE Transactions on Neural Networks, Vol. 3, No. 5, pp. 683-697.
[16] Bezdek, J.C., Pal, S.K. (Eds.) (1992) Fuzzy Models for Pattern Recognition. IEEE Press.
[17] Yoshimura, S., Yagawa, G., Mochizuki, Y., Oishi, A.: Identification of crack shape hidden in solid by means of neural network and computational mechanics. Proceedings of the IUTAM Symposium on Inverse Problems in Engineering Mechanics, Springer-Verlag, Tokyo, Japan, May 1992, pp. 213-222.
[18] Jovanovic, A., Maile, K.: KBS-related research programs and software systems developed at MPA Stuttgart. 20th MPA-Seminar - SPRINT/KBS Dissemination Workshops, October 6 and 7, 1994, Stuttgart, Germany.
[19] Stavrakakis, G., et al.: An expert system for avoiding repeated structural failures in power plants. In: Knowledge-based (Expert) System Applications in Power Plant and Structural Engineering (Proceedings of SMiRT Post Conference Seminar Nr. 13, Constance, Germany, Aug. 1993), EUR 15408 EN, JRC.
[20] Vancoille, M., Smets, H., Bogaerts, W.: Intelligent corrosion management systems.
[21] Bezdek, J.C. (1981) Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York.
[22] Krishnapuram, R., Keller, J. (1994) Fuzzy and Possibilistic Clustering Methods for Computer Vision. In: Mitra, S., Gupta, M., Kraske, W. (Eds.), Neural and Fuzzy Systems, SPIE Institute Series, Vol. IS 12, pp. 133-159.
[23] Dubois, D., Prade, H. (1988) Possibility Theory: An Approach to Computerized Processing of Uncertainty. Plenum Press, New York.
[24] Carruthers, R.B., Day, R.V. (1968) The Spheroidisation of some Ferritic Superheater Steels. Central Electricity Generating Board, North Eastern Region, Scientific Services Department, Report SSD/NE/R138.
[25] Zimmermann, H.-J. (1991) Fuzzy Set Theory and Its Applications (2nd Edition). Kluwer Academic Publishers, Boston/Dordrecht.
[26] Wang, Li-Xin (1994) Adaptive Fuzzy Systems and Control: Design and Stability Analysis. Prentice-Hall, Englewood Cliffs, New Jersey.
[27] Shammas, M.S. (1987) Predicting the remanent life of 1Cr½Mo coarse-grained heat affected zone material by quantitative cavitation measurements. Central Electricity Generating Board, Report TPRD/L/3199/R87.
[28] Neubauer, B., Wedel, U. (1983) Restlife Estimation of Creeping Components by Means of Replicas. ASME International Conference on Advances in Life Prediction Methods, Albany, NY.
[29] EPRI (1990) Field Metallography Research Leads to Improved Re-Examination Interval For Creep Damaged Steampipes. EPRI First Use Report B197.




INTELLIGENT APPROACHES FOR AUTOMATED DESIGN OF PRACTICAL STRUCTURES

S. YOSHIMURA, J. LEE and G. YAGAWA
Department of Quantum Engineering and Systems Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113, Japan
e-mail: yoshi@nucl.u-tokyo.ac.jp

Abstract

This paper describes an automated computer-aided engineering (CAE) system for practical structures related to various coupled phenomena. An automatic finite element mesh generation technique, which is based on the fuzzy knowledge processing and computational geometry, is incorporated into the system, together with one of the commercial finite element (FE) analysis codes, MARC, and one of the commercial solid modelers, DESIGNBASE. The system allows a geometry model of concern to be automatically converted to different FE models, depending on the physical phenomena to be analyzed, i.e. electrostatic analysis, stress analysis, modal analysis and so on. The FE models are then automatically analyzed. The definition of a geometry model, the designation of local node patterns and the assignment of material properties and boundary conditions onto parts of the geometry model are the only interactive processes to be done by a user. The interactive operations take only a few minutes. The other processes, which are time consuming and labour-intensive in conventional CAE systems, are fully automatically performed in a popular engineering workstation environment. With an aid of multilayer neural networks, the present CAE system also allows us to effectively obtain a multi-dimensional design window (DW) in which a number of satisfactory design solutions exist. The developed system is successfully applied to evaluate performances of an electrostatic micro wobble actuator as an example. The quantitative conditions for operating the actuator are identified as one of typical CAE evaluations.

Keywords: Computer Aided Engineering, Finite Element Analysis, Fuzzy Knowledge Processing, Computational Geometry, Neural Networks, Design Window, Micromachine, Micro Wobble Actuator

1. Introduction

In accordance with the dramatic progress of computer technology, numerical simulation methods such as the finite element method (FEM) are recognized to be key tools in practical designs and analyses. Such simulations allow for the testing of new designs and for the iterative optimization of existing designs without time consuming and considerable efforts for experiments. Nuclear structural components such as pressure vessels and piping are typical examples of huge scale artifacts, while micromachines whose size ranges from 10⁻⁶ to 10⁻³ m are typical examples of tiny scale artifacts. They have their own missions, and are designed by different engineers in different engineering fields. However, there are some common features in their design processes. Practical structures are in general related to various coupled physical phenomena, and are designed considering the coupled phenomena; they are required to be evaluated accordingly. A lot of trial and error evaluations are indispensable, while conventional computational analyses of practical structures are still labour-intensive and are not easy for ordinary designers and engineers to perform. Although numerous optimization algorithms have been studied, these situations make it very difficult to find a satisfactory or optimized solution of practical structures utilizing such conventional computer simulation tools. The present authors have been interested in automating analysis and design processes of practical

structures, and have developed several techniques and systems for structural design automation [1-5]. The details of the techniques and systems applied to practical structures can be found in refs. [1-3]; the IF-THEN type empirical rules to find one of satisfactory solutions can also be found in refs. [1-3]. This paper describes a part of our latest research activities in this field. The developed system is applied to evaluate one of electrostatic micro wobble actuators [11], and fundamental performances of the system are discussed.

2. Outline of the System

The present CAE system consists of two main portions. The one is an automated FE analysis system, while the other is a design window (DW) search system using the multilayer neural network [6]. Here the DW means an area of satisfactory solutions in a permissible design parameter space. With an aid of the multilayer neural networks, the system allows us to automatically obtain a multi-dimensional DW in which a number of satisfactory design solutions exist [4]. In practical situations, a DW seems more useful than one optimized solution obtained under some restricted conditions. The details of these systems will be described in this chapter.

2.1 Automated FE Analyses

This integrated system includes the following functions: (a) definition of a geometry model, (b) attachment of boundary conditions and material properties directly to parts of the geometry model, (c) fully automated mesh generation, (d) various FE analyses such as electrostatic, stress and eigen value analyses, depending on the physical behaviors to be analyzed, and (e) visualization of analysis models and results. The present authors have proposed a novel automatic FE mesh generation method for three-dimensional complex geometry [7, 8], and this mesh generator is integrated with one of commercial FE analysis codes, MARC [9]. The object-oriented technique is adopted to consistently manage the number of elemental processes appearing in a CAE evaluation of practical structures. Among the whole process of each analysis, the definition of a geometry model, the designation of local node patterns and the assignment of material properties and boundary conditions onto parts of the geometry model are the only interactive processes to be done by a user; it should be noted that designers do not have to deal with mesh data when they operate the system. A flow of the analyses using the system is shown in Fig. 1. Each subprocess will be described below.

Fig. 1 Flow of automated FE analyses. Interactive processes: definition of geometry model (using DESIGNBASE); attachment of material properties and boundary conditions to the geometry model; designation of local node patterns and node density distributions. Fully automatic processes: calculation of global node density distribution; node generation based on bucketing method; element generation based on Delaunay triangulation; attachment of material properties and boundary conditions to mesh; finite element analysis (deformation, electrostatic, eigen value analyses) using MARC.

2.1.1 Definition of geometry model

A whole analysis domain is defined using one of commercial solid modelers, DESIGNBASE [10], which has abundant libraries enabling us to easily operate, modify and refer to a geometry model, i.e. solid modeling including boolean operations such as union and intersection. Any information related to a geometry model can be easily retrieved using those libraries. It should be noted here that different geometry models are constructed, depending on the physical phenomena to be analyzed.

2.1.2 Attachment of material properties and boundary conditions

Material properties and boundary conditions are directly attached onto the geometry model by clicking, with a mouse, the loops or edges that are parts of the geometry model. The system accepts both Dirichlet's and Neumann's type boundary conditions.
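The benefit of attaching conditions to geometry parts rather than to the mesh is that each mesh entity can record the geometry part it was generated from, so the conditions propagate automatically after remeshing. A minimal sketch of that propagation follows; all names are illustrative, not the DESIGNBASE or MARC data structures:

```python
def attach_and_propagate(geometry_bcs, node_parent):
    """Copy boundary conditions attached to geometry parts onto the mesh
    nodes generated from those parts. `node_parent` maps each mesh node to
    the geometry part (edge/loop/face) it was generated from."""
    mesh_bcs = {}
    for node, parent in node_parent.items():
        if parent in geometry_bcs:
            mesh_bcs[node] = geometry_bcs[parent]
    return mesh_bcs

# the user clicks geometry parts and assigns conditions interactively:
geometry_bcs = {"edge:left": ("dirichlet", 0.0), "edge:right": ("neumann", 5.0)}
# the mesher records, for each generated node, its parent geometry part:
node_parent = {1: "edge:left", 2: "edge:left", 3: "interior", 4: "edge:right"}
print(attach_and_propagate(geometry_bcs, node_parent))
# -> {1: ('dirichlet', 0.0), 2: ('dirichlet', 0.0), 4: ('neumann', 5.0)}
```

Because only the geometry-to-condition mapping is user-supplied, a new mesh over the same geometry needs no re-entry of conditions.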

2.1.3 Designation of node density

In general, it is difficult to well control element size for a complex geometry. In the present system, a distribution of node density over the whole analysis domain is constructed as follows. The system stores several local nodal patterns, such as the pattern suitable to well capture stress concentration around a crack tip or a hole, the pattern to subdivide a whole domain uniformly, and the pattern to subdivide a finite domain uniformly. The local nodal patterns may be regarded as locally optimum, for example, when either the crack tip or the hole exists solely in an infinite domain. A user selects some of those local nodal patterns, and specifies their relative importance and where to locate them. When designers do not want any special meshing, they can adopt a uniformly subdivided mesh. A global distribution of node density over the whole analysis domain is then automatically calculated through the superposition of the selected patterns using the fuzzy knowledge processing [12, 13]. When stress concentration sources exist closely to each other in the analysis domain, extra nodes have to be removed from the superposed region; the process is illustrated in Fig. 2. It is also possible to combine the present techniques with an adaptive meshing technique [14].

Fig. 2 Superposition of nodal patterns based on fuzzy theory (membership functions of nodal patterns I and II, e.g. around a crack tip and a hole, and their dominant areas)

2.1.4 Node and element generation

Node generation is one of the time consuming processes in automatic mesh generation. In the present method, nodes are first generated following the node density distribution, and then a FE mesh is built.

Fig. 3 Node generation based on bucketing method: (a) example of nodal density distribution, (b) example of bucket decomposition, (c) node generation in one of buckets (groups of candidate and employed nodes)
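The fuzzy superposition of local nodal patterns can be sketched as follows: each stress concentrator demands a membership-like node density that decays with distance, and the global density at a point is a max-type aggregation of all local demands. The exponential shape and all numbers are assumptions for illustration; the paper's actual membership functions are those of refs. [12, 13]:

```python
import math

def pattern_density(distance, peak=10.0, decay=2.0):
    """Membership-like node density of one local pattern: high near the
    stress concentrator and decaying with distance (shape is illustrative)."""
    return peak * math.exp(-distance / decay)

def global_density(point, concentrators, background=0.5):
    """Superpose all local patterns: take the largest density demanded at
    the point (a simple max-type fuzzy aggregation)."""
    dens = [pattern_density(math.dist(point, c)) for c in concentrators]
    return max(dens + [background])

crack_tip, hole = (0.0, 0.0), (4.0, 0.0)
print(global_density((0.0, 0.0), [crack_tip, hole]))   # dominated by the crack tip
print(global_density((20.0, 0.0), [crack_tip, hole]))  # far away: background only
```

The max-aggregation automatically resolves overlapping patterns: where two concentrators are close, the denser demand wins, which is the effect the removal of extra nodes in the superposed region achieves.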

The bucketing method [15] is adopted to generate nodes which satisfy the distribution of node density over a whole analysis domain. Fig. 3 shows its fundamental principle, taking the previous two-dimensional mesh generation as an example without any loss of generality. Let us assume that the distribution of node density over a whole analysis domain is already given, as shown in Fig. 3(a). At first, a super-rectangle enveloping the analysis domain is defined, and is divided into a number of small sub-rectangles, each of which is named "Bucket", as shown in Fig. 3(b). In the three-dimensional solid case, a super hexahedron is utilized to envelop an analysis domain. Next, a number of candidate nodes with uniform spacing are prepared in one of the buckets, as shown in Fig. 3(c). The distance of two neighboring candidate nodes is set to be smaller than the minimum distance of nodes to be generated in the relevant bucket. Nodes are generated bucket by bucket, starting from the left-bottom corner. The candidate nodes are picked up one by one, and a candidate node is adopted as one of the final nodes when it satisfies the following two criteria:
(a) The candidate node is inside the analysis domain (IN/OUT check).
(b) The distance between the candidate node and the nearest node already generated in the bucket satisfies the node density at the point to some extent.
Practically, the criterion (a) is first examined bucket by bucket; as for buckets lying across the domain boundary, the criterion (a) is examined node by node. It should be noted here that the nodes already generated in the neighboring buckets have to be examined for the criterion (b) as well, when a candidate node is possibly generated near the border of the relevant bucket. Thanks to the bucketing method, the number of examinations of the criterion (b) can be reduced significantly, and the node generation speed remains proportional to the total number of nodes. Among several algorithms, the Delaunay triangulation method [16, 17] is used to generate tetrahedral elements from the numerous nodes produced within a geometry.

2.1.5 Attachment of material properties and boundary conditions to FE mesh

Through the interactive operations mentioned in section 2.1.2, a user designates material properties and boundary conditions onto parts of the geometry model. These are then automatically attached onto the appropriate nodes, edges, faces and volumes of elements. Such automatic conversion can be performed owing to the special data structure of finite elements, such that each part of an element knows which geometry part it belongs to. Finally, a complete finite element model consisting of a mesh, material properties and boundary conditions is created. The present system automatically converts geometry models of concern to various FE models, which are compatible to one of commercial FE codes, MARC [9], depending on the physical phenomena to be analyzed. The current version of the system produces FE models of quadratic tetrahedral elements.

2.1.6 FE analyses

FE analyses, i.e. stress analysis, electrostatic analysis, thermal conduction analysis, eigen value analysis and so on, are automatically performed. FE models and analysis results are visualized using a pre/post processor of MARC, MENTAT [9].

2.2 Design Window Analysis Using Multilayer Neural Network

The design window (DW) is a schematic drawing of an area of satisfactory solutions in a permissible multi-dimensional design parameter space. The DW seems more useful in practical situations than one optimum solution determined under limited consideration. Among several search methods, the Whole-area Search Method (WSM) is employed here.

Fig. 4 Illustration of whole area search method (2D design window; filled circles: satisfactory solutions, open circles: unsatisfactory solutions)
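Criteria (a) and (b) and the neighbouring-bucket check can be sketched as below. Uniform spacing stands in for the node-density field of section 2.1.3, and the candidate count per bucket is an invented parameter:

```python
import math, random

def generate_nodes(width, height, bucket, spacing, inside, seed=0):
    """Bucket-by-bucket node generation sketch: a candidate is kept only if
    (a) it lies inside the domain and (b) it is not closer than `spacing`
    to an already accepted node in this or a neighbouring bucket."""
    rng = random.Random(seed)
    grid = {}                                    # bucket index -> accepted nodes
    nx, ny = math.ceil(width / bucket), math.ceil(height / bucket)
    for bx in range(nx):
        for by in range(ny):
            for _ in range(20):                  # candidate nodes per bucket
                p = ((bx + rng.random()) * bucket, (by + rng.random()) * bucket)
                if not inside(p):                # criterion (a): IN/OUT check
                    continue
                ok = True                        # criterion (b): 3x3 buckets only
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for q in grid.get((bx + dx, by + dy), []):
                            if math.dist(p, q) < spacing:
                                ok = False
                if ok:
                    grid.setdefault((bx, by), []).append(p)
    return [p for pts in grid.values() for p in pts]

# unit-square domain; bucket size equals spacing, so 3x3 neighbours suffice
nodes = generate_nodes(1.0, 1.0, bucket=0.1, spacing=0.1,
                       inside=lambda p: 0.0 <= p[0] <= 1.0 and 0.0 <= p[1] <= 1.0)
print(len(nodes))
```

Because each distance test only inspects the 3x3 surrounding buckets, the work per candidate is bounded, which is exactly why the text can claim generation time proportional to the total number of nodes.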

6. However. while the Materials employed here are silicon and silicon physical values calculated are shown to the output units compounds. and its fabrication process is almost the same as those of assumed design parameters vs. Therefore. Despite a large generated in the design parameter space that is empirically number of macro­electrostatic motor designs [18]. 6(a) is its schematic plane view.e. i. Fig. The micro actuator network provides some appropriate physical values even comprises a movable platform. However. As shown in Fig. rotor. 35 employed here. a lattice is first electrostatic micro wobble actuators [11]. This performed to prepare training data sets and test data sets actuator uses an electrostatic force as other micro­motors for the neural network. high reliability and high productivity. some of the present authors have been made so far to build electrostatic micro actuators proposed a novel method to efficiently search the DW [18­23]. All the lattice points are then large­scale electrostatic actuators are in use because of examined one by one whether they satisfy design criteria their insufficient electrostatic power. The neural network is then trained using the wobble actuator has several advantages such as high training data sets.5 Schematic view of procedure of design window search using neural network Fig. numerous FE analyses are part of a highly accurate positioning device [11]. dimensional DW is immediately searched using the well Dimensions of its reference design are as follows. 6 Basic structure of micro wobble actuator . in a or not. A new This method consists of three subprocesses as concept of micro actuator is now demanded. The WSM is the most flexible and robust. each of which is a coupled data set do. A number of efforts extremely huge. 4. an electrostatic mechanism appears number of lattice points to be examined tends to be to be more advantageous to use [19]. The micro shown in Fig. 
3. Electrostatic Micro Wobble Actuator

The present CAE system is applied to one of the electrostatic micro wobble actuators [11]. A number of efforts have been made so far to build electrostatic micro actuators [18-23]. Despite a large number of macro-electrostatic motor designs [18], few large-scale electrostatic actuators are in use because of their insufficient electrostatic power; in the microscopic domain, however, an electrostatic mechanism appears to be more advantageous to use [19]. A new concept of micro actuator, able to generate enough force for practical applications, is therefore demanded. Compared with similar devices, the micro wobble actuator has several advantages such as high reliability and high productivity [24]. The micro actuator considered in the present study is designed as a part of a highly accurate positioning device [11].

The basic structure of the present actuator is illustrated in Fig. 6: Fig. 6(a) is its schematic plane view and Fig. 6(b) its cross-sectional view. The actuator comprises a movable platform, i.e. the rotor, three spiral beams, and a plurality of electrodes, i.e. the stators. The platform is a ring-like plate of approximately 200 μm in outer diameter and 150 μm in inner diameter. The materials employed here are silicon and silicon compounds, which are well known as materials for semiconductor devices, and the fabrication process is almost the same as that of semiconductor devices. The dimensions of the reference design are listed in Table 1.

Fig. 6 Basic structure of micro wobble actuator: (a) top view (movable ring (rotor), spiral beams, electrodes, anchor to substrate); (b) cross section view (insulation, electrode, substrate)

The three spiral beams are disposed at the inner space of the ring and connect the ring with the substrate. The electrodes are placed in the circumferential direction around the platform. The inner diameter of the set of the electrodes is larger than the outer diameter of the ring by 3 μm. When one of the electrodes is excited, an electrostatic attraction force is generated between the ring and the electrode, and the ring is attracted into contact with it. As each electrode is excited sequentially, the ring rolls along the inside surface of the electrodes. When the ring rolls one cycle without slipping, it has traversed a distance equal to the circumference of the inner surface of the electrodes subtracted by its own circumference. Although the rotation of the ring is limited due to the spiral beams, the present actuator has several advantages including high torque and low friction. This electrostatic micro wobble actuator is to be used as a micro-positioner. Reference dimensions of the actuator are listed in Table 1, and the material properties are summarized in Table 2.

Table 1 Reference dimensions
Diameter of plane ring: 200 μm
Thickness of plane ring: 2.5 μm
Width of spiral beams: 5.0 μm
Thickness of spiral beams: 2.5 μm
Angle of spiral beams: 360 deg.
Inner diameter of electrodes: 206 μm
Thickness of insulator: 1.0 μm
Permittivity of insulator: 4.0

Table 2 Material properties
Material: Si
Young's modulus: 190 GPa
Poisson's ratio: 0.3
Yield stress: 7 GPa
Mass density: 2300 kg/m^3

4. Results and Discussions

4.1 Automated FE Analyses

To examine fundamental performances of the present micro wobble actuator, the following behaviors have to be analyzed:
(1) in-plane deformation of the ring with the three spiral beams caused by an electrostatic force;
(2) out-of-plane deformation of the ring with the three spiral beams caused by its weight;
(3) modal analysis of the ring with the three spiral beams;
(4) electrostatic analysis of the air gap between the ring and one of the electrodes; and
(5) recovery process of the deformed ring with the three spiral beams.
The results are described below.

4.1.1 In-plane deformation of ring with beams

Assuming the reference dimensions of the rotor listed in Table 1, its in-plane deformation is analyzed to evaluate the quantitative relationship between a rotation angle and the torque necessary to rotate the rotor within the elastic limit of the beams. In this analysis, the rotor is attracted into contact with one of the electrodes through an electrostatic force, considering its rotation along the inner surface of the electrodes; a displacement-controlled force between the rotor and one of the electrodes is applied to the rotor. Fig. 7 illustrates the boundary conditions of the present analysis (fixed displacement and forced displacement, with AB = CD). Fig. 8 shows a geometry model of the rotor, while Fig. 9 does a typical FE mesh, which consists of 24,583 tetrahedral quadratic elements and 50,583 nodes.

Fig. 10 shows a calculated distribution of equivalent stress at the rotation angle φ of 62 degrees, at which the deformation reaches the elastic limit of silicon, σ = 7 GPa. The maximum stress occurs at the middle of some spiral beams, and the stress at one junction of a beam and the ring also reaches the same maximum value; the rotor contacts the inner surface of the electrodes with a little distortion of the three spiral beams. Fig. 11 shows the relationship between the calculated torque τs and the rotation angle. It can be seen from the figure that the rotation angle of the rotor is limited at about 62 degrees because of the elastic limit.

Fig. 7 Boundary conditions for in-plane deformation analysis of rotor

Fig. 8 Geometry model of rotor. Fig. 9 FE mesh of rotor. Fig. 10 Calculated distribution of equivalent stress. Fig. 11 Calculated torque vs. rotation angle.

Fig. 11 also tells us that the starting torque required is 0.42 x 10^-9 Nm. This value will be referred to in the section on the electrostatic analyses.

4.1.2 Out-of-plane deformation of rotor due to its weight

Since the rotor is very thin compared with its diameter (2.5 μm vs. 200 μm), the out-of-plane deformation of the rotor caused by its weight is analyzed using the same FE mesh shown in Fig. 9. The calculated maximum deflection is 2.049 x 10^-5 μm. This value is apparently negligible compared with the in-plane deformation of the rotor; this is one of the typical scaling effects of micro structures.

4.1.3 Modal analysis of rotor

Using the same mesh of Fig. 9, modal analyses of the rotor are performed. Fig. 12 shows the calculated first and second eigen modes and eigen frequencies. It is found that the first eigen frequency of 46 kHz is far beyond the minimum requirement of 10 kHz.

4.1.4 Electrostatic analysis of air gap between rotor and stators

To estimate electrostatic performances of micro actuators, in-plane two-dimensional FE analyses are often performed because of the complexity of actual micro actuator geometries. Here, in contrast, we consider the actual three-dimensional geometry of the micro wobble actuator. Fig. 13 shows a geometry model and boundary conditions of part of the air gap between the rotor and the stators. A sufficiently large volume of the air is modeled in order to approximately take infinite boundary conditions into account.

1.2 Complementarity

The distinction between probability and fuzziness has been presented and analyzed in many different publications, such as [4, 9, 20]. Most researches in probabilistic reasoning and fuzzy logic have reached the same conclusion about the complementarity of the two theories [18], i.e. that these technologies ought to be regarded as being complementary rather than competitive. This complementarity was first noted by Zadeh [55], who in 1968 introduced the concept of the probability measure of a fuzzy event. Given the duality of purpose and characteristics between probabilistic and possibilistic methods, and because of time and space limitations, we will limit the scope of our discussion to cover the most notable trends and efforts in fuzzy logic based reasoning systems.

1.3 Probabilistic Reasoning Systems

Some of the earliest techniques found among the approaches derived from probability are based on single-valued representations, such as the modified Bayesian rule [26] and confirmation theory [47], to mention a few. These techniques started from approximate methods and evolved into formal methods for propagating probability values over Bayesian Belief Networks [38, 39]. The basic inferential mechanism is the conditioning operation. Another trend among the probabilistic approaches is represented by interval-valued representations, such as the Dempster-Shafer theory [24, 41], extended by Smets to belief functions on fuzzy sets [48].

1.4 Fuzzy Logic Based Reasoning Systems

Among the fuzzy logic based approaches, the most notable ones are based on a fuzzy-valued representation of uncertainty. These include the Possibility Theory and Linguistic Variable Approach [57, 54], in which a possibility distribution acts as an elastic constraint on the value that may be assigned to a variable [57]; similarity-based measures (the partial order induced by the membership function on the universe of discourse, or the complement of the distance among possible worlds) [40, 49]; and the Triangular-norm based approach [10, 45]. The basic inferential mechanism used in possibilistic reasoning is the generalized modus-ponens [54], which makes use of inferential chains (syllogisms), detachment, and pooling of the information.

1.5 Structure of the Discussion

In the next section we will briefly describe the development process common to both fuzzy-logic rule based systems and fuzzy controllers. In sections three and four we will describe the technology development for a fuzzy expert system and a fuzzy controller, respectively. In section five we will discuss a few applications of fuzzy controllers, and in section six we will discuss some future trends of this promising technology. Finally, we conclude.

2 KNOWLEDGE BASED SYSTEMS DEVELOPMENT

Fuzzy Expert Systems (FES) and Fuzzy Logic Controllers (FLC) are examples of Knowledge Based Systems (KBS). As such they share many common tasks at the reasoning and at the application levels. We will briefly describe these tasks, and then we will proceed to illustrate the development of KBS and FLC.

2.1 The Reasoning Tasks

Three main reasoning tasks are common to KBS and FLC: the knowledge representation, to determine the appropriate data structure for the uncertainty information and meta-information; the inference mechanism, to determine the uncertainty calculi, applicable to the chosen representation, that perform the intersection, union, detachment, and pooling of the information; and the control of the inference, to determine issues such as conflict measurement and resolution, the ignorance measurement, and the resource allocation. For a more detailed description of the KBS reasoning tasks the reader is referred to reference [14], while the FLC reasoning tasks are extensively covered in reference [19].

2.2 The Application Tasks

Following the rapid prototyping paradigm, the author has identified five application tasks for a KBS (see reference [11]): (1) requirements and specifications: the knowledge acquisition stage; (2) design choices: the KB development stage; (3) testing and modification: the KB functional validation stage; (4) optimizing storage and response time requirements: the KB compilation; and (5) running the application: the deployment stage.

The same task decomposition applies to the development of a FLC application. The first three stages correspond to the development of the FLC application (performance characteristics, input and termset granularity, rule base generation, state variable identification, order estimation, validation and robustness analysis); the fourth stage corresponds to the transition from development to deployment (fuzzy rule set compilation); and the fifth stage corresponds to the deployment (porting and embedding the FLC on the host computer).

3 FUZZY EXPERT SYSTEM (FES)

For clarity's purpose, our discussion on FES will be anchored on RUM/PRIMO, a Fuzzy Expert System which was developed by the author in 1987 [20] and further refined in the early nineties [6].

3.1 FES Reasoning Tasks

As mentioned in the previous section, the reasoning tasks required by FES can be divided into three layers.
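A minimal illustration of the generalized modus-ponens on discrete universes, using the common max-min (Mamdani-style) composition; the membership values below are invented for the example, and this relation is only one of the several choices used in practice.

```python
def gmp(fact, rule):
    """Generalized modus-ponens by max-min composition:
    B'(y) = max_x min(A'(x), R(x, y)), with R(x, y) = min(A(x), B(y))
    for the rule 'if X is A then Y is B' (Mamdani relation)."""
    A, B = rule
    R = [[min(a, b) for b in B] for a in A]          # fuzzy relation R(x, y)
    return [max(min(fact[i], R[i][j]) for i in range(len(A)))
            for j in range(len(B))]

# Rule: 'temperature is high' -> 'valve is open';
# observed fact: 'temperature is fairly high'.
A = [0.0, 0.5, 1.0]            # membership of 'high' over 3 temperature levels
B = [0.2, 1.0]                 # membership of 'open' over 2 valve positions
fairly_high = [0.0, 0.7, 0.8]
result = gmp(fairly_high, (A, B))
```

The conclusion is again a fuzzy set, so a chain of such rules can be evaluated by repeating the composition, which is the "inferential chains" mechanism mentioned above.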

The permittivities of the air and the insulator are assumed to be 8.854 x 10^-12 C/Vm and 3.542 x 10^-11 C/Vm, respectively. Fig. 14 shows the FE mesh used. To build this mesh, the following three nodal patterns are utilized: (a) the base nodal pattern, in which nodes are generated with uniform spacing over the whole analysis domain; (b) a local-optimum nodal pattern for the insulator; and (c) a special nodal pattern in which the density of nodes becomes coarser departing from the bottom face. The mesh consists of 8,301 tetrahedral quadratic elements and 14,964 nodes.

Fig. 15 shows a calculated distribution of electric potential. When one of the electrodes is excited, the rotor is electrostatically attracted and comes into contact with the insulator on the inner surface of the electrode. When the next electrode is excited, the rotor revolves without slipping. Fig. 16 shows the calculated starting torque τ vs. driving voltage curves. The solid curve denotes the three-dimensional FE solution, while the broken curve denotes the two-dimensional analytical one.

Fig. 12 Calculated first and second eigen modes of rotor (Mode I: 46.2 kHz; Mode II: 101 kHz). Fig. 13 Geometry model and boundary conditions for air gap between rotor and one of stators (insulator, symmetric plane, prescribed potentials). Fig. 14 FE mesh of air gap. Fig. 15 Calculated electric potential distribution. Fig. 16 Calculated starting torque vs. driving voltage (3D FEM and 2D analytical curves).

It can be seen from this figure that the starting torque is proportional to the square of the driving voltage, and that the 2D analytical solution is four to five times larger than the 3D FE one. Such a significant difference may be caused by the omission of electrical leakage in the 2D analytical solution. In addition, the starting torque calculated from the electrostatic analysis, τe, is larger than that calculated from the in-plane deformation analysis of the rotor, τs. Considering that the torque of 0.42 x 10^-9 Nm is necessary to start rotating the rotor, as given in section 4.1.1, it is obvious from Fig. 16 that a driving voltage exceeding 170 V is indispensable.

4.1.5 Recovery process of rotated rotor

When the rotor is rotated to a certain extent and the voltage is disconnected, the deformed rotor recovers dynamically. During this recovery process, the rotor should not touch any electrodes around itself. To ensure this, the dynamic response of the deformed rotor is analyzed. It is found from the analysis that the rotor does not touch any of the electrodes.

4.1.6 Processing speed

Fig. 17 shows the measured processing times of the FE analyses described above: (a) electrostatic analysis between the rotor and stator (14,964 nodes), (b) in-plane deformation of the rotor, (c) out-of-plane deformation of the rotor, (d) modal analysis of the rotor, and (e) recovery process of the deformed rotor (50,583 nodes each). The timed steps are the definition of the geometry model, the attachment of material properties and boundary conditions, the designation of node density distributions to the geometry model, the generation of nodes and elements, the attachment of material properties and boundary conditions to the mesh, and the finite element analysis itself. These are measured on one of the popular engineering workstations, a SUN SPARCstation 10 (1 CPU, 50 MHz, 128 MB memory). In the analysis of the in-plane deformation of the rotor, it takes about 15 minutes to perform all the interactive operations, i.e. solid modeling, the designation of local nodal patterns, and the assignment of material properties and boundary conditions; the assignment itself takes only about 20 seconds. Node and element generation and a FE analysis are then fully automatically performed in about 35 minutes. In the electrostatic analysis, it takes about 35 minutes to perform the interactive operations, while about 70 minutes is required to fully automatically perform node and element generation and a FE analysis. That is, the developed CAE system allows designers to evaluate detailed physical behaviors of practical structures, such as micromachines, through some simple interactive operations on geometry models.

4.2 Design Window Evaluation

Here we demonstrate how the DW search method is utilized.

4.2.1 Design parameters and geometrical constraints

Design parameters and geometrical constraints of the electrostatic micro wobble actuator considered here are as follows:
Width of the ring (Wr): 20 - 30 μm
Thickness of the rotor (Tr): 2.0 - 2.5 μm
Gap width between the rotor and stators (G): 2.0 - 2.5 μm
Thickness of the insulator (Ti): 1.2 - 1.8 μm

Design criteria employed are as follows:
(1) The wobble actuator can rotate within the limit of elasticity, i.e. the maximum equivalent stress σmax is less than the yield stress σy.
(2) In order to rotate the rotor, the starting torque from the electrostatic analysis τe has to be larger than that from the stress analysis τs.

4.2.2 Network topology and training conditions

A multilayer neural network employed is of the three-layered type, as shown in Fig. 18. The network has four units in the input layer, ten units in the hidden layer, and two units in the output layer.
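The quadratic torque-voltage scaling noted above lets one back out the driving voltage for any target torque from a single reference point. The sketch below assumes an ideal τ = kV² fit through the quoted 170 V / 0.42 x 10^-9 Nm point, which is only an approximation of the FE curve in Fig. 16.

```python
def required_voltage(tau_target, tau_ref, v_ref):
    """If the starting torque scales as tau = k * V**2, the voltage for a
    target torque follows from one (tau_ref, v_ref) point on the curve."""
    k = tau_ref / v_ref**2
    return (tau_target / k) ** 0.5

# Illustrative check: voltage needed for the 0.32e-9 Nm design criterion,
# assuming the 3D FE curve passes through 0.42e-9 Nm at 170 V.
v_needed = required_voltage(0.32e-9, 0.42e-9, 170.0)
```

Under this assumption, roughly 148 V would already reach the weaker 0.32 x 10^-9 Nm criterion used later in the DW search, consistent with 170 V being required for the full 0.42 x 10^-9 Nm starting torque.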

The four design parameters are the input data for the network, and the two units in the output layer output the two kinds of starting torques, τs and τe. All the input data and output data are normalized to a unit range from 0.05 to 0.95.

Fig. 18 Network topology and its input/output data (input layer: 4 units; hidden layer: 10 units; output layer: 2 units)

In the present example, 81 training patterns are prepared; they are randomly selected within the possible range of each design parameter. On the other hand, 10 test patterns, which are similar to the teaching ones, are prepared to check the generalization capability. The network is trained with the backpropagation learning algorithm [6], and the following mean error of estimation is employed for both training and test patterns:

Mean_Error = (1/(n t)) Σ(p=1..t) Σ(k=1..n) |Tpk - Opk|      (1)

where Opk is the output signal from the k-th unit in the output layer for the p-th training or test pattern, Tpk is the teacher signal to the k-th unit in the output layer for the p-th training or test pattern, n is the number of output units, and t is the number of training or test data sets.

Through the iterative training, the network gradually tends to produce the appropriate output data. Fig. 19 shows the history of the learning process in the case that the constant of the sigmoid function U0 is taken to be 0.8. The well-trained network is obtained at 200,000 learning iterations, when the mean error of estimation for the test patterns reaches the minimum value of 0.005. With this criterion, the estimation accuracy of the starting torque is confirmed to be within 0.5 %.

Fig. 19 Convergence of training process (mean error vs. iteration number of learning, for training data and test data)

4.2.3 DW search

DWs are searched using the trained neural network, i.e. both torques for different design parameters are promptly evaluated by the network. Fig. 20 shows the DW in the Tr, G and Wr space, i.e. the solutions which satisfy that the starting torque τ is larger than 0.32 x 10^-9 Nm. The number of searching points is 65,536, while the number of searched points lying in the design window is 376. It can be seen in the figure that satisfactory solutions can be found when Tr is small, G is small, and Wr is larger.

Fig. 20 Design window when τ > 0.32 x 10^-9 Nm
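Equation (1) can be written out directly; the two test patterns below are invented numbers, used only to exercise the formula on normalized torque values.

```python
def mean_error(outputs, teachers):
    """Mean estimation error of eq. (1): the absolute output-teacher
    difference averaged over all t patterns and n output units."""
    t, n = len(outputs), len(outputs[0])
    return sum(abs(outputs[p][k] - teachers[p][k])
               for p in range(t) for k in range(n)) / (n * t)

# Two illustrative patterns, two output units (tau_s, tau_e), normalized.
O = [[0.40, 0.60], [0.10, 0.90]]   # network outputs
T = [[0.42, 0.55], [0.12, 0.88]]   # teacher signals from the FE analyses
err = mean_error(O, T)             # (0.02 + 0.05 + 0.02 + 0.02) / 4
```

Tracking this scalar on the held-out test patterns, rather than on the training patterns, is what lets the stopping point (200,000 iterations here) be chosen where generalization, not memorization, is best.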

In this way, the sizes of the micro wobble actuator to be operatable are searched. To rotate the rotor, τe has to be larger than τs. Fig. 21 shows the DWs in the Wr, G and Ti space when driving voltages of 100 V and 120 V are applied and Tr ranges from 2 to 2.5 μm. The number of searched points in the DW is 85 for 120 V, while no satisfactory solutions are found for 100 V, i.e. the DW is null. Fig. 22 shows the DW in the G, Ti and Tr space when the driving voltage is 150 V and Wr ranges from 20 to 30 μm. The number of searched points in this DW is 18,420; this DW is much larger than that for 120 V.

Fig. 21 Design windows for 100 and 120 V (searched points in design window: 0 and 85; the number of searching points is 65,536). Fig. 22 Design windows for 150 V (searched points in design window: 18,420).

Both torques for different design parameters can be promptly evaluated using the trained neural network. Although the training process of the neural network takes some amount of time, the present DW search can therefore be performed in an extremely short processing time. When obtaining the above DWs, we searched 65,536 points, but we employed only 182 FE solutions (= (81+10) x 2) for the training and test patterns; 131,072 FE analyses would be needed if we did not use the neural network approach.

5. Conclusions

A novel CAE system for practical structures is described in the present paper. Interactive operations to be done by a user are performed in a reasonably short time, even when solving complicated problems such as micro actuators. The other processes, which are time-consuming and labour-intensive in conventional systems, are fully automatically performed in a popular engineering workstation environment. This CAE system is successfully applied to the evaluation of performances of an electrostatic micro wobble actuator, considering both the in-plane deformation of the rotor and the electrostatic phenomena. A DW search approach supported by the multilayer neural network is also described.

Acknowledgements

A part of this work was financially supported by the Grant-in-Aid of the Ministry of Education, Japan. The authors wish to thank Prof. Kiriyama at RACE, University of Tokyo, and Mr. Shibaike at Matsushita Electric Industrial Co., Ltd., for their valuable discussions. They also thank Nippon MARC Corp. and RICOH Company Ltd. for their help.

References

[1] Yoshimura, S., Yagawa, G., and Mochizuki, Y., "Automation of Thermal and Structural Design Using AI Techniques", Journal of the Atomic Energy Society of Japan, 37 (1995), in Print (in Japanese).
[2] Yoshimura, S., Yagawa, G., et al., "Automatic Mesh Generation of Complex Geometries Based on Fuzzy Knowledge Processing and Computational Geometry", Integrated Computer-Aided Engineering, in Print.
[3] Yoshimura, S., and Yagawa, G., "An Artificial Intelligence Approach to Efficient Fusion First Wall Design", Integrated Computer-Aided Engineering, 1 (1992).
[4] Mochizuki, Y., Yoshimura, S., and Yagawa, G., "Automated System for Structural Design Using Design Window Search Approach: Its Application to Fusion First Wall Design", Transactions of the Japan Society of Mechanical Engineers, Ser. A, 61 (1995) 652-659 (in Japanese).
[5] Ueda, et al., "Development of Expert System for Structural Design of FBR Components", 7 (1990) 73-77 (in Japanese).
[6] Rumelhart, D. E., Hinton, G. E., and Williams, R. J., "Learning Representations by Back-propagating Errors", Nature, 323 (1986) 533-536.
[7] Yagawa, G., Yoshimura, S., Soneda, N., and Nakao, N., "Automatic Two- and Three-Dimensional Mesh Generation Based on Fuzzy Knowledge Processing", Computational Mechanics, 9 (1992) 333-346.
[8] Yagawa, G., and Yoshimura, S., "Automatic Large-Scale Mesh Generation Based on Computational Geometry, with a New Function for Three-Dimensional Adaptive Remeshing", Engineering Analysis with Boundary Elements, in Print.
[9] MARC Analysis Research Corporation, MARC Manual K5.
[10] Chiyokura, H., "Solid Modeling with DESIGNBASE: Theory and Implementation", Addison-Wesley, (1988).
[11] Shibaike, N., "Design of Micro-mechanisms Focusing on Configuration, Materials and Processes", International Journal of Materials & Design, in Print.
[12] Zadeh, L. A., "Outline of a New Approach to the Analysis of Complex Systems and Decision Processes", IEEE Transactions on Systems, Man and Cybernetics, SMC-3 (1973) 28-44.
[13] Zadeh, L. A., "Fuzzy Algorithms", Information and Control, 12 (1968) 94-102.
[14] Yagawa, G., and Yoshimura, S., "Design Automation Based on Knowledge Engineering", Lecture Notes in Computer Science (Computer-Aided Cooperative Product Development), Springer-Verlag.
[15] Asano, T., "Practical Use of Bucketing Techniques in Computational Geometry", Computational Geometry, North-Holland, pp. 153-195 (1985).
[16] Watson, D. F., "Computing the n-Dimensional Delaunay Tessellation with Application to Voronoi Polytopes", The Computer Journal, 24 (1981) 162-172.
[17] Sloan, S. W., "A Fast Algorithm for Constructing Delaunay Triangulations in the Plane", Advances in Engineering Software, 9 (1987) 34-55.
[18] Omar, M. P., Mehregany, M., and Mullen, R. L., "Electric and Fluid Analysis of Side-Drive Micromotors", Journal of Microelectromechanical Systems, 1 (1992) 130-140.
[19] Tai, Y.-C., Fan, L.-S., and Muller, R. S., "IC-Processed Micro-Motors: Design, Technology, and Testing", Proceedings of the 2nd IEEE Workshop on Micro Electro Mechanical Systems (MEMS), Salt Lake City, UT, (1989).
[20] Fujita, H., and Omodaka, A., "The Fabrication of an Electrostatic Linear Actuator by Silicon Micromachining", IEEE Transactions on Electron Devices, 35 (1988) 731-734.
[21] Tai, Y.-C., and Muller, R. S., "IC-processed Electrostatic Synchronous Micromotors", Sensors and Actuators, 20 (1989) 49-55.
[22] Furuhata, T., Gabriel, K. J., and Fujita, H., "Design, Fabrication, and Operation of Submicron Gap Comb-Drive Micro-actuators", Journal of Microelectromechanical Systems, 1 (1992) 52-59.
[23] Fujita, H., and Omodaka, A., "The Principle of an Electrostatic Linear Actuator Manufactured by Silicon Micromachining", IEEE Solid-State Sensors and Actuators (Transducers '87), pp. 861-864 (1987).
[24] Petersen, K. E., "Silicon as a Mechanical Material", Proceedings of the IEEE, 70 (1982) 420-457.


INTELLIGENT NDI DATA BASE FOR A PRESSURE VESSEL

Shuichi Fukuda
Tokyo Metropolitan Institute of Technology
6-6, Asahigaoka, Hino, Tokyo 191, JAPAN
tel: +81-425-83-5111 ext. 3605, fax: +81-425-83-5119
e-mail: fukuda@mgbfu.tmit.ac.jp

1. Introduction

This paper describes the activities of the committee on the Nondestructive Evaluation Data Base of the Japan Society of Nondestructive Inspection. This committee was set up, together with several other committees of the Japan Society of Nondestructive Inspection, in order to promote standardization in NDI technologies, with financial support from the Ministry of International Trade and Industry. The committee developed several prototype systems based upon a survey. This paper describes the outline of the survey and the systems developed based upon it.

2. Survey

To clarify what should be done by this committee, we carried out survey work; we received 62 answers out of 167.

[1] Is an NDI data base necessary?
very much (44%), yes (52%), not so much (5%), absolutely not (0%)

[2] Has an NDI data base been developed, or is one to be developed?
yes (34%), no (56%), no answer (10%)

[3] If yes in [2], what kind of data base?
image processing data base for inspection (3), welding defects (3), text data base (3), flaw evaluation (3), online welding defect evaluation (1), remaining life evaluation (1), visual inspection (1), corrosion of pipes (1), maintenance inspection for a cubic storage tank (1)

knowledge base for inspection procedure determination (1)

[4] What do you think is important for developing an NDI data base?
evaluation techniques for UT flaw size; image processing techniques; 3D image processing of flaws; expert system techniques; keyword selection; the objective and user image of the data base should be made clear

[5] Would you prefer some organizations to develop an NDI data base for you?
6 yes's

[6] Why do you not develop an NDI data base within your organization?
lack of manpower (15), cost performance problem (9), data are too much diversified and very difficult to standardize (8), no need to treat NDI data separately (7)

[7] What will be the problems if an NDI data base is developed by some official organization?
know-how will spill out (16), standardization of data formats (9), establishment of common needs (4), standardization on the part of computers (4)

[8] If a data base is developed by someone else, would you use it? (question to those who answered that there is no need for an NDI data base)
yes, we will (2); no, we will not (1); it should be incorporated into the larger data base

[9] What kind of data would you store in your data base?
standard data (81%), papers (58%), knowledge (56%), fact (53%), others (15%)
The most frequent answer was to store fact + paper + knowledge.

[10] Where is your data base used?
factory (58%), field (56%), lab (52%), others (10%)

[11] At what stage do you expect to utilize the data base?
raw material (56%), intermediate products (44%), final products (68%), others (2%)

[12] What kind of data format is necessary?
sentences (58%), figures (74%), tables (73%), images (60%), others (11%)

[13] How would you use the data base?
as an encyclopedia (40%), as an information source on methods (55%), others (15%)

[14] Who will be the data base users?
NDI engineers (81%), NDI R&D engineers (64%), production engineers (39%), material R&D engineers (26%), design engineers (15%), others (10%)

[15] What will you think of from the word "evaluation"?
inspection reliability (48%), structural strength evaluation (42%), strength of material evaluation (39%), inspection accuracy (34%), others (21%)

[16] What NDI techniques do you think are important for an NDI data base?

UT (41), RT (24), ET (21), MT (20), PT (15), AE (6), SM (7), others

[17] What conditions do you think are important for an NDI data base?
field (18), factory (8), lab (2)

[18] What material?
steel (18), metal (3), non-metal (13), new material (4), steel raw material (6)

[19] What structure?
welded structure (11), vessels (8), piping (5), heat exchangers (3)

[20] What failures?
cracks (15), corrosion (14), weld defects (5), embrittlement (4)

3. System for Intelligent Text Processing

When we make decisions by referring to, for example, codes and manuals on how we should carry out NDI, or when we produce NDI procedure specifications or make up a report on NDI, the data we use are texts, figures, tables and/or photos. Regrettably, we cannot find appropriate means to retrieve these data in an intelligent manner at the present time, so we are processing them just as simple data files with time-stamp tags.

4. Intelligent NDI Data Processing: Conceptual Design

This is still in the stage of conceptual design. The system aims at sampling data in real time and storing them in the data base. Discontinuous high-speed analog inputs are very difficult to process on an ordinary computer. Therefore, we made a conceptual design for such a system using a computer that possesses a real-time OS and an A/D conversion function. Fig. 1 shows the conceptual design of this system. There is no appreciable difference in real-time processing between NDI data and keyboard inputs; the only difference is input speed. Both can be processed as discontinuous variable-length data at several MB/sec. Data are transferred without CPU intervention to facilitate the processing speed and to reduce the CPU load. For the OS, we will consider using a real-time UNIX with a 10 microsecond to 0.1 millisecond response time, and at the same time we adopt multiple CPUs and distribute the tasks symmetrically.
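The time-stamp-tagged, variable-length records of this conceptual design can be framed, for example, as shown below. The binary layout (a float64 time stamp and a uint32 length before each payload) is our illustrative assumption, not the committee's actual format.

```python
import struct, time, io

def write_record(stream, payload, t=None):
    """Frame one variable-length record as
    [float64 timestamp][uint32 length][payload bytes] (illustrative layout)."""
    t = time.time() if t is None else t
    stream.write(struct.pack("<dI", t, len(payload)))
    stream.write(payload)

def read_records(stream):
    """Read back all (timestamp, payload) records from a stream."""
    records = []
    while True:
        head = stream.read(12)          # 8-byte timestamp + 4-byte length
        if len(head) < 12:
            break
        t, n = struct.unpack("<dI", head)
        records.append((t, stream.read(n)))
    return records

buf = io.BytesIO()
write_record(buf, b"\x01\x02\x03", t=1.5)
write_record(buf, b"ultrasonic A-scan", t=2.0)
buf.seek(0)
recs = read_records(buf)
```

Length-prefixed framing like this is what makes "several MB/sec of discontinuous variable-length data" manageable as plain sequential files, with the time stamps preserving the sampling order.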

5. NDI Data Base Using HyperCard

An NDI data base prototype was developed using HyperCard on a Macintosh. Hypertext is known to provide a very useful tool for developing a data base. In a report, the data can be roughly divided into texts and figures (images); but figures, tables or photos are without exception referred to in the text, so we can utilize the text for retrieving these pieces of information. It is well known that the literature in a certain field, or a report made up by a certain person, always has a certain style or uses a certain vocabulary, and these vocabularies are always linked to other specific words or sentences. Such links constitute pieces of knowledge. Thus, if we examine the sentences for concordance, we can extract a characteristic or a piece of knowledge, and we can utilize this technique for classification. In object oriented terms, if we let a word be an object, classification be class and category, and links be inheritance and message passing, then once classification is completed the report turns into an object oriented data base without any trouble.

We developed a prototype based on this idea using a sample from MITI code number 501, "Technical Standard for Structure, etc. of Power Generating Nuclear Reactor Facilities." The computers used were a SUN Sparc 2 GX Plus, a Macintosh IIci and a NeXT Cube. The Japanese sentences were inputted using a scanner and the OCR software MacReader Japan, and analyzed using Micro OCP (Oxford Concordance Program). For object programming we used Smalltalk-80 and ObjectWorks for Smalltalk; as Smalltalk does not discriminate among data structures, we can easily take in image data. HyperCard and the Expanded Tool Kit were also used. The amount of data is limited, to the extent that the system demonstrates the capabilities of a hypertext approach rather than storing a large amount of data; the purpose of this prototype development was to demonstrate how useful hypertext is for constructing a data base. Fig.2 and Fig.3 show samples of screen images.

In the following example a pressure vessel is chosen. Fig.4 shows the initial screen image, where a user can choose, using the button at right, the type of structure to which an NDI technique is applied. An image of a pressure vessel then appears as shown in Fig.5, and we indicate the location where we wish to inspect using an arrow.
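The word-as-object mapping above (words as objects, classifications as classes, links as inheritance and message passing) can be sketched as a small hypertext node class; the node names, categories and link labels below are hypothetical, not taken from the MITI 501 prototype:

```python
class Node:
    """A hypertext node: a word or passage with named links to other nodes."""

    def __init__(self, name, category=None):
        self.name = name
        self.category = category  # classification: the node's class/category
        self.links = {}           # link label -> target Node

    def link(self, label, target):
        """Attach a typed link (the message-passing analogue)."""
        self.links[label] = target

    def follow(self, label):
        """Traverse a link; None if the label is absent."""
        return self.links.get(label)

# A tiny fragment: a code clause linked to the figure and NDI technique
# it refers to (all labels illustrative).
clause = Node("pressure vessel weld clause", category="code text")
figure = Node("Fig. 2-1 weld joint detail", category="figure")
method = Node("ultrasonic inspection", category="NDI technique")
clause.link("illustrated_by", figure)
clause.link("inspected_by", method)
```

Following links from a clause to its referenced figures and techniques is exactly the retrieval path the paper describes: the text itself is the index.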

If the location is specified, the type of joint there appears and a button for material also appears at right. Thus, even if a user does not know the name of such a type of welded joint, he or she easily finds it out by indicating the location with an arrow; a user who knows where to inspect need not know the expert's name for that joint. This improves the user interface a great deal if the user is a non-expert. If the user then designates the materials used there, the computer prompts him or her to input thickness data, and so on.

When all the necessary information is given, the final screen image appears as shown in Fig.7. The top item shows the inputted type of structure, in this example a pressure vessel; the second shows the inputted material, in this case low alloy steel; the third the thickness; and the fourth the working stress. The 5th item, the type of joint, is automatically inputted in a similar manner when the user indicates the joint to be inspected, as already described. The location specification by arrow also automatically fills in the 6th item, where the applicable codes and standards are shown, and the 7th item, where the applicable inspection methods are shown. In this example the 6th item shows the Fire Prevention Law and the 7th item shows Ultrasonic Inspection, one of the inspection techniques specified by this law for the welded joint under these conditions. The 6th and 7th items correspond: the inspection methods specified by the code shown in the 6th item are shown in the 7th. When we push the 7th button, another applicable technique under the Fire Prevention Law is shown, and by continuing to click the 7th button we can refer to all the applicable techniques. If we click the 6th button, we see what other codes and standards are applicable to the welded joint under these conditions. In the large box under these 7 items, the NDI procedure specification is shown.

This system can be used for design support, too, because by clicking buttons such as materials (2nd) or thickness (3rd) we learn what kinds of codes and standards should be referred to, for example when the load conditions are normal.

6. PC-based Network System

There were many requests that, as PC's are more popular than workstations among NDI engineers in industrial sectors, it would be more appealing to demonstrate what PC's can offer in terms of an NDI data base, and further that, as we are moving into a networked society, what PC's can do using a network.
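The HyperCard walkthrough above (structure, joint and material in; applicable codes and inspection methods out) amounts to a table lookup. A toy sketch with a hypothetical rule table; the entries are illustrative and not taken from the actual Japanese codes:

```python
# Hypothetical rule table: (structure, joint, material) ->
# applicable codes/standards and inspection techniques.
RULES = {
    ("pressure vessel", "circumferential butt weld", "low alloy steel"): {
        "codes": ["MITI Notification 501", "Fire Prevention Law"],
        "techniques": ["RT", "UT"],
    },
}

def applicable(structure, joint, material):
    """Return the applicable codes and techniques, or empty lists
    when no rule matches the given combination."""
    entry = RULES.get((structure, joint, material))
    if entry is None:
        return {"codes": [], "techniques": []}
    return entry

result = applicable("pressure vessel", "circumferential butt weld",
                    "low alloy steel")
```

Clicking the 6th or 7th button then corresponds to iterating over the `codes` or `techniques` lists of the matched rule.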

To answer these demands, we developed a networked data base on PC's which can be utilized over commercial telephone lines. The system was developed using BigModel, a very popular communication software among PC network users, and Fig.8 shows its schematic. We stored text-based data such as reports, papers, research materials, research news and lectures, and codes and standards, together with some associated figures.

Fig.9 shows the initial screen image, where all the committee member and system developer names are shown in Japanese. After inputting the password, the screen image of Fig.10 appears as a main menu. The user then goes on to a submenu shown in Fig.11, where items [1] to [6] correspond to codes and standards, papers, and the other stored categories. If we choose codes and standards we can refer, for example, to ISO, and by getting into another submenu we can look for the details of ISO, JIS and NDIS. Fig.13 shows a sample of information retrieval based on string pattern matching, and Fig.14 shows a sample retrieved table of papers with their titles, authors, volumes, numbers and pages.

As the figures and texts are stored in different file formats, they cannot be referred to at the same time in this system. But information retrieval based on string pattern matching compensates for this disadvantage: we can easily retrieve relevant information from papers and reports, and we can also refer to the relevant codes and standards without knowledge of a word thesaurus. Although this is a very simple system, it is very useful for NDI engineers in Japan, since most of them have access only to PC's and workstations are still very limited among them. But as PC's approach workstations more and more these days, and as the infrastructure for the internet quickly improves, we are now developing, as the next step in this series, an NDI data base on the internet which will permit the processing of images and sounds as well as texts.
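Retrieval by string pattern matching, as in Fig.13, reduces to substring search over the stored text records. A minimal sketch; the record fields and sample entries are our own illustrations:

```python
def retrieve(records, pattern):
    """Return the records whose text contains the pattern
    (case-insensitive substring match, as in a simple grep)."""
    p = pattern.lower()
    return [r for r in records if p in r["text"].lower()]

# Illustrative paper records; titles echo the style of Fig.14.
papers = [
    {"title": "X-Ray Standards for Production and Repair Welds",
     "text": "radiographic standards for weld repair"},
    {"title": "Ultrasonic sizing of cracks",
     "text": "UT sizing of fatigue cracks in piping"},
]
hits = retrieve(papers, "weld")
```

No thesaurus is needed: any string occurring in a paper's text is a valid query, which is exactly the advantage the paper claims for this scheme.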

Fig.1 Conceptual Design for Ultrasonic Testing Equipment Using Real Time Unix

Fig.2 Hierarchical Data Structure of the Nondestructive Evaluation Data Base (texts, figures and images have the same file structure)

Fig.3 Sample of Information Retrieval from the Nondestructive Evaluation Data Base

Fig.4 Initial Screen Image (choose the kind of structure)
Fig.5 Screen Image (1)

Fig.6 Screen Image (2)
Fig.7 Final Screen Image

Fig.8 Network System Using BigModel (PC98 host and PC98, DOS/V and Apple Macintosh clients connected via RS-232C and commercial telephone lines)
Fig.9 Initial Screen Image

Fig.10 Main Menu

Fig.11 Submenu (1)

Fig.12 Submenu (2) (NDISYS listing of ISO technical committees, e.g. TC135 Non-destructive testing, TC44 Welding and allied processes, TC42 Photography, TC112 Vacuum technology)

13 Information Retrieval .-SENDER.. Ά <D & 1* & ftls 7 <) * & -NUM.-R. fif&ÍJEfci'J'X^oiHR Fig.g l ü l l ü i f g t .-SENDER.DATE.R .TIME. T I M E . -CONTENTS- 00000 ν « f+m)i|:Soi>DîLfc » 00001 9 5 ­ 0 2 ­ 1 3 1 7 : 2 0 : 2 7 NDISYS ■ ISO TC135 ( N o n d e s t r u c t i v e t e s t i n g ) o ISO 3057­1974 j # Κ « S t ft ­ $t ft tí.DATE. -CONTENTS- 00001 95-02-13 17:20:27 NDISYS ISO TC135 ( Nondestructive testing ) ·» r Φ * H C 1 o ¿> Ö I U ** 00002 95-02-13 17:22:48 NDISYS ISO TC135 ( Nondestructive testing ) ISO 3058-1974 J $ K 8 & JR .»*£?Η*λΛΐτ<Λ*^ : $m « -NUM. -R. -R.

Fig.14 Sample of Information Retrieval (table of retrieved papers with titles, authors, volumes, numbers and pages)


Advances in Damage Assessment and Life Management of Elevated Temperature Plant - An ERA Perspective

R. D. Townsend

Abstract

In the power, petroleum and process industries the current highly competitive business climate has created a demand for methodologies and techniques which can assist plant operators in reducing operation and maintenance (O&M) costs and deferring capital expenditure while maintaining safety requirements. The primary requirement concerns high temperature components, which operate under the most hostile and frequently variable conditions of temperature and stress. For these components there is a need to improve the accuracy of remaining life predictions, as a means of providing the basis for planned optimum replacement schedules and, where possible, of allowing extension of operating lives beyond the original design.

Previous work at ERA has established many of the techniques, methodologies and non-destructive methods for the post-service condition assessment of pressure parts, weldments and turbine rotors which in service have experienced degradation due to creep, creep-fatigue, fatigue and embrittlement. These provide a sound basis for a deterministic approach to assessing the remaining life of a component. A difficulty with these methods, however, arises from the uncertainties in the input data required for the remaining life calculations. The uncertainties arise through variations in the operational stresses and temperatures, changes and variations in materials properties, and the actual equipment condition. There is therefore a need to develop a risk-based approach to plant life management by coupling the predictive life models with uncertainty distributions for the input parameters, and thus to derive component failure probabilities as a function of future operation.

In this presentation a general description of the risk-based approach to life assessment is given. Examples of the methodology are given by consideration of case studies on: fired heaters, reactors, and pipework.
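The risk-based approach described above couples a deterministic life model with uncertainty distributions on its inputs to yield a failure probability. A schematic Monte Carlo sketch; the Larson-Miller style life model, its constants, and the input distributions below are purely illustrative assumptions, not ERA's actual models or data:

```python
import math
import random

def creep_life_hours(stress_mpa, temp_k):
    """Hypothetical Larson-Miller style creep life model
    (illustrative constants only, not a real material correlation)."""
    lmp = 25000.0 - 4000.0 * math.log10(stress_mpa)
    return 10.0 ** (lmp / temp_k - 15.0)

def failure_probability(target_hours, n=20000, seed=1):
    """Estimate P(predicted life < target operating hours) when the
    operational stress and temperature are uncertain (assumed Gaussian)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        stress = rng.gauss(60.0, 5.0)   # MPa, assumed distribution
        temp = rng.gauss(823.0, 8.0)    # K, assumed distribution
        if creep_life_hours(stress, temp) < target_hours:
            failures += 1
    return failures / n

p = failure_probability(2.0e6)
```

Running the sampler at increasing target hours traces out failure probability as a function of future operation, which is the curve the risk-based method feeds into replacement planning.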

Failure Probability Determination (ERA Technology)


Failure Probability Prediction From Damage Analysis/Modelling

ERA's Life Prediction Methodologies/Software:
- steam raising plant
- process pipework
- fired heaters (Heater PLUS)
- reactors (Reactor PLUS)
- steam/hydrogen reformers (REFORM)
- rotating equipment

Damage Processes (probabilistic damage/cracking assessment routes): Creep, Corrosion, Carburization, Hydrogen Attack, Fatigue, Erosion, Temper Embrittlement

An Automated Diagnostic Expert System for Eddy Current Analysis Using Applied Artificial Intelligence Techniques

Belle R. Upadhyaya, The University of Tennessee, Knoxville, TN, USA
Wu Yan, The University of Tennessee, Knoxville, TN, USA
Mohamad Behravesh, Electric Power Research Institute, Palo Alto, CA, USA
Gary Henry, EPRI NDE Center, Charlotte, NC, USA

ABSTRACT

A diagnostic expert system that integrates database management methods, artificial neural networks, and decision making using fuzzy logic has been developed for the automation of steam generator eddy current test (ECT) data analysis. The following key issues are considered: (1) digital eddy current test data calibration; (2) development of robust neural networks with a low probability of misclassification for flaw depth estimation; (3) flaw detection using fuzzy logic; (4) development of an expert system for database management, flaw detection, and flaw depth estimation; (5) evaluation of the integrated approach using eddy current data; and (6) development of guidelines for the on-line implementation of this technology. The implementation to field test data includes the selection of proper feature vectors for ECT data analysis, the development of a methodology for large eddy current database management, the compilation of a trained neural network library, and a fuzzy logic decision algorithm for flaw detection. The integration of ECT data pre-processing as part of data management, compression and representation, the fuzzy logic flaw detection technique, and tube defect parameter estimation using computational neural networks are the fundamental contributions of this research. A large eddy current inspection database from the EPRI NDE Center is being utilized in this research towards the development of an expert system for steam generator tube diagnosis.

INTRODUCTION

Steam generators and heat exchangers are important components that affect the thermal performance of power plants and chemical industry processes. There are thousands of tubes inside a steam generator; for example, the B&W once-through nuclear steam generator has about 15,000 tubes [1].

Tube degradation can occur due to thermal and mechanical stresses, wear and fretting, fatigue and creep, and corrosion. Depending on plant operating conditions, one or more of these causes can damage the tubes [2]. The degradation of tubes accounts for most steam generator failures. Therefore, the inspection of steam generators is critical to the safe and economical operation of nuclear power plants.

In the past, eddy current inspection has proven to be fast and effective in detecting and sizing most of the degradation mechanisms that occur in steam generators, and it has therefore been used as the standard technique for steam generator tubing inspection. At present only visual observation is used for eddy current test data analysis; this requires highly trained personnel and is labor intensive, and human error in performing the analysis of the test data is the main drawback for its successful application. Other disadvantages of the eddy current inspection method include:

1. Eddy currents are affected by minor variations in the permeability of the test object.
2. Eddy currents are affected by the orientation of a flaw.
3. Sensitivity is much greater at the test surface closest to the test coil.
4. The eddy current phenomenon is described by three-dimensional, nonlinear partial differential equations with very complicated boundary conditions, so modeling analysis methods are very difficult to apply to test data analysis. In addition, multi-frequency tests produce large and complex databases.

The current research and development of eddy current inspection is directed in part towards more quantitative test results and conclusions, and towards reducing human interaction with the testing process [3]. In recent years research in neural networks has advanced to the point where several real-world applications have been successfully demonstrated [4]. These include automated pattern classification, nuclear plant monitoring, signal validation, plant state identification during transients, estimation of performance related parameters, underwater acoustic signature classification and text recognition. Fuzzy logic and expert systems have been shown to be highly successful, reliable, and superior in performance to conventional systems [5]. A fuzzy logic representation offers the advantage of describing the state of the system in a condensed form, developed through linguistic description, and is convenient for applications in monitoring, diagnostics, and control algorithms [6].

The research undertaken here focuses on the problem of automating steam generator eddy current data analysis using an integration of expert system, database management, neural network, fuzzy logic, and digital signal processing techniques. The integration of database management, artificial neural networks, digital signal processing, and decision making using fuzzy logic for the automation of NDT signature analysis is a unique feature of this research. It will also provide a technology base for the safety assessment of system and subsystem technologies used in nuclear power applications of artificial intelligence techniques.

THE EDDYAI EXPERT SYSTEM

An expert system called "EDDYAI" has been developed. This system integrates ECT database management, flaw detection using fuzzy logic, and defect parameter estimation using artificial neural networks.

EDDYAI is developed on the PC WINDOWS platform and combines all of the analyses into a user-friendly diagnostics system. Figure 1 shows the major function blocks of the EDDYAI expert system: it consists of a user interface, a rule base, a knowledge base and supporting modules. The overall system can perform the analysis of multi-frequency ECT data automatically.

User interface: The user interface provides the choice of eddy current inspection data file, the display of related information and graphics, the execution of flaw detection and defect sizing, and the presentation of data analysis results.

ECT data file: The system uses multi-frequency, DRES format ECT tube inspection data files. A DRES format data file has three parts: index, calibration data, and measurement data.

Knowledge base: The knowledge base consists of the fuzzy membership functions and the trained neural networks for defect parameter estimation.

Figure 1 The structure of the EDDYAI expert system

Rule base: The rule base consists of the logical steps for data analysis and the rules for decision making.

Defect sizing: The defect sizing function block consists of the trained neural networks for defect sizing.

Data calibration: Data calibration performs the null point determination, phase angle calibration and magnitude calibration.

Peak detection: Peak detection scans the ECT measurement raw data and finds all the peaks.

Data representation: Data representation reorganizes the ECT measurement raw data using different data representation algorithms.

Fuzzy flaw detection: The fuzzy flaw detection system prepares the fuzzy system input and the fuzzy membership functions, executes the fuzzy inference engine, and finds the flaws in the ECT data.

ECT DATA MANAGEMENT AND CALIBRATION

Fifty-seven sets of multi-frequency ECT data were obtained from the EPRI NDE Center. These data files are stored in the DRES format and contain both calibration data and actual data, covering pitting, ODSCC, and field eddy current test data. Each data file contains sixteen types of signals coming from eight measurement frequency channels. Figure 2 shows a typical impedance plane (resistance versus inductive reactance) trajectory of data from a differential eddy current probe transducer.

Figure 2 A typical impedance plane trajectory of data from a differential eddy current transducer

ECT Data Management

The size of each EPRI ECT data file is about 35 kbits. For such large, multi-frequency data files it is necessary to develop a procedure to manage and compress the data. The data management system of EDDYAI has the following main components.

Selection by measurement frequency: The user can select the ECT signal by measurement frequency, so only 1/8 of the ECT measurement data is handled in each data analysis cycle.

Calibration data or actual data: The user can perform the data calibration procedure by selecting the calibration data, or start the data analysis procedure by choosing actual data from the DRES format data file.

Peak detector: The ECT data can be classified as normal data or unusual data. The ECT signal for a good tube with the same structure appears as a straight line; this kind of data is defined as normal data. An ECT signal that shows peaks, curves, or big jumps indicates changes in tube structure (such as a tube support or tube end) or tube damage (such as pitting, thinning, etc.); this kind of data is referred to as unusual data. A peak detection technique is used to find the unusual data parts. Figure 3 shows the inputs and outputs of a peak detector. We use the radii as the input sequence R, where the radius is defined as the distance from the current position to the null point. We analyze the input sequence R for valid peaks and keep a Count of the number of peaks encountered and a record of Indices which locate the points at which the Threshold is exceeded in a valid peak. A peak is valid when the number of consecutive elements of R that exceed the Threshold is at least equal to the Width.

ECT Data Calibration

Eddy current inspection requires standard calibration specimens (tubes) with artificial defects for the initial instrumentation set-up and the subsequent signal analysis and interpretation. These tubes should be identical in material and size to the tubes to be tested. Minimum calibration requirements include inner diameter (ID), outer diameter (OD) and through-wall defects [7]. For eddy current data analysis there are three steps in data calibration: null point determination, phase angle calibration and magnitude calibration.

Null Point Determination: The null point is the ECT signal in the flaw-free region of the calibration specimen. It is the origin for the phase angle and magnitude calculations. An accurate null point makes the decisions based on the phase angle and magnitude of the ECT signal more reliable.
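The peak validity rule described above (consecutive radii exceeding the Threshold for at least the Width, returning a Count and the Indices) can be implemented directly. A sketch; the paper gives no code, so the function and variable names are ours:

```python
def detect_peaks(r, threshold, width):
    """Scan the radius sequence r. A peak is valid when at least
    `width` consecutive samples exceed `threshold`. Returns
    (count, indices), where indices holds the (start, end) sample
    range of each valid peak."""
    count, indices = 0, []
    i, n = 0, len(r)
    while i < n:
        if r[i] > threshold:
            # Walk to the end of this above-threshold run.
            j = i
            while j < n and r[j] > threshold:
                j += 1
            if j - i >= width:  # run long enough: a valid peak
                count += 1
                indices.append((i, j - 1))
            i = j
        else:
            i += 1
    return count, indices

# Illustrative radius sequence: two valid peaks and one spike
# too narrow to satisfy the Width criterion.
signal = [0, 1, 5, 6, 5, 0, 4, 0, 0, 7, 8, 9, 0]
count, indices = detect_peaks(signal, threshold=3, width=2)
```

The single-sample spike (value 4) is rejected, which is how the Width parameter filters noise from genuine structure or damage indications.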

Two procedures to find the null point in the ECT calibration data have been developed in this project. The first procedure uses the mean value of the flaw-free region data as the null point. Its drawback is that the lift-off effect can shift the true null point, and equipment noise also reduces the accuracy of the null point estimation. In order to remove the effect of lift-off, a second null point determination procedure has been developed using the intersection of the phase angle slopes of the outside diameter (OD) standard defects. In the ECT calibration data set there are five OD defects varying from 100 percent to 20 percent through-wall depth. These OD signals should share the same null point, so any two phase angle slopes of these OD signals determine the null point as the intersection of the two slopes.

Phase Angle Calibration: In ECT signal analysis, phase angle information is used to find the flaw locations and the flaw depth. The phase angles of the 100% through-wall OD signals for the different frequencies should be placed at an angle of 140 degrees. If the phase angle value of a 100% through-wall OD signal is not around 140 degrees, it is rotated to 140 degrees. This is the phase angle calibration procedure.

Magnitude Calibration: In ECT signal analysis, the magnitudes of the 20% through-wall OD signals are usually set to 4 volts. The magnitudes of all other signals are converted to the voltage scale by comparison with the 20% OD signal. This is the magnitude calibration procedure.

FUZZY LOGIC DECISION MAKING FOR FLAW DETECTION

In ECT data analysis, the decision for flaw identification and estimation may have a high uncertainty because of the large number of defects with overlapping patterns and the information from multi-frequency tests. Fuzzy logic may be used for decision making in this situation, which is characterized by uncertain and/or non-crisp information. Fuzzy logic is the logic of fuzzy (approximate) measurements and is believed to be similar to the human decision making process. The beginning of fuzzy logic is most widely associated with Lotfi Zadeh, who in 1965 wrote the original paper formally defining fuzzy set theory, from which fuzzy logic emerged [8]. The important difference between the fuzzy logic approach and traditional approaches is that the former uses qualitative information whereas the latter require rigid mathematical relationships describing the process. Fuzzy logic is characterized by linguistic variables whose values are words or sentences in a synthetic language. For example, we can define temperature as a fuzzy variable taking the linguistic values low, medium and high; such values are called "fuzzy values." The definition of "low" or any other term depends on the user's judgment. In fuzzy logic, such a judgment is formulated by a possibility distribution function (often taking values between 0 and 1) and is referred to as a "membership function." The key issues in fuzzy logic applications are information representation, the construction of fuzzy membership functions, and the development of the compositional rule of inference.
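The calibration chain described above (rotate the impedance-plane signals so the 100% through-wall OD signal sits at 140 degrees, then scale so the 20% OD signal has a magnitude of 4 volts) can be sketched with complex numbers. The representation and the sample values are our own illustration, not the DRES data format:

```python
import cmath
import math

def calibrate(signals, od100_key="OD100", od20_key="OD20"):
    """Phase and magnitude calibration of impedance-plane signals.
    `signals` maps labels to complex peak values (resistance +
    j*reactance), already referenced to the null point."""
    # Rotation that puts the 100% OD signal at 140 degrees.
    rot = cmath.exp(1j * (math.radians(140.0)
                          - cmath.phase(signals[od100_key])))
    rotated = {k: v * rot for k, v in signals.items()}
    # Gain that sets the 20% OD signal magnitude to 4 volts.
    gain = 4.0 / abs(rotated[od20_key])
    return {k: v * gain for k, v in rotated.items()}

# Illustrative uncalibrated peaks (magnitudes and angles are made up).
raw = {"OD100": cmath.rect(2.0, math.radians(100.0)),
       "OD20": cmath.rect(0.5, math.radians(30.0)),
       "flaw": cmath.rect(1.0, math.radians(60.0))}
cal = calibrate(raw)
```

Because both operations are complex multiplications, every signal in the file receives the same rotation and gain, preserving the relative phase angles on which the flaw depth estimate depends.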

(a) information representation, (b) building fuzzy membership functions, and (c) developing the compositional rule of inference.

Flaw Detection System Using Fuzzy Logic

Fuzzy logic is used for flaw detection in steam generator tubing. The fuzzy logic flaw detection system contains system input, fuzzification, membership functions, rule base and evaluation of rules, defuzzification, and system output. Figure 4 shows the organization of this system.

Figure 4: Structure of a flaw detection system using fuzzy logic (system input, fuzzification, rule evaluation using the membership functions, defuzzification, system output).

System Input and Output: System input is the status information (such as phase angle values of eddy current signals) about the external system (such as the test specimen). System output is the decision we want from the fuzzy system. In the fuzzy flaw detection system, there are four system inputs. They are the phase angle values of the four absolute signals for different frequencies. The output of a fuzzy flaw detection system should be a decision of "ID flaw", "OD flaw", or "Not a Flaw."

Membership Functions: Each system input is associated with a fuzzy set which contains the appropriate membership functions. Every fuzzy set has its own membership function. The values of membership functions are between 0 and 1. Each system input has three fuzzy sets: ID flaw (ID), OD flaw (OD), and Not a flaw (OT). The system output has seven fuzzy sets: Negative Unknown (NN), ID flaw (ID), Possible ID flaw (PI), Not a flaw (OT), Possible OD flaw (PO), OD flaw (OD), and Positive Unknown (PN). The universe of the fuzzy sets varies from 0 to 360, since the phase angle range is from 180 degrees to -180 degrees. Figure 5 shows typical membership functions of a flaw detection system input. The phase angle values at points A, B, C, D, E and F are determined by the phase angles of absolute signals in the calibration data set.
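A minimal sketch of such input membership functions, assuming triangular shapes and hypothetical breakpoint values A through F (in the paper these come from the calibrated absolute-signal phase angles):

```python
def tri(x, a, b, c):
    """Triangular membership: 0 outside [a, c], rising to 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical breakpoints on the 0-360 degree universe.
A, B, C, D, E, F = 20, 60, 140, 200, 280, 340

def fuzzify(phase):
    """Degrees of membership of one input phase angle in the three sets."""
    return {"ID": tri(phase, A, B, C),
            "OD": tri(phase, C, D, E),
            "OT": tri(phase, E, F, 360)}
```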

Fuzzification: Fuzzification is the process of estimating the degree of membership of an input. Once a system input is fuzzified, it becomes a fuzzy input which can be used for fuzzy inference.

Figure 5: Fuzzy sets and membership functions of a high frequency signal.

Rules and Rule Evaluation: The rules inside a fuzzy inference system represent the relationships between system inputs and system outputs. Each rule has the form of an if/then statement. The 'if' side of the rule has one or more conditions; the 'then' side has one or more decisions. These decisions are linguistic variables. The conditions of the rules correspond directly to degrees of membership (fuzzy inputs) calculated during the fuzzification process. Once the system inputs are fuzzified, the strength of each rule is computed using the minimum operation. Then the output of each rule is multiplied by its strength value, and the final result is obtained by performing a maximum operation among the outputs of all rules. This method is called max-product composition. In the fuzzy flaw detection system, there are four system input signals: the primary frequency signal, the medium frequency signal, the high frequency signal, and the quarter frequency signal. The primary and medium frequency signals were used as the main inference signals, and the high frequency and quarter frequency signals were used as additional information for flaw detection. A total of 25 rules were developed for decision making for flaw detection. They covered all possible input combinations.

Defuzzification: After rule evaluation, more than one rule may be fired. Therefore, there may be more than one decision with different strengths, and they may conflict with each other. The defuzzification process is used to resolve these conflicts and convert the linguistic variable to a crisp value based on the membership functions of the system outputs. The defuzzification is performed by the center of gravity method. The idea is to find the center of gravity of the shaded area that represents the output; the projection of the gravity center on the x-axis is the result of the inference. This result is then used in generating the decision for flaw detection, which can thus be made based on the phase angle information in the system input.

Evaluation of the Flaw Detection System Using Fuzzy Logic

The fuzzy flaw detection system was tested using one set of calibration absolute data from the EPRI NDE Center. The calibration data set was first pre-processed by using the peak detection procedure.
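The minimum rule strength, max-product composition, and center-of-gravity defuzzification described above can be illustrated with two toy rules and assumed output-set shapes (this is a sketch, not the 25-rule system of the paper):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function, vectorized over x."""
    x = np.asarray(x, dtype=float)
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

# Output universe (0 = certainly ID, 1 = certainly OD); the set shapes
# are illustrative assumptions.
y = np.linspace(0.0, 1.0, 201)
OUT = {"ID": tri(y, 0.0, 0.2, 0.5), "OD": tri(y, 0.5, 0.8, 1.0)}

def infer(mu_primary, mu_medium):
    """Two toy rules: IF primary is ID AND medium is ID THEN ID;
                      IF primary is OD AND medium is OD THEN OD."""
    s_id = min(mu_primary["ID"], mu_medium["ID"])   # rule strength: minimum
    s_od = min(mu_primary["OD"], mu_medium["OD"])
    # max-product composition: scale each output set by the rule strength,
    # then combine with a point-wise maximum
    agg = np.maximum(s_id * OUT["ID"], s_od * OUT["OD"])
    # center-of-gravity defuzzification: project the centroid onto the x-axis
    return float(np.sum(y * agg) / np.sum(agg))
```

For example, strongly ID-like fuzzified inputs yield a crisp value below 0.5, which would then be mapped back onto the seven output decisions.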

In the data scan procedure, the Width was set to 3 and the Threshold was set to 0.5 volt. Twenty peaks were found after the data scan. The phase angle values are converted to the conventional form, which uses the 180 degree axis as the 0 degree axis. In the fuzzy flaw detection system, once the values of points A, B, C, D, E and F are determined, the membership functions can be obtained. As described in Figure 5, the membership functions were established by using the calibrated OD defect phase angles. The twenty peaks were tested using the fuzzy flaw detection program. Table 1 shows the results of the test: for each of the twenty peaks it lists the test number, the data location, the phase angle values of the four frequency signals, the desired decision (OD, OT or Unknown) and the fuzzy logic decision (OD, OT or PN). From the results, it can be concluded that the fuzzy system can detect flaws with a high degree of success.

Table 1: Results of the Preliminary Study of the Flaw Detection System.
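The data scan is not spelled out in the paper; one plausible reading of the Width and Threshold parameters is a local-maximum scan such as the following sketch:

```python
def find_peaks(mag, width=3, threshold=0.5):
    """Return indices whose magnitude exceeds `threshold` volts and is the
    maximum within +/- `width` samples (an assumed form of the data scan;
    the paper used Width = 3 and Threshold = 0.5 volt)."""
    peaks = []
    for i in range(len(mag)):
        lo, hi = max(0, i - width), min(len(mag), i + width + 1)
        if mag[i] >= threshold and mag[i] == max(mag[lo:hi]):
            peaks.append(i)
    return peaks
```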

NEURAL NETWORKS FOR FLAW DEPTH ESTIMATION

Artificial neural networks (ANNs) provide a general mapping between two sets of information. This nonlinear mapping from data to data is very useful in associating information pairs where a clear mathematical relationship is not available. Artificial neural networks have been applied to the problems of pattern classification, plant monitoring, signal validation, underwater acoustic signature recognition, transient state identification in power plants, and many others [9]. A general architecture of a multi-layer neural network is shown in Figure 6. The solid nodes in the figure are processing elements in the neural network. The input layer requires a signature vector from measured data. The network output may be in the form of a signature vector or a pattern classification index (pattern type or pattern parameters). The number of processing elements in a middle layer is often determined experimentally. The backpropagation network (BPN) algorithm is used in this research project. This algorithm is the most widely used systematic method for supervised learning in multiple (three or more) layer artificial neural networks. The mathematical basis for the backpropagation training of ANNs is straightforward but involves several steps; it is very well documented in Ref. [10].

Figure 6: Architecture of a multilayer neural network (input layer, hidden layers 1 and 2, output layer).
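The BPN training loop can be sketched as follows. This is a minimal illustration with tanh units and a mean-squared-error loss, not the authors' software; the learning-rate and momentum values echo those quoted later in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_bpn(X, T, hidden=8, lr=0.2, momentum=0.4, epochs=10000):
    """Three-layer backpropagation network mapping inputs X to targets T."""
    n_in, n_out = X.shape[1], T.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
    vW1, vW2 = np.zeros_like(W1), np.zeros_like(W2)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)              # forward pass
        Y = np.tanh(H @ W2 + b2)
        dY = (Y - T) * (1.0 - Y**2)           # backpropagated deltas (MSE loss)
        dH = (dY @ W2.T) * (1.0 - H**2)
        vW2 = momentum * vW2 - lr * (H.T @ dY) / len(X)
        vW1 = momentum * vW1 - lr * (X.T @ dH) / len(X)
        W2 += vW2; b2 -= lr * dY.mean(axis=0)
        W1 += vW1; b1 -= lr * dH.mean(axis=0)
    return lambda x: np.tanh(np.tanh(x @ W1 + b1) @ W2 + b2)

# XOR as a smoke test of the general input-output mapping capability
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
net = train_bpn(X, T)
```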

ECT Data Representation

For the artificial neural network approach to be effective in defect type identification and defect parameter estimation, the raw eddy current test data must be properly represented. The current neural network software has the following requirements: (a) all data files must have the same number of elements in the input vector; (b) the total number of elements in the input layer, hidden layer and output layer must not exceed 350; (c) all training and testing data files must be normalized. In addition, the information input to the system must have certain features. These are (a) size of the data vector, (b) invariance to data scaling, (c) invariance to data orientation, and (d) sensitivity of defect type and defect parameters to input signatures. In order to meet the above requirements, the data representation methods involve reorganizing the raw measurement data using (1) selected raw data, (2) magnitude and phase of raw data, (3) the linear integral value of raw data, (4) the sequence of radii from the center of gravity to the closed contour of the shape, and (5) segmentation of raw data.

Selected Raw Data Representation: The eddy current test data file contains thousands of data points. Only the data close to a defect are useful for analysis. Once a defect location is determined, 50 data points around this location are selected for neural network training.

Magnitude and Phase Representation: Since the magnitude and phase information is very important for flaw detection and sizing, the raw data is converted to magnitudes and phases for neural network training. One advantage of this representation is that the phase and magnitude can be normalized separately.

Linear Integral Signal Representation: The linear integrated raw data has been found to be useful for defect type identification [11].

Radii from the Center of Gravity: Since the defect parameters will influence the null point of the eddy current signal, a sequence of radii from the center of gravity to the contour of the shape has been used for neural network training. This technique maps the structural shape information of an object into a fixed feature vector of real numbers and is robust to object variations in position, orientation and size. This method has been shown to be very effective in defect parameter estimation [11].

Data Segmentation Representation: The operation of this method is to convert the data points from Cartesian coordinates into a polar coordinate system. The null point is set as the origin of the polar coordinate system. The polar coordinate system is then angularly divided into N segments, each of which covers 360/N degrees. For each segment, three types of information are obtained and used: (1) the value of the maximum distance from the origin, (2) the number of data points in this region (in percent), and (3) the average value of the data. These representations will be used in this project for tube support detection and flaw type identification.
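The segmentation representation can be sketched as follows, assuming the ECT samples are complex values and the null point is the polar origin (a hedged illustration, not the authors' code):

```python
import numpy as np

def segment_features(z, null_point=0j, n_segments=8):
    """Per angular segment about the null point, return the three features
    named in the text: max radius, percent of points, and mean radius."""
    w = np.asarray(z) - null_point
    r = np.abs(w)
    theta = np.angle(w) % (2 * np.pi)
    seg = (theta / (2 * np.pi / n_segments)).astype(int) % n_segments
    feats = []
    for k in range(n_segments):
        rk = r[seg == k]
        if rk.size:
            feats += [rk.max(), 100.0 * rk.size / r.size, rk.mean()]
        else:
            feats += [0.0, 0.0, 0.0]  # empty segment
    return np.array(feats)
```

The resulting vector has a fixed length of 3N regardless of how many raw samples the defect region contains, which satisfies requirement (a) above.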

Development of Computational Neural Networks for Depth Estimation of Pitting Defects

A three-layer backpropagation neural network was trained using the pre-processed magnitude and phase of eddy current pitting data for depth estimation. The neural network has 100 input elements, 50 hidden elements, and 1 output element. The normalized cumulative hyperbolic tangent transfer function is used as the nonlinear transfer function. In the pitting data set, we chose 29 samples for training and 6 samples for testing. The learning coefficient was set to 0.2 for the first 5000 iterations, and to 0.15 for the succeeding iterations. The momentum term was set to 0.4. After 28,000 iterations, the normalized root-mean-square (RMS) error decreased to 0.00001. The RMS error for estimation of the recall data was 0.03. Figure 7 shows the recall results using the six recall data points.

Figure 7: Recall results of pitting defect depth estimation (network output versus desired output).

CONCLUSIONS AND FUTURE WORK

The purpose of this research project is to develop a robust methodology for eddy current data analysis and automation of steam generator tube diagnostics. The estimation of tube defect parameters, such as defect size and flaw depth, can be performed with high accuracy using computational neural networks. For the applied artificial intelligence methods to be successful, it is important to pre-process the ECT data and perform proper calibration to avoid the effects of scaling and null-point shifts. Thus, the integration of ECT data pre-processing, fuzzy logic decision making, and parameter estimation using neural networks are the fundamental contributions of this research. The fuzzy logic flaw detection module developed in this context is a unique and important contribution of this research. The following work will be performed in the future:

• Testing and improvement of the fuzzy logic-based flaw detection system using an extensive ECT database.
• Analysis of the effect of noise in ECT data and its compensation.
• Integration of the modules of the diagnostic expert system with an appropriate user interface.

ACKNOWLEDGMENTS

The research reported here has been sponsored by a grant from the Electric Power Research Institute, Steam Generator NDE Program.

REFERENCES

1. "Steam/Its Generation and Use," Babcock & Wilcox, New York, 1978.
2. W. E. Deeds and C. V. Dodd, "Eddy Current Testing," in Electromagnetic Methods of Nondestructive Testing, Monographs and Tracts Vol. 3, W. Lord, Ed., Gordon and Breach, New York, 1985.
3. V. S. Cecco et al., "Eddy Current Testing," Vol. 1, GP Publishing, Inc., 1987.
4. D. Shaffer, "Eddy Current Testing: Today and Tomorrow," Materials Evaluation, January 1994, pp 28-32.
5. L. A. Zadeh, "Outline of a New Approach to the Analysis of Complex Systems and Decision Processes," IEEE Trans. Systems, Man and Cybernetics, Vol. SMC-3, No. 1, 1973.
6. L. A. Zadeh, "Fuzzy Logic," IEEE Computer, April 1988.
7. Y. Liu, B. R. Upadhyaya and M. Naghedolfeizi, "Chemometric Data Analysis Using Artificial Neural Networks," Applied Spectroscopy, Vol. 47, No. 1, 1993, pp 12-23.
8. G. Viot, "Fuzzy Logic: Concepts to Constructs," AI Expert, November 1993, pp 26-33.
9. R. P. Lippmann, "An Introduction to Computing with Neural Nets," IEEE ASSP Magazine, Vol. 4, No. 2, April 1987, pp 4-22.
10. D. E. Rumelhart and J. L. McClelland, Eds., "Parallel Distributed Processing," Vol. 1, Bradford Books/MIT Press, Cambridge, MA, 1986.
11. B. R. Upadhyaya and W. Yan, "Hybrid Digital Signal Processing and Neural Networks for Automated Diagnostics Using NDE Methods: Eddy Current Inspection of Steam Generator Tubing," NRC Final Report, NUREG/GR-0010, November 1993.


APPLICATIONS OF INTELLIGENT SYSTEMS TO SUBSTATION PROTECTION CONTROL

Guilherme Moutinho Ribeiro (1), Germano Lambert-Torres (2), Alexandre P. Alves da Silva (2)

1. Companhia Energética de Minas Gerais - CEMIG, OT/SE4 - AP/2, Av. Barbacena 1200, 30161-970 - Belo Horizonte - MG - Brazil. Phone: +55-31-3494161; Fax: +55-31-3492691.
2. Escola Federal de Engenharia de Itajubá - EFEI, Av. BPS 1303, 37500-000 - Itajubá - MG - Brazil. Phone: +55-35-6291240; Fax: +55-35-6291187; e-mail: germano@efei.ufing.dcc.br

Summary: Every switching operation within a power substation is based on pre-established conditions which stem from engineering studies considering both equipment and component limitations and constraints inherent to the switching itself in relation to the system situation. The intelligent automatic system for the switching operation proposed here is built with the support of Expert System (ES) techniques, in such a way that the switching plan constituted through the traditional logic is enriched and enhanced with the experience and heuristic knowledge of the operators and dispatchers. The process for the inclusion and validation of new knowledge or components in the substation system is also discussed. Study cases which permit an inference analysis of the problem solution, with an evaluation of the results obtained, are presented. Finally, a comparison between the proposed solution and the conventional system and procedures is carried out, pointing out the advantages and limitations of this expert system application.

1. INTRODUCTION

The introduction of digital technology and the development of Expert Systems (ES) techniques have made the intelligent automation of switchings in substations (SS) possible, consolidating every operating technology besides aggregating advancements to the supervision and control operating functions. Despite all its evolution, the analogic solutions used in the blocking, control and monitoring functions of switching actions do not eliminate the possibility of erroneous interpretation in the recognition of pattern conditions. With the ongoing digitalization of power substations, the introduction of logic computer programs which perform these functions more safely, in a wide range of distinct and adaptive configurations and conditions, becomes propitious and recommendable.

Any operating intervention entails the elaboration of a Switching Plan, in which the actions and commands are sequentially linked. In the generation of the Switching Plan, the equipment operating limitations, the operating constraints inherent to each command, the SS operating criteria and the company switching philosophy are considered.

Every application, formerly based on the analog technique, must be reconsidered in terms of its basic concepts in order to take advantage of the whole potential of the new technologies. Simply transferring or adapting current procedures does not allow the flourishing and incorporation of advancements into the operative procedures involving supervision and control [1]. It is necessary to frame up the whole assembly so that further results may be achieved by the implementation of each fraction.

Upon the introduction of digital technology into the SEs and the advent of the practical application of Artificial Intelligence - Expert Systems techniques, it is possible to define and to establish a new operating paradigm, with effective gains. The utilization of ES in SE operation and control aims at the evolution of the local supervision and control systems, adding all the background and heuristic knowledge to the formalized one, besides making the automation of the power substation operation possible by substituting the human decision/action with an artificial action of the same efficiency level, so that an intelligent move may be achieved. The use of the ES technique makes it possible to store every switching technology, incorporating practical and heuristic knowledge and handling various types of data and information, optimizing the operative processes from the functional and economic points of view. The reunification of actions and commands in coherent processes of high aggregated value allows reliability, quickness and efficiency to be associated to the switching operating functions.

2. POWER SUBSTATION OPERATION

The operation of a Power Substation (SE), which simultaneously puts together various concepts and domains, is intrinsically complex, due to the high degree of uncertainty and the large number of variables involved. The various supervision and control actions require the presence of an operator, who must be capable of efficiently responding to the most diverse requests. In view of the operational complexity of a SE, there is a need for validation of the measurements (input data) and diagnosis (identification of the need for action), for the determination of the strategies of operative action [2]. With the availability of real-time sampling of the digitalized state data (switching equipment and protections) and of the analogical quantities, it is possible to proceed with a treatment (consistency and validation) and to structure a set of information with a certain degree of truthfulness and suitability, able to support the intelligent automation of the switching operating functions. Thus, with the new technologies and the availability of data and information, a quality leap in the SE operation mode was made possible, and the operating processes are optimized under the functional and economical standpoint. Figure 1 shows the operative architecture of a typical SE, while Figure 2 shows the functional states of a SE, both listing the various operative actions.

Figure 1: Operative Architecture and Functions of a Substation. (a) Operative architecture: input data are submitted to handling and validation of the signals and to management of measurements and estimation of the state, followed by diagnosis, which supports the supervision, control and protection functions. (b) Operative functions:

Preventive actions (state): equipment supervision, topology management, operative state surveillance, loading surveillance, load dispatch, availability, operative limits, service life.
Control actions: voltage control (tap control), stability control and prevention, safety and integrity function; reactive control (capacitor bank switching, control of reactive power); switching plan (sequential switching, transfers, isolation, interlocking).
Corrective actions: alarm handling, oscillography (recording), load shedding and restoration schemes, analysis of occurrences, corrective strategies for emergency states, restoration.

Figure 2: Functional States of a Substation (SAFE, NORMAL, ABNORMAL and DISTURBANCE states, linked by preventive actions, control actions, corrective actions, the elimination of the fault, and the change of state in view of the occurrence).

3. FUNCTIONAL REQUIREMENTS AND ORGANIZATIONAL ASPECTS

Any ES application must be conceived in a modular form, with an open and flexible architecture, designed so as to meet various requirements such as reliability, flexibility, speed and safety. With the utilization of ES, the simplification of the control system and the increase of the value added to the supervision functions, in addition to improving the operational efficiency of a SE, enable the implementation of nonconventional control and supervision functions. The conception within a functional hierarchical structure reduces the design, implementation and maintenance costs [3].

The formulation of the solution of the operative problems of a SE must take into account the flow of information (dynamic data) and of control (actions) for its hierarchical definition. The functions must be performed at a level as near as possible to the process, thinking in overall terms and acting locally. The optimum tendency is the decentralization of the supervisory control, but at a "typical span" level, with only functional subordination. Every established hierarchical level must contain only elements which contribute to the execution of the assignments associated to it, thus assuring functional cohesion. Logical cohesion, where the activities to be performed are defined outside the level, and coincidental cohesion, where the activities of the level are not logically related by data flows or control flows and do not have any significant relationship among them, must be prevented.

In both instances, the rules must be organized as modules, in order to prevent an eventual need of revising a reasoning and, consequently, the considerable number of rules attached to it, in the event of the adaptation of the SE to an expansion or modification of its topology. In addition, modularity allows expansion of the system itself, by incorporation of new or complementary functions or upon the application of new functions that may eventually be conceived and made feasible. Such future expansion possibilities must be taken into account not only in the formulation of the Data Base, but also in the construction of the rules (Knowledge Base), allowing and facilitating future expansions capable of harmonically absorbing the evolutions of the SE topology and assuring portability.

The Knowledge Base is structured using as a basis Facts and Rules which must portray all the knowledge available about the SE, whether formal, factual or empirical, properly stored in the form of "facts and rules". It must contain a detailed description of the SE, its components and equipment, and its main operational characteristics, such as topology, static and dynamic attributes, schemes, the possible contingencies and their restoration sequences, and restoration guidelines and philosophies, considering the context of the Power Electric System (PES) to which it belongs. The knowledge of the causal, existential, temporal and functional relationships among the evidences, the hypotheses or the parameters of the models which may be used in the solution of an operational problem comprises the formal part of the Knowledge Base. However, the majority of the details of the specialization tend to be grouped and assembled in heuristic rules, generally developed in the mind of the experts through extensive observation of typical results. The heuristic rules attempt to substitute the need of memorizing details or particulars of the SE or the Electrical Power System (SEP). Such rules may be combined and compared among themselves, in order to reach a logically consistent, but inaccurate solution (Fuzzy Logic), and define the solution of a problem or subproblem. The Inference Motor enables the ES to infer knowledge, by using the information stored in its Knowledge Base, in order to obtain results which did not exist "a priori". In the inference process, the information derived is not completely new; it actually results from the interrelationships of previously stored information. Thus, it allows the decision-making to be made on the basis of a wide knowledge.

4. SWITCHINGS IN SUBSTATIONS

All the switchings in SS follow operating criteria pre-established by engineering studies, which consider both the switching equipment operating limitations and the operating constraints inherent to the switching itself, in face of the operating flexibility allowed by each topology. The supervision and control functions are conceived aiming to monitor the switchings, to safeguard the integrity of equipment and people, and to provide their correct suitability to the operating technology [4]. The main switchings in a SS are:
• energization/disenergization of equipment;
• insulation of equipment (bus, transformer or line);
• circuit-breaker bypass;
• circuit-breaker transfer;
• blocking/unblocking of automatisms (e.g. automatic re-closing).
Thus, a SS may have an ES automatizing its switchings.

Considering as an example the data of the bay "1K", output of a Transmission Line (TL), as presented by Figure 3, the following signal set is obtained:

    V(K)      voltage measured in bus K
    V(1K)     voltage measured in the output of bay 1K
    I(1K)     current in the output of bay 1K
    1K3       disconnecting switch
    1K5       disconnecting switch
    1K5T      grounding disconnecting switch
    1K6       bypass disconnecting switch
    1K4       circuit breaker
    1K50/51   line overcurrent protection
    1K50/51N  neutral overcurrent protection

where, for example, 1K3 means:
    1 - circuit number 1
    K - voltage level 138 kV
    3 - disconnecting switch near the bus

Figure 3: Typical TL Bay 1K (TL to SS2).

A switching can be broken down into a set of actions and commands of the kind:
• action (open/close - block/unblock);
• command (verify open/verify close);
which can be either individual or part of a sequential switching plan. The switching plan to bypass the circuit-breaker of bay 1K would be made up of:

    SEQUENCE  SWITCHING
    1         check the voltage in bus BK and in circuit 1K
    2         check closed switches 1K3 and 1K5
    3         check closed breaker 1K4

    4         block the automatic reclosing of breaker 1K4
    5         close the bypass switch 1K6
    6         open the breaker 1K4
    7         open the switches 1K3 and 1K5

In the inference process for the actions and commands which will make up a switching plan, the following steps are found:
a) to receive the switching;
b) to identify the circuits involved;
c) to check the state of the equipment (energized/disenergized);
d) to check the selector switches setting;
e) to check the switching equipment setting (opened/closed);
f) to refer to the rules concerning the problem (interlocking, orientation and operating constraints);
g) to lay out the switching plan, generating necessary and sufficient actions and commands in sequential order;
h) to validate the switching plan (simulation).

The conception and development of an automatic system able to switch a SS necessarily undergo the analysis and evaluation of the previously established paradigms and the definition of the problem-solving strategy. The characteristics inherent to the problem recommend the use of Expert Systems as the most suitable tool for its equationing and solution [5].

5. EXPERT SYSTEM FOR POWER SUBSTATION RESTORATION - ESRASE

An Expert System (ES) is a program able to treat a certain problem within a specific domain, imitating the behavior of a human specialist of this field, and solving complex problems through the use of inference and knowledge methods. Its application is justified in cases of unavailability of an established theory, where there are doubtful data and information and troubleshooting problems. The ES is based upon knowledge (and not on data). The ES uses knowledge outlined by symbolic representation and applies heuristic rules through deductive processes, which create inference paths for the problem solution. Since control is separated from knowledge, knowledge is allowed to be withdrawn, included, renewed, updated or modified without causing any change in the program operating structure; the rules are structured with a high degree of flexibility in face of the associative access to the rules [6].

By considering the circuit-breaker represented in Figure 3, with its respective disconnecting switches, its topological and operating characteristics are stored as FACTS in structures of the kind:

    connection("1","K","B","on")
    switching_switch("1","K","4","on")
    selector_switch("1","K","4-43R","on")
    protection("1","K","67","on")
    measure("1","K",138,250,60,0.9)
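The bypass plan above can be encoded as ordered (sequence, kind, step) tuples and executed step by step, which is one simple way to sketch the "lay out and validate the plan" steps (a hedged illustration, not the ESRASE implementation):

```python
# The bypass plan for bay 1K from the text, as data.
BYPASS_1K = [
    (1, "command", "check the voltage in bus BK and in circuit 1K"),
    (2, "command", "check closed switches 1K3 and 1K5"),
    (3, "command", "check closed breaker 1K4"),
    (4, "action",  "block the automatic reclosing of breaker 1K4"),
    (5, "action",  "close the bypass switch 1K6"),
    (6, "action",  "open the breaker 1K4"),
    (7, "action",  "open the switches 1K3 and 1K5"),
]

def run_plan(plan, execute):
    """Run steps in sequence; return 0 on success or the sequence number
    of the first step rejected by the (user-supplied) execute callback."""
    for seq, kind, step in sorted(plan):
        if not execute(kind, step):
            return seq
    return 0
```

A simulation pass (step h above) would supply an `execute` callback that only checks feasibility against the current topology, without actually issuing commands.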

    limit_measure("1","K",128,144,-500,500,59,61,-0.8,0.8)
    equipment("1","K","linesource")
    line_source("SE1","SE2","initiator")

where:
    1 = number of the circuit
    K = voltage level 138 kV
    B = bus
    4 = circuit-breaker
    4-43R = reclosing selector switch
    67 = overcurrent protection
    138 = voltage measured
    250 = current measured
    60 = frequency measured
    0.9 = power factor measured
    128,144 = voltage limits
    -500,500 = current limits
    59,61 = frequency limits
    -0.8,0.8 = power factor limits
    linesource = equipment kind
    SE1,SE2 = TL terminals
    initiator = terminal energization characteristic

In their turn, the RULES, which describe the functional and operating characteristics of a switching and in which the operating philosophy and the heuristic knowledge are stored, have been structured through Production Rules in standardized structures of the kind:

    interlocking_switching_switch(CIRCUIT,SWITCH,ACTION,SWITCHES_OPENED,SWITCHES_CLOSED)
    interlocking_measure(CIRCUIT,SWITCH,VOLTAGE,CURRENT)
    philosophy_switching(CIRCUIT,SWITCHING,ORIENTATION,CRITERIA)
    criteria_switching(SWITCH,VOLTAGE,ACTION,CONSTRAINT)

forming the Knowledge Base. From the FACTS and by using the Knowledge Base, the actions and the commands which will make up the switching plan are inferred, in the following standard way:

    switching_plan(SEQUENCE,CLASS,CIRCUIT,SWITCH,SWITCHING)

where SWITCHING can be either an ACTION (open/close) or a COMMAND (check the voltage presence). Figure 4 presents a partial listing of the predicates of the rules which make up the Knowledge Base.
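The measured-value FACTS and their limits can be pictured as follows; this is a Python sketch of the kind of range check an interlocking_measure rule would perform, not the paper's Prolog:

```python
# FACTS from the text held as Python data: measured (V, I, f, power factor)
# and the corresponding limit pairs.
FACTS = {
    ("measure", "1", "K"): (138, 250, 60, 0.9),
    ("limit_measure", "1", "K"): (128, 144, -500, 500, 59, 61, -0.8, 0.8),
}

def within(value, lo, hi):
    return lo <= value <= hi

def voltage_ok(circuit, level):
    v = FACTS[("measure", circuit, level)][0]
    vmin, vmax = FACTS[("limit_measure", circuit, level)][:2]
    return within(v, vmin, vmax)

def frequency_ok(circuit, level):
    f = FACTS[("measure", circuit, level)][2]
    fmin, fmax = FACTS[("limit_measure", circuit, level)][4:6]
    return within(f, fmin, fmax)
```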

6. STUDY CASE

Among the most important actions in a switching plan, the switching of disconnecting switches and of circuit-breakers are to be mentioned. While performing a switching, one requirement of paramount importance is the interlocking, which consists in restricting the switching freedom of a given piece of equipment with respect to the states of the other switching equipment existing in the circuit and of the control devices associated with them.

Considering the circuit-breaker represented in Figure 3, with its respective disconnecting switches, the following statements are valid as far as the interlocking of the switches' opening/closing commands is concerned:
- switches 1K3 and 1K5 can be switched only if circuit-breaker 1K4 is open;
- switch 1K5T can be switched only if switches 1K5 and 1K6 are open;
- switch 1K6 can be switched only if circuit-breaker 1K4 and switches 1K3 and 1K5 are closed.

These rules hold for all circuit-breakers with the same topological characteristics as the present ones, independently of the voltage or circuit level. It is therefore possible to implement the interlocking for this kind of bay by using a set of Production Rules where the general structure of the predicate is:

interlocking_switch_switching(CIRCUIT,CLASS,SWITCH,SWITCHES_OPEN,SWITCHES_CLOSED)

command_switching_device(N,CLASS,DEVICE,STATUS) :-
    check_interlocking(N,CLASS,DEVICE),
    check_interlocking_measure(N,CLASS,DEVICE).

check_interlocking(N,CLASS,DEVICE) :-
    interlocking_switching_device(DEVICE,CLASS,DEVICE_ON,DEVICE_OFF),
    check_on(N,CLASS,DEVICE_ON),
    check_off(N,CLASS,DEVICE_OFF).

check_interlocking_measure(N,CLASS,DEVICE) :-
    interlocking_measure(DEVICE,CLASS,V,A),
    check_voltage(N,CLASS,V),
    check_current(N,CLASS,A).

Figure 4: Partial listing of the Knowledge Base.
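The three interlocking statements above can also be sketched outside Prolog. This Python fragment is illustrative only (the rule table, names and bay states are invented stand-ins, not the ESRASE implementation); it encodes each rule as a pair of precondition sets:

```python
# Sketch of the interlocking logic described in the text.
# For each switch: which devices must be open, and which must be closed,
# before an open/close command on that switch is permitted.
INTERLOCKS = {
    "3":  {"open": {"4"}, "closed": set()},            # 1K3 needs breaker 1K4 open
    "5":  {"open": {"4"}, "closed": set()},            # 1K5 needs breaker 1K4 open
    "5T": {"open": {"5", "6"}, "closed": set()},       # 1K5T needs 1K5 and 1K6 open
    "6":  {"open": set(), "closed": {"4", "3", "5"}},  # 1K6 needs 1K4, 1K3, 1K5 closed
}

def command_permitted(switch, states):
    """states maps device id -> 'open' / 'closed' for the bay concerned."""
    rule = INTERLOCKS[switch]
    return (all(states[d] == "open" for d in rule["open"]) and
            all(states[d] == "closed" for d in rule["closed"]))

bay = {"3": "closed", "4": "open", "5": "closed", "6": "open"}
print(command_permitted("3", bay))  # 1K4 is open, so 1K3 may be switched → True
print(command_permitted("6", bay))  # 1K4 is not closed → False
```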

"ofF..[ ])."5T"."5T"."off'...[. the ES developed checks the signal permissivity with respect to the rules interlocking set to validate it or block it.. mterlockmg_switching_device("3 ".3"." ". Figure 5 : Partial Hst of the interlocking module rules. mterlockmg_switching_device(.."A").4"."on"..."6". interlockmg_measure("4"... In case of validating it...["3". interlocking_measure(.4". mterlockmg_switching_device(."6"])."3 ".V"."ofF."4"...["4". corrimand_switching_device(" 1 ".["5" the messages are issued as presented in Figure 6."5"..5T".["3 "]). mterlockmg_switching_device(.6".].[ ]."4"."A")..5T". interlocking_measure(.]."open") "INTERLOCKING OK" "Open command 1K3 sended" Figure 6 : Example of the program screen display."K". a set of orientative messages presented by Figure 7 will be issued.3". 90 Figure 5 presents a partial list of these rules written in Prolog.5T".[." ".].]. 10 . mterlockmg_svvitching_device(.["5"]). mterlockmg_switching_device(. To send a command signal from a switching device.3"..5"."off. In case the program identifies the interlocking constraint.["4"]).." ").["5T"])..["5T"].

command_switching_device("1","K","5T","close")
"INTERLOCKING FAIL - Reason:
 - 1K3 close
 - 1K4 close
 - 1K5 close
 - VOLTAGE (1K) = 138 kV
 - CURRENT (1K) = 50 A"
"Close command 1K5T locked"

Figure 7: Example of the program screen display.

7. CONCLUSIONS

The application of ES to the operation of SSs adds considerable functional capacity to the operating system, including the verification of selector switch settings, interlockings, signallings, and the checking of the operating conditions (requirements and constraints). More elaborate switchings, such as isolating a bus or energizing a transformer, join all the operating procedures and command actions necessary to their performance. With the definition of a new set of operating techniques it becomes possible to automate the SS operation, taking advantage of all the existing background knowledge.

To establish a new operating paradigm it is necessary to re-evaluate the purpose, orienting towards the processes and not towards the tasks. The concepts of Information Technology and Process Re-Engineering should be applied to the products and services. To achieve the proposed goals it is necessary to identify accurately the service to be rendered and to elaborate the architectures necessary for an Information System able to support it. Only in this way will it be possible to reach goals such as sustained evolution, personnel qualification, establishment of relational standards, greater swiftness, efficiency, a constant search for total quality, and the reduction of operating costs.

8. REFERENCES

[1] G. Lambert-Torres et al., "Computer Program Package for Power System Protection and Control", 34th CIGRÉ Biennial Session, Paper 39-304, Paris, France, 1992.

[2] M. Ribeiro and G. Lambert-Torres, "ESRASE - Expert System for Automatic Restoration of Substations", III SEPOPE, Belo Horizonte, Brazil, 1992.

[3] G. Lambert-Torres et al., "PROCONTROL - A Hybrid Expert System for Power System Protection and Control", IV International Symposium on Expert System Application to Power Systems (ESAP), Melbourne, Australia, 1993.

[4] C. Valiquette, G. Lambert-Torres and D. Mukhedkar, "A Tool for Teaching Power System Operation Emergency Control Strategies", IEEE Trans. on Power Systems, 1991.

[5] B. Liu and CIGRÉ TF 38-06-03, "Practical Use of Expert Systems in Planning and Operation of Power Systems", Electra, 1993.

[6] G. Lambert-Torres, C.I.A. Costa, G. Ribeiro and X. Do, "A Fuzzy Knowledge-Based System for Bus Load Forecasting", in Fuzzy Logic Technology and Applications, IEEE Press, 1994.

The critical role of materials data storage and evaluation systems within intelligent monitoring and diagnostics of power plants

H. Over*, A. S. Jovanovic**, H. Kröckel*
* Institute of Advanced Materials, JRC Petten of the European Commission, The Netherlands
** MPA Stuttgart, Germany

1. Introduction

Advanced life management of power plants operating high-temperature pressurised systems and components is based on an interactive strategy of:
a) design and re-design complying with the formal requirements of regulatory codes and incorporating component life assessment (CLA; see the Glossary for this and all following acronyms) by analysis of the component life exhausted by creep and fatigue;
b) non-destructive examination of systems, components and locations;
c) multi-criteria decision making, based both on the regulatory guidelines and on experience (i.e. heuristic knowledge), to manage the decision to replace, repair, reduce load or re-inspect the component(s) concerned.

The solution of this engineering problem requires the performance of information processes of different natures, which range from exact algorithmic calculations and data processes to less formatted heuristic knowledge, fuzzy logic and engineering decision processes. Each of these processes can nowadays be represented or supported by modern information technology (IT) tools. The development of an IT architecture for plant life management in which these different tools are combined and interfaced to interact functionally is a tremendous challenge. One of the important problems to solve in this architecture is the interfacing of the different materials information sources, which are needed in the form of databases and associated algorithm libraries. In the background of the whole plant life management process and the corresponding IT tools are the data sources, i.e. primarily the databases regarding material data, inspection data and component/system data.

This paper tackles the issue of material databases, taking the High Temperature Materials Databank (HTM-DB), developed at JRC Petten and widely used in several large European projects, as an example of "European databases".

2. Intelligent computer systems in the area of power plant maintenance and diagnostics

Four major groups of intelligent computer systems in the area of power plant maintenance and diagnostics can be identified. In the first group are the tools that can be broadly classified as knowledge-based (expert) systems (KBS) for single problems (see Figure 1), e.g. the Boiler Maintenance Workstation of EPRI (USA) for certain boiler components, or SOAP (State-of-the-Art Power Plant System; see Dooley). These systems can be used (usually by an expert) to set a realistic inspection or maintenance interval.

Systems in this group include:
- Boiler maintenance (e.g. EPRI-BMW)
- Piping analysis (e.g. MPA-ESR)
- Piping monitoring systems (e.g. EPRI-GEMS)
- Heat rate analysis (e.g. EPRI-HEATEXP)
- Coal quality impact (e.g. EPRI-CQIM)
- Vibration advisor (e.g. EPRI)
- Generator monitoring
- Material selection (KUL, BE3080) and corrosion (KUL)
- The "whole plant" systems (e.g. EPRI-SOAP)
- Other systems (IVO, LMS, etc.)

Figure 1: Some KBS's used for single problems (the "first group" of systems)

Much of the recent research effort in Europe, the USA and Japan has been devoted to the development of such KBS's applied in the fields of power plant and structural engineering, for instance the ESR system (Jovanovic, 1994) or the system developed in the European SP249 project (Jovanovic, Maile and co-workers, 1993). Such KBS systems are currently being developed at MPA Stuttgart under the sponsorship of the Association of German Electric Utilities (VGB). Some of these systems (Jovanovic, 1990) can usually provide an engineering assessment of the corresponding remaining life; in each particular case (i.e. usually per one component and/or location) they give only an implicit recommendation, based exclusively on engineering factors, on when to inspect a component. These tools define the inspection intervals implicitly, but the recommendation on when and how to inspect is "mechanistic" and based on extremely simplified assumptions.

The second and the third group are databases and database-like systems. These are developed especially for material data, non-destructive testing (NDT) results and plant component/system data. In general, these systems give only the possibility to store data from previous inspections and/or component/system data; the user is then supposed to decide when is the right time to re-inspect. The nature, i.e. the confidentiality, of NDT results has led to the fact that many of the NDT result databases have been developed by utilities. The databases for component/system data are usually developed and delivered by component/system manufacturers.

The fourth group are systems for component/system state monitoring. These systems are developed both by manufacturers and by utilities. In general, these systems give only the possibility to monitor one or more components.

The desiderata for the further development of these systems towards the "ideal system" (see Figure 2) refer to the following objectives:
a) refinement and additional features:
• provide guidance on, e.g., optimal intervals between outages, timing of outages, and the range, scope and methods of required inspections;
• consider variable operational conditions;
• include heuristic knowledge;

• provide links to the database and monitoring systems existing in power plants;
• include non-engineering factors like costs, environmental impact and/or safety implications;
• implement new (state-of-the-art) methodologies for multi-criteria decision making;
• integrate the most advanced engineering methods for damage evolution assessment/prediction with state-of-the-art intelligent software technologies such as hypermedia, neural networks and intelligent flowcharting;
b) integration of material information:
• definition of material databases as internal systems;
• interfacing of external material databases;
• a co-ordinated approach to internal and external materials data analysis.

Figure 2: Ideal system (KBS) linked with KBS's for single problems, monitoring systems, inspection results databases, internal and external material databases (e.g. the SP249 internal database and the HTM-DB) and other KBS's

3. CLA and the need for material data

3.1 CLA technologies

Component life assessment (CLA), historically a part of the design process, now accompanies the management of the component integrity throughout its life, through successively determined design and redesign (inverse design) steps following inspections. CLA is the aspect of the overall architecture which has the main need for materials information. Exploitation of KBS's within the CLA technology has been successfully demonstrated in programs such as those presented at the ACT Conference (1992) and in the European SP249 and ESR projects.

In the SP249 project the KBS technology appears at two levels:
• as a part of the modern CLA technology;
• as the principal means or "vehicle" of the CLA technology transfer.

The SP249 CLA Generic Guidelines are implemented in a knowledge-based system (KBS), which serves as the main tool for transfer and application of the target CLA technology. The system appears as a conglomerate of single software modules controlled by an overview module.

Using the system, the user is supported by an "intelligent environment" helping him to: 1) retrieve data (about material, component, etc.); 2) evaluate/calculate data; 3) retrieve the necessary standards; 4) obtain advice, e.g. a decision aid for making the "3R decision" (replace, repair, run), which is based partly on the regulatory guidelines and partly on the experience and heuristic knowledge incorporated into the CLA guidelines, together with a recommendation regarding the annual inspection (revision); 5) find an optimised solution for his problem (see Figure 3).

Materials data retrieval (1) and evaluation (2) can be done within the sophisticated procedures of the HTM-DB, as an external KBS functionality, and/or on the KBS side within its internal functionality. The hypermedia-based parts/modules "cover" the background information built into the system:
• the CLA guidelines;
• standards and other documents;
• frequently used codes;
• case studies.

The system covers:
• decision making according to the SP249 CLA guidelines;
• damage analysis.

The architecture allows new modules to be introduced, or the existing ones to be reorganised, at any time. The whole system is designed as an engineering "tool box" built on top of commercially available software (Jovanovic, Friemann, 1994). Object-oriented programming (OOP) appears both at the level of the overall SP249 KBS architecture (each part of the system is an object exchanging messages with the other ones) and at the level of its single parts.

Figure 3: "Intelligent" environment for the CLA analysis in SP249, with the HTM-DB (e.g. its Larson-Miller evaluation) as the external materials database

3.2 Material data

Possibilities for the practical integration of material databases with KBS's are nowadays realistic and fully feasible. The basic issues to be solved are:
a) response to intelligent dynamic queries created either by the KBS or by the user;
b) creation of user-defined database reports which can be included directly both in KBS inputs and in electronic document-based reports.
Most of these issues are currently tackled by JRC Petten and MPA Stuttgart in their efforts to integrate HTM-DB data into projects like C-FAT, SP 249 and others.

Material data for plant life management (see Figure 4) can be categorized into the following material pools:

Pool 1: Materials data from national or international standards. These time-dependent materials data are offered by the HTM-DB as Excel charts with hypertext information, in addition to the test data of pool 2. These data are used for the component design of power plants, which is correlated to the existing codes (e.g. ASME, KTA, TRD); they rely on materials data referring to the respective national (e.g. DIN, BS, AFNOR, ASME) or international (e.g. EN, ISO) standards, and are used in elastic analysis in the design phase of the power plant. The codes use high safety factors to cope with experimental uncertainties. Most of them are strictly based on elastic rules, which guarantee that the allowed strain limits are not exceeded within the operation time of a component (Nickel, Schubert, Penkalla, Over, 1983). With respect to ASME Code Case N47, for instance, the time-dependent stress intensity limit St is derived from the minimum of: (a) the 1% strain limit or the stress to rupture; (b) 1/1.5 of the stress to rupture; (c) 1/1.25 of the stress for the onset of tertiary creep. As recommended in ASME Code Case N47 and DIN 50117, the minimum data are determined from the confidence interval by subtracting 1.65 times the standard deviation from the mean value. This procedure guarantees a probability of 95% that the result of a single experiment is better than the minimum data (HTR Regelwerk, 1984).
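The minimum-data construction quoted above (mean curve minus 1.65 standard deviations, i.e. roughly 95% one-sided coverage for normally scattered results) can be sketched numerically. The scatter values below are invented for illustration only:

```python
import statistics

# Invented log10(rupture-stress) scatter at one temperature/time condition.
log_stress = [2.05, 2.11, 2.08, 2.02, 2.14, 2.07, 2.10, 2.04]

mean = statistics.mean(log_stress)
sd = statistics.stdev(log_stress)  # sample standard deviation

# Practice quoted from ASME Code Case N47 / DIN 50117:
# minimum property = mean - 1.65 * standard deviation, so that ~95% of
# single experiments fall above the minimum curve.
log_stress_min = mean - 1.65 * sd

print(f"mean = {mean:.4f}, minimum = {log_stress_min:.4f}")
```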

Alternative code options, which allow design by analysis, require inelastic data and relationships which must be provided by the power plant supplier himself and/or obtained from other data sources where this information is available in reliable and validated quality. The operation time is often restricted by the formal requirements of elastic design analysis (design by rule) of the existing standards and code cases. These requirements are necessarily conservative with respect to the available material potential, in particular if one allows for the rapid progress in materials fabrication and production technology, better heat treatments and better quality management, which has the effect that improved materials are on the market before the code limits can be adapted.

Figure 4: Example of a stress intensity limit determination (Larson-Miller 550°C isothermal curve for a ferritic material)

Figure 4 shows how conservative the elastic design procedure against creep deformation based on the St value is. The 550°C stress-to-rupture isothermal of a ferritic material is calculated from a Larson-Miller plot using an HTM-DB data set with rupture times between 450°C and 650°C. Within this example, an operation time of 100,000 hrs for steam pipes stressed at about 80 MPa based on mean values would be reduced to about 80,000 hrs by using minimum values (dashed line), and to about 2,000 hrs by using St values (bold line) calculated from the stress-to-rupture criterion (b). For cost aspects and thermal efficiency, both the operation time and the temperature should be as high as possible; but at higher temperature the material is exhausted sooner, and one must find a compromise between life-time and temperature.

Pool 2: Materials data from tests, usually for materials and/or temperature ranges which are not covered by the codes. Such experimental test results on new and service-exposed materials, mostly coming from European projects, are offered by the HTM-DB as released data. They can be used together with utility test results for inelastic analysis in design & redesign.
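The St rule quoted above from ASME Code Case N47 reduces to taking the minimum of three derated properties. A small sketch with invented property values (MPa), for illustration only:

```python
# Invented property values for one temperature and design life (MPa).
stress_1pct_strain = 110.0  # stress giving 1% strain (criterion a)
stress_rupture = 150.0      # stress to rupture (criterion b)
stress_tertiary = 120.0     # stress at onset of tertiary creep (criterion c)

# ASME Code Case N47 rule as described in the text:
St = min(stress_1pct_strain,      # (a) 1% strain limit
         stress_rupture / 1.5,    # (b) 1/1.5 of the stress to rupture
         stress_tertiary / 1.25)  # (c) 1/1.25 of onset of tertiary creep
print(St)  # → 96.0
```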

Pool 3: Materials data from surveillance, in-pile & out-of-pile and in-plant tests (primarily in nuclear power plants). Such experimental test results can be administrated within the HTM-DB and used for damage assessment, with reference to the information coming from quality control tests or even other sources. For safety reasons the authority can demand additional in-plant, in-pile or surveillance tests for critical components, to assess the damage and the irradiation embrittlement or to guarantee the component integrity after emergency conditions. Such requirements can arise in the following cases:
I. in-plant tests within conventional power plants, to secure against damage of components
• with complicated weldments in high-temperature and/or stress-exposed areas;
• which are earthquake-exposed.
II. in-pile & out-of-pile tests within nuclear power plants, to secure against the embrittlement of components
• which are irradiated.
III. surveillance tests within nuclear power plants, to guarantee the integrity of components
• the catastrophic failure of which could endanger the population in the surrounding areas;
• which must be secured against extraordinary emergency conditions.

Pool 4: Materials data from quality control tests (primarily in nuclear power plants). The code cases demand quality control materials tests, such as tensile and Charpy-V impact tests at different positions of the components, to guarantee that the component conforms to its specification. Normally the measured test results are entered in the component certification forms, which are stored in thick files. Such test results can instead be administrated within the HTM-DB; doing so, the company has fast access to the data and can easily use them, together with material information coming from other sources and/or their evaluated parameters, for life-time analysis. During the life-time of a power plant they can be used as reference data for the damage assessment of components. The power plant suppliers can administrate these materials data of all their plants within the HTM-DB.

The materials data examined in quality control tests (pool 4) must be compared with those coming from special in-plant tests, in-pile & out-of-pile tests and surveillance tests (pool 3), and with Non-Destructive Testing (NDT) results from plant inspections, to assess the material damage, the irradiation embrittlement or the component integrity after in-service inspections or emergency conditions. Figure 5 shows the correlation of the four material pools with the KBS-based life assessment and management procedures.

Figure 5: Possible data pools of the HTM-DB for Plant Life Management, correlating pools 1-4 with Component Life Assessment (design & redesign, elastic and inelastic analysis) and Component Integrity Assessment (inspections, NDT results, reference damage & failure data, emergency conditions)

4. High Temperature Materials Databank

The HTM-DB is a computer-based system for the storage and evaluation of the mechanical and physical properties of engineering alloys, such as the tensile, creep, fatigue, fracture mechanics, Young's modulus and thermal expansion properties mostly used in high-temperature technology. Although this is its main scope, the HTM-DB is not limited to high-temperature materials applications (Over, Fattori, Guttmann, Kröckel, 1993). The database structure covers all engineering alloys and their testing at any temperature, for both time-independent and time-dependent materials behaviour. Its emphasis is on data from standardised tests and on evaluation methods which are well established and widely accepted; the database and the evaluation programs are oriented to international material standards and recommendations. The HTM-DB computerizes the scientific process of engineering data generation, from material testing through the functions of data organization and quality control to the presentation of material parameters which find use in engineering algorithms.

Besides the experimental materials data, the HTM-DB offers materials data from standards as additional numerical and graphical information (see Figure 8: Data catalogue). There is a big difference between the experimental materials data and the data from standards. The materials data from standards are average data and contain the mandatory materials information only. The records of the experimental materials data are measured data and contain, as a minimum, all mandatory information on the data source, the specimen, the material and the test control; in most cases much more information, such as grain size & hardness, is provided. Table 1 shows the HTM-DB materials data content.

Table 1: HTM-DB materials data content
  Standard materials (DIN, BS, ISO, ...): approx. 2000 records
  Experimental materials (COST, BRITE, ...): approx. 6000 records

The data management and evaluation functions can be applied to mechanical and physical property test results reported by test laboratories in a defined format and quality. Such test results can be entered and stored in the "databank" component of the system, where they can be accessed and handled with typical databank routines, and from where they can also be taken to data evaluation by the other component, the "evaluation program library". This library is linked with the User-Interface and contains specific evaluation programs for the data on the mechanical properties stored by the system; the programs can be selected from windows in the User-Interface. Most of the specific property evaluation programs allow the fitting of mathematical models, constitutive equations, parametric expressions and regression functions to the test result data. In general the results are best-fit parameters and statistical information about the data, such as correlation coefficients and standard deviations. The evaluation programs are programmed in VBA and C and implemented under Microsoft Windows on the PC side, with access to Microsoft Excel for Windows.

The Norton Creep Law is shown as an example of such an HTM-DB evaluation program. It is valid for ductile material behaviour and describes the relationship which exists between the characteristic creep rates or the rupture time and the applied stress. The program automatically sorts the data according to the isothermal criterion and has several analysis options:
• minimum creep rate vs. applied stress (it can be calculated from the creep curve data supplied by the laboratories using the 'seven point fitting method' as defined in ASTM E 647 or, alternatively, from the delivered minimum creep rates);
• steady-state creep rate vs. applied stress (it can only be calculated from delivered values);
• average creep rate vs. applied stress (it is the rupture strain, er, divided by the rupture time, tr, and is higher than either the minimum or the steady-state creep rate);
• creep rupture time vs. applied stress.

Each regression line ends at the minimum and maximum stress levels encountered at any particular temperature. The user also has the option to remove minimum creep rate points which are not consistent with the general trend, whereupon the creep law will automatically be re-evaluated. By selecting a single point from the chart, the creep rate data can be compared with those of neighbouring points in the data set; through comparison of adjacent curves the user is able to establish the reliability of each individual calculated creep rate value. An example of calculated minimum creep rate data is given in Figure 6.

Figure 6: NORTON CREEP LAW analysis (calculated minimum creep rate vs. creep stress, with fitted stress exponents n for isothermals between 450°C and 625°C)

Besides the analysis options, the user has access to all intrinsic Excel functions for data storage and data processing. The HTM-DB is available in PC or client/server versions, with and without connection to the server computer at Petten. The Petten Server stores all data entered by the different customers and supports on-line data transfer to the customers (see Figure 7).
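The Norton-law evaluation described above amounts to a least-squares fit of log(creep rate) against log(stress), whose slope is the stress exponent n. Since no HTM-DB code is shown in the text, the following is only a minimal sketch of the method, using synthetic data generated for n = 5:

```python
import math

# Synthetic minimum-creep-rate data (1/h) for one isothermal, generated
# from a Norton law  rate = A * stress**n  with A = 1e-16 and n = 5.
stresses = [100.0, 150.0, 200.0, 300.0, 400.0]  # MPa
rates = [1e-16 * s**5 for s in stresses]

# Ordinary least squares on log10(rate) = log10(A) + n * log10(stress).
xs = [math.log10(s) for s in stresses]
ys = [math.log10(r) for r in rates]
x_mean = sum(xs) / len(xs)
y_mean = sum(ys) / len(ys)
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) /
         sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean

print(f"n = {slope:.2f}, log10(A) = {intercept:.2f}")  # → n = 5.00, log10(A) = -16.00
```

Removing an outlying point and re-running the fit reproduces the "automatic re-evaluation" behaviour described for the HTM-DB program.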

Figure 7: Data flow within the replication client/server application (validation by the data supplier, quality control by JRC Petten)

The User-Interface and the evaluation programs are installed on the PC of the client. The PC-based User-Interface runs under Microsoft Windows and may therefore interact with other Windows software such as spreadsheets or the system help application. The data entry function and the data transfer options to and from the Petten Server are also available as parts of the User-Interface (see Figure 8).

Figure 8: HTM-DB User-Interface main window

Output options are available as "reports" (tabular presentations) and as "tables & charts" using the spreadsheet options.

To guarantee data confidentiality, the access rights of the user to his local database and to the database of the server are controlled by his passwords and user identifications. Within the replication client/server application (see Figure 7) the customer also has access to all released data, which are regularly updated.

The User-Interface requires minimal user training. It uses advanced windowing techniques to assist the user in formulating his queries; the buttons used are related to the Microsoft Windows standards (Over, De Luca, 1993). Active windows are shown in the foreground with a blue frame, whereas inactive ones are shown in the background with dark-gray frames, depending on the Microsoft Windows setup. The complicated SQL string with all the links between the HTM-DB entities is gradually and automatically built up: typing mistakes and non-relevant queries are avoided, and syntax errors are eliminated by making the syntax fully transparent to the user. These techniques enable the user to easily retrieve material data coming from the different pools and to evaluate the data for relevance and quality before transferring them into the fixed data lists and calculations of the KBS, as shown in Figure 3 for the SP 249 programme.

(A database management system requires a query language to enable users to access data. Structured Query Language (SQL, pronounced 'sequel') is the language used by most relational database systems; it was developed in a prototype relational database management system, System R, by IBM in the mid-1970's. See Information Technology - Database Languages - SQL, ISO/IEC 9075:1992 (E).)

5. HTM-DB functionalities matching the requirements of the KBS

There are several concepts for the use of a materials databank for computer-aided in-service component life assessment. The HTM-DB represents many years of experience and expert knowledge in database management, programming, material science and soft- & hardware applications. A standardised database structure, a user-interface which offers intelligent user guidance, and an extended evaluation program library are incorporated in the HTM-DB. The HTM-DB has, for instance, participated in the BRITE P1209 project to predict and extrapolate the component service behaviour under stress at high temperature. The proposed use of the databank within such a materials information system (Kröckel, Westbrook, 1987) is shown in Figure 9: the databank and its evaluation programs & models interact in a dynamic way with the FEM processor to deliver the data and the evaluated constitutive parameters for the stress-strain-life analysis. In a similar way the HTM-DB will operate as a dynamic, "external" databank within the KBS system.
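The automatic build-up of the SQL string from window selections can be illustrated with a toy relational table. The schema, data and helper below are invented stand-ins (the real HTM-DB schema is not shown in the text):

```python
import sqlite3

# Toy stand-in for an HTM-DB-like table of creep test records.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE creep_test (
    material TEXT, temperature_c REAL, stress_mpa REAL, rupture_time_h REAL)""")
con.executemany("INSERT INTO creep_test VALUES (?, ?, ?, ?)", [
    ("X20CrMoV12-1", 550.0, 120.0, 31000.0),
    ("X20CrMoV12-1", 600.0, 100.0, 8000.0),
    ("P91", 550.0, 140.0, 42000.0),
])

def build_query(selections):
    """Build a parameterised SQL string from window-style field selections,
    so the user never types SQL and syntax errors cannot occur."""
    where = " AND ".join(f"{field} = ?" for field in selections)
    sql = ("SELECT material, stress_mpa, rupture_time_h "
           f"FROM creep_test WHERE {where}")
    return sql, list(selections.values())

sql, params = build_query({"material": "X20CrMoV12-1", "temperature_c": 550.0})
rows = con.execute(sql, params).fetchall()
print(rows)  # → [('X20CrMoV12-1', 120.0, 31000.0)]
```

Using placeholders rather than interpolated values mirrors the "fully transparent syntax" idea: the user only picks fields and values, and the query text itself can never be malformed.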

[Figure 9: Conceptual scheme for linking a material properties databank (HTM-DB, with user interface, query processor, DBMS, evaluation programs and constitutive models) with stress-strain-life analysis by finite-element computation]

An expert using a KBS needs fast solutions for his problems, especially if damage or failure is recognised on critical components. If, for instance, crack propagation is detected at a weldment of a critical component exposed to high stress and temperature, a fast analysis is requested to decide whether the weldment can be repaired or not. Then an inelastic analysis is often necessary for re-design of the component and for definition of new inspection intervals to continue the operation; the inelastic analysis can improve, e.g., the assessment of creep-crack growth and/or relaxation effects. In a situation where the speed of the decision process is vital, other than computerised methods are becoming inadequate.

Therefore a databank which is linked to a KBS needs to contain the corresponding data and to allow data access at the same speed as the system itself. Two years ago this high-speed response could not yet be provided by the HTM-DB, neither from a PC/workstation client/server system nor from a standalone PC as used for on-site plant conditions: due to the hard- and software conditions the data access was too slow. In the meantime these conditions have changed. Recent hardware technology allows the HTM-DB to match the response speed of the KBS: instead of about 4 minutes (2 years ago), a Pentium PC nowadays needs 10 seconds to retrieve the same HTM-DB data content from the local PC database (DBMS: SQL*Base). Similar speed of access is given for the evaluation of the data: a Larson-Miller extrapolation within the HTM-DB (Over, 1993), for which the data must be transferred from the data retrieval part of the user interface to Microsoft Excel, is nowadays a task achievable in the order of seconds. The dynamic link between the KBS and the HTM-DB is already made, as shown in Figure 3: data which are retrieved and evaluated in the HTM-DB are transferred to the KBS-"internal" SP 249 databank and entered into the pre-arranged data sheets.

6. Conclusion

Using the HTM-DB in combination with KBS-based plant life management for the power plant industry, from the planning phase up to operation, it can be fed with data from acceptance tests and additional mechanical testing of the respective power plant components, in conventional power plants from similar components of other power plants, and in nuclear power plants also from safety experiments. It will therefore provide the main material data input to the KBS, both for component life assessment and for material data retrieval and evaluation. Any hard- and software costs are by far outweighed by the savings in operation time and plant availability.
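The Larson-Miller extrapolation mentioned above can be illustrated with a minimal sketch. The constant C = 20 and the input values below are common illustrative assumptions, not HTM-DB data or the actual evaluation program.

```python
import math

# Minimal sketch of a Larson-Miller extrapolation.
# C = 20 is a commonly assumed constant for low alloy steels; the sample
# temperatures and times are illustrative, not HTM-DB values.
C = 20.0

def lm_parameter(temp_k, time_h):
    """Larson-Miller parameter P = T * (C + log10(t)), T in K, t in hours."""
    return temp_k * (C + math.log10(time_h))

def rupture_time(temp_k, p):
    """Invert P to estimate the time to rupture (hours) at another temperature."""
    return 10.0 ** (p / temp_k - C)

# A rupture test at 550 degC (823 K) lasting 10 000 h ...
p = lm_parameter(823.0, 1.0e4)
# ... extrapolated to 600 degC (873 K): at the same stress a shorter life follows.
t_873 = rupture_time(873.0, p)
```

The same parameter value is assumed to hold for a given stress, which is what allows test data at one temperature to be extrapolated to service conditions at another.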

The development of an architecture for this integrated system is well advanced, and the functional demonstration is the next goal.

7. Literature

ACT (1992): Advanced Computer Technology Conference 1992, held in Phoenix, Arizona, USA, December 1992; published by EPRI, Palo Alto.
Dooley, R. B., McNaughton, W. (1992): "Life extension and component condition assessment in the United States", Proceedings of the VGB Conference on Assessment of Residual Service Life, Mannheim, Germany, July 6-7, 1992.
HTR Regelwerk (1984): "Erarbeitung von Grundlagen zu einem Regelwerk über die Auslegung von HTR-Komponenten für Anwendungstemperaturen oberhalb 800 °C", Kernforschungsanlage Jülich, Jül-Spez-248, ISSN 0343-7639, März 1984.
Jovanovic, A., Kussmaul, K., Lucia, A. C., Bonissone, P. (Eds.) (1989): Expert Systems in Structural Safety Assessment, Lecture Notes in Engineering, Vol. 53, Springer-Verlag.
Jovanovic, A. (1992): "Practical realisation of intelligent inter-process communication in integrated expert systems in materials and structural engineering", Proc. of the Avignon '92 Conference on Expert Systems and their Applications (Vol. 2 - Specialised Conferences), Avignon, France, 1992.
Jovanovic, A., Friemann, M. (1993): "Overall structure and use of the SP249 knowledge-based system", 12th International Conference on Structural Mechanics in Reactor Technology, Post-Conference Seminar No. 13, Konstanz, Germany, August 23-25, 1993, MPA Stuttgart.
Kautz, H., Viswanathan, R.: "Knowledge-based (expert) system applications in power plant, process plant and structural engineering", Vol. 3, pp. 707-718.
Kröckel, H., Westbrook, J. (1987): "Computerised materials-information systems", Phil. Trans. R. Soc. London A 322, pp. 373-391.
Nickel, H., Schubert, F., Penkalla, H. J. (1983): "Mechanical design methods for high temperature reactor components", Nuclear Engineering and Design 76, pp. 197-206.
Over, H., Guttmann, V., Kröckel, H. (1993): "Data management with the High Temperature Materials Databank", 12th International Conference on Structural Mechanics in Reactor Technology, Post-Conference Seminar No. 13, Konstanz, Germany, August 23-25, 1993.
Over, H., Fattori, H., De Luca, D. (1994): "Intelligent user guidance for the HTM-DB", Proceedings of the 20th MPA Seminar, MPA Stuttgart, Stuttgart, Germany.

8. Glossary

AFNOR: Association Française de Normalisation
ASME: American Society of Mechanical Engineers
ASME Code Case N47-15: "Class 1 Components in Elevated Temperature Service, Division 1", The American Society of Mechanical Engineers, New York, 1979
BRITE: Basic Research in Industrial Technologies for Europe
BRITE 3070: "LMS - Development of an advanced life time monitoring system for components of piping systems in the creep range"
BRITE 5245: "Optimization of methodologies to predict crack initiation and early growth in components under complex creep-fatigue loading (C-FAT)"
BRITE 5935: "Decision making for requalification of structures"
BRITE 5936: "Reliability support system for metallic components susceptible to corrosion-related cracking"
BRITE P 1209: "Lifetime Prediction and Extrapolation Methodologies for Computer-Aided Assessment of Component Service Behaviour under Stress at High Temperature", Final Report, February 1991
BS: British Standard
CLA: Component Life Assessment
COST: Co-operation in the field of Scientific and Technological Research
DBMS: Database Management System
DIN: Deutsche Industrienorm
EN: Euronorm
ESR-International: "Expertensystem für Schädigungsanalyse und Restlebensdauerermittlung" (international version), MPA Stuttgart
ESR-VGB: "Expertensystem für Schädigungsanalyse und Restlebensdauerermittlung" (VGB version), MPA Stuttgart
HTM-DB: High Temperature Materials Databank
ISO: International Standards Organisation
IT: Information Technology
KBS: Knowledge-Based System
KTA: Regeln für Kerntechnische Anlagen
NDT: Non-Destructive Testing
OOP: Object-Oriented Programming
PC: Personal Computer
SPRINT RA230: "Methodology for development of knowledge-based systems"
SPRINT SP249: "Implementation of power plant component life assessment methodology using a knowledge-based system"
SQL: Structured Query Language
TRD: Technische Regeln für Dampfkessel


TECHNOLOGY AWARENESS DISSEMINATION IN EASTERN EUROPE WITH INTELLIGENT COMPUTER SYSTEMS FOR REMAINING POWER PLANT LIFE ASSESSMENT - EUROPEAN UNION PROJECT TINCA

MIHAEL GRUDEN

Abstract

Partners from Russia, Hungary and Slovenia are preparing an advanced intelligent computer system together with MPA Stuttgart. The scope of the European Union project is to disseminate the advanced technologies for remaining life assessment of power plant components to Eastern Europe. A knowledge and experience exchange will provide a case and data base for materials and practices used in Eastern Europe, for the future benefit of both Eastern and Western European participants.

1. Introduction

The changes in the Eastern European countries have a long-term orientation towards a market economy. The component lifetime assessment (CLA) of power plant systems is a vital activity for the engineers: the failure of high temperature pressurized components is a critical issue, and improvement of the methods and procedures used for the assessment and management of remaining life, reliability and safety of such components is therefore extremely important. All power plant engineers are now facing decisions to keep their plants in operation. The plant engineers are confronted with the responsibility of deciding what to do with a high temperature pressurized component and/or the plant itself (for example replace, reduce load, stop and re-inspect, etc.).

The previously strongly centrally planned utilities had an effect on the state of component life assessment. In the best cases, the power plants received planned material, replacement parts and work-force to carry out component repair, needed or not. The level of practice varies from country to country: where many of the power plants were built by western companies under license or in cooperation, the level of CLA reflects the normal level of the country of origin at the time of commissioning. The utilities now run on low-budget programs where neither life assessment nor any serious maintenance is performed.

To this dramatic situation the growing weight of environmental requirements seems to add spice. Many older plants in Eastern Europe have insufficient equipment to operate within prescribed local pollution limits. Power plants are forced to burn inadequate fuel, violating the pollution standards and operating procedures prescribed by the equipment manufacturers, and the minimal safety requirements towards their operating staff and local residents are neglected.

The power plants in Eastern Europe are critically in demand of such assessment, since decisions must be made to prolong the operating life of the plants and to propose environmental solutions where applicable. These two goals can be met with a high-cost/low-risk approach, as opposed to the high-risk/low-cost approach possible with the limited funds in the new economies of the East European utilities. On the power plant level of the participating utilities two benefits may be expected:

• Extension of plant service life and
• Reduction of maintenance cost.

Much of the recent research effort in Europe, the USA and Japan has been devoted to the development of such systems; all these activities in the Western expert community are today more and more often supported by complex expert or knowledge-based systems. Examples of important current developments at the (West) European level are ESR International (Expert System for Remaining life assessment), ESR VGB (for utilities that are members of the VGB, the German Technical Association of Large Power Plant Operators) and the SP-249 system. In Eastern Europe these modern tools are practically unknown and hardly used in an appropriate magnitude; the systems available are therefore not used in Eastern Europe in spite of huge needs. Commonsense compromise solutions can be founded on modern CLA and life expectation estimation methods. Detailed benefit analyses give additional support:

• Reduction of energy production cost,
• Improved power plant safety,
• Reduced environmental damage caused by out-dated technologies and maintenance concepts,
• Standard plant component life assessment practice.

MPA offered to coordinate and guide partners from some East European countries to participate and form a new East Europe oriented software expert system with the shortened name TINCA, derived from its long official name: Enhancing Technological awareness and technology transfer in the area of advanced INtelligent Computer systems for the Assessment of the remaining life. TINCA intends to promote the exchange of East European practices to the benefit of all participants.

2. Objectives and Tasks of TINCA

The main objective of the TINCA project is the dissemination of western European methods and procedures to the East European countries. The European Union project TINCA has three main objectives:

1. preparation and dissemination of information,
2. adaptation of software modules to the special needs in Eastern Europe,
3. promotion of technology transfer to Eastern Europe.

2.1 Preparation and dissemination of information

2.1.1 Leaflets

In the first step it will be necessary to collect information for plant engineers in Eastern Europe about advanced technologies for life prediction of high temperature pressurized components and about the available possibilities of support using knowledge-based software systems and their features. Information leaflets are designed and distributed by each of the East European project partners. The leaflets are to be sent as first information to parties interested in the seminars described later on. The research results obtained during the development of the leaflets will be used to highlight essential points for the future user-oriented application of knowledge-based systems in power plant practice in Eastern Europe.

2.1.2 Technical notes and special reports

As detailed information for interested parties is collected, formatted and assessed, special technical notes about the existing knowledge-based systems will be prepared. Besides a general description of the existing systems, the basic elements of the methods for prediction of remaining lifetime used in the knowledge-based systems are to be highlighted. These technical notes will contain detailed information about the functionality of the software and about the data stored in the databases and hypermedia parts.

These technical notes establish the expertise foundation for interested parties to assess the possibilities and benefits of using the existing knowledge-based systems for a particular application.

2.1.3 Establishment of information booths

MPA and the partners in Eastern Europe will give the Eastern European users access to the program systems. In order to achieve this goal the partners organize or establish contact booths in Eastern Europe. Every project partner from Eastern Europe will take care of this task in his area and will become a technology dissemination centre for the spreading of the advanced software systems in Eastern Europe.

2.2 Adaptation of software modules to the special needs in Eastern Europe

An essential emphasis is the adaptation of the existing software packages to the particular conditions in Eastern Europe. The base organization of the expert software will be adapted, but common software will enable widespread use by partners without special hardware. The concept is to develop several additive modules which can be linked into existing knowledge-based systems.

2.2.1 Database modules

Different materials used in power plants in Eastern Europe require the development of new database modules containing the relevant data of these materials, to be qualified and integrated with the existing software. The database with materials from Eastern Europe will allow comparison of eastern and western materials and their properties. Furthermore it is necessary to integrate typical case studies from power plants in Eastern Europe into the software system.

2.2.2 Hypermedia modules

The system should allow the comparison of Eastern European and advanced (West European) codes, standards, guidelines, methods and procedures that should be integrated in the system. Referring to the existing guidelines and standards in Eastern Europe, additional software modules in hypermedia format have to be developed. For example, if an interested party buys the ESR International system, it can add these modules for the possibility of a direct comparison of western and eastern standards.

2.2.3 Pilot system for Eastern Europe

A pilot expert system for Eastern Europe will be created, including additional modules with special features for explaining and demonstrating the expert system to interested parties in Eastern Europe. The demonstrator/tutor is also designed to tackle the languages of the partners in Eastern Europe (multiple-language add-on module) so that the language gap to the plant engineers in practice is closely bridged. The level of translation to the native language necessary for the instructors to reach a proper level of understanding will be managed with modern programming techniques.

2.3 Technology transfer to Eastern Europe

Methods of technology transfer can be adapted to the existing level of knowledge at each partner.

2.3.1 Preparation of seminar program

The partners of Eastern Europe will take the role of multipliers for their countries, to disseminate the information and to consult interested parties. Therefore MPA Stuttgart will prepare a seminar program together with the partners from Eastern Europe to inform plant engineers about the usability of knowledge-based systems for questions of maintenance, guidelines, data and methods. The project partners from Eastern Europe are trained on the knowledge-based systems by MPA Stuttgart in the subjects of installation, operation and application, and maintenance and update procedures. After that, the partners can perform the training for other interested parties in their country.

2.3.2 Organization of seminars

Every project partner from Eastern Europe will collect the addresses of the power plants of his country and will organize and carry out at least one seminar together with the interested plants. Each East European partner (KORONA, ERÖKAR, MISKOLC, LENERGOREMONT) has sent an engineer, responsible for preparation of local seminars and other technology transfer measures, to the project coordinator (MPA) for at least 8 to 12 months. These steps would take place between month 3 and month 18 of the project.

3. Activity plan

1st year - Preparation and dissemination of information:
1.1 Leaflet: select and prepare information; set up leaflet.
1.2 Technical notes and special reports: select and prepare information; perform notes and reports.
1.3 Establish contact booths: questions of software licenses, maintenance and updates; consulting of interested parties; explain software.

2nd year - Adaptation of software modules to the special needs in Eastern Europe:
2.1 Database modules: materials database.
2.2 Hypermedia modules: standards, guidelines, methods; examples (case studies).
2.3 Pilot system for Eastern Europe: explain methods; explain software.

3rd year - Technology transfer to Eastern Europe:
3.1 Preparation of seminar program: contents; media.
3.2 Organization of seminars: invitations; carry out the seminars.

4. Roles and tasks of Partners

The work load within the development of the TINCA package will be distributed among the parties MPA, ISPRA, ISQ, KORONA, ERÖKAR, Univ. MISKOLC and LENERGOREMONT.

MPA:
1. Coordinator of the project; arranges the elaboration of the technical part of the proposal.
2. Trains the partners from Eastern Europe.
3. Design and supply of the additional software modules based on the developed systems.
4. Build-up of the pilot system.
5. Dissemination of the knowledge necessary for the local use and the local and remote maintenance of the software.

ISPRA:
1. Coordinator of software development.
2. Develops parts of the hypermedia and database modules.
3. Prepares and organizes the seminars in collaboration with the partners from East Europe.

ISQ:
1. Implementation of the additionally developed software.
2. Adaptation to the end-users' needs and requirements.

KORONA:
1. Is the East European partner coordinating the contacts with the partners of Eastern Europe.

KORONA, ERÖKAR, Univ. MISKOLC and LENERGOREMONT will:
1. Examine the end-users' needs and requirements in terms of specific materials and components, based on former experience and cooperation with the Russian partners.
2. Define specific requirements to the system in cooperation with MPA.
3. Establish contacts to the interested parties in Eastern Europe.
4. Create the leaflets for local users; each partner is responsible for his area.
5. Prepare the special technical notes and reports needed to bring the system to the end users in the partner's country.
6. Translate the system's language to the users' languages; this is a responsibility of each partner and language.
7. Complete the additional modules to be developed.
8. Organize seminars in collaboration with ISPRA.
9. Organize the technology transfer.

5. First results

The first efforts were made in elaborating the complete task and schedule for the first and second year activities. As the activities progress, the detailed schedule will be fitted to include all the recent changes and developments. In the field of software development, communication, database exchange and case study analysis have shown more problems than expected. Reflecting these needs, the work started with compiling the databases:

1. Material databases: chemical composition, physical properties, structural data, temperature test data, etc.

The basic approach to the opening structure is shown in Figure 1. The presented structure is a starting point for the partners to engage with the task.

2. Standard materials from domestic steel producers, including cross-reference with similar western materials.
3. Design: standards, procedures, codes, guidelines used in the partner's area.
4. RLA methods: standards, procedures, codes, guidelines used in the partner's area.
5. Inspection planning methods and guidelines.

These needs have been addressed by generating a list of all potential users in the partners' areas. Reflecting the level of equipment available for the computerized expert system to be developed, a widely used base software was introduced to achieve transparency at an early stage of work. For this purpose the package 'Microsoft Office' was chosen. Combined with 'Visual Basic' to establish the cross-reference modules, it will enable the collected data to be entered, tested and verified. More sophisticated hypermedia software will eventually be introduced later during the elaboration of the system.

[Figure 1: State of structure for the database access - selection hierarchy from all countries down to country, utilities, plants and blocks]

The material databases' window, including access to chemical composition, physical properties, structural data, temperature test data, etc., is presented in Figure 2. Design data (standards, procedures, codes, guidelines used in the partner's area) can be addressed from the basic window sheet similarly to the material database, as shown in Figure 3; the same holds for RLA methods and for inspection planning methods and guidelines.
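The cross-reference module between eastern and western material designations can be sketched as follows. The schema and lookup function are illustrative assumptions; only the Ravne 1.7335 / 13CrMo44 pairing is taken from the example database screens.

```python
import sqlite3

# Sketch of a materials cross-reference module, as in the TINCA database.
# The table layout and function are hypothetical; the sample row pairs the
# Slovenian standard number 1.7335 (Ravne) with the western name 13CrMo44.

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE cross_ref (
    country      TEXT,
    producer     TEXT,
    local_number TEXT,
    western_name TEXT)""")
conn.execute(
    "INSERT INTO cross_ref VALUES ('Slovenia', 'Ravne', '1.7335', '13CrMo44')")

def western_equivalent(local_number):
    """Look up the similar western material for a domestic standard number."""
    row = conn.execute(
        "SELECT western_name FROM cross_ref WHERE local_number = ?",
        (local_number,)).fetchone()
    return row[0] if row else None
```

The actual TINCA modules were built on Microsoft Office and Visual Basic rather than SQL code; the sketch only shows the cross-reference lookup itself.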

[Figure 2: Sample entrance sheet to the TINCA material database - example material 13CrMo44; material family: ferritic steels; subfamily: low alloy steels; group: 1-1.25% chromium steels; country: Slovenia (SI), producer: Ravne, standard number: 1.7335; note/scope: steel resistant at temperatures up to 500 °C, for tubes and component parts for steam plant]

[Figure 3: Example of a database file window for a Hungarian utility centre - Magyar-EK Energia Központ / Hungary-EC Energy Centre, Könyves Kálmán krt. 76, 1087 Budapest, with contact, location and technical details]

The Centre has been jointly established by the CEC and the Government of Hungary. Its purpose is to strengthen the co-operation within Hungary, and between Hungary and the EC, in the field of energy management, in particular energy conservation and energy efficiency, covering areas such as training, education and information, technology transfer, and the organisation of exchanges of experience in energy planning and forecasting. In addition, the Centre has since November 1992 taken on responsibility for the THERMIE technology dissemination programme in Hungary. It is strongly recommended that the Centre be approached immediately in order to establish a link, as a foundation, with Hungarian programmes for power plant engineering. Being European-minded, Hungarians should be encouraged to participate in the SPRINT idea rather than letting in US or other competitors. While immediate profits might be low, sending experts from the Consortium could provide a decisive strategic advantage.

6. Conclusions

The partners of the TINCA project started their work together in March 1995. The activities are progressing at pace and the participants are vigorously trying to fulfill all the current obligations. Gradually the actual state of the art in the field is emerging. The participants will have the material for the opening presentation of the expert system under development prepared in time. More interesting details will be presented at our next meeting and could serve as a base for future expert system development in other countries.

7. Literature

1. Work documents of the TINCA project, 1994/1995.
2. Urban, Jan: "Experience and improvement of power plant operation due to continuous monitoring of boiler drum life", IEE International Conference on Life Management of Power Plants, Edinburgh, UK, December 12-14, 1994 (Proceedings).
3. Gruden, Mihael; Br~i~, Angelo: "The low capital engagement approach to the pollution control of Termoelektrarna Toplarna Ljubljana", Energy and Environment, Opatija, Croatia, 1994.



MODERN APPROACHES TO COMPONENT LIFE ASSESSMENT - DAMAGE, DEGRADATION, DEFECTS

J M Brear, R D Townsend
ERA Technology, UK

1 INTRODUCTION

The first phase of component assessment generally comprises a code-based calculation of life consumption and a review of operating and maintenance history. Should the component prove unacceptable on any of the indicated criteria, the assessor should move to a hands-on inspection, involving conventional NDE methods and various metallographic techniques. This paper outlines the approaches available to assess the levels of damage and degradation present and to determine the remaining life of the component. The particular methods required to predict the behaviour of any defects found are also described.

The overall philosophy is to identify the physical metallurgical processes that are either directly life limiting, eg creep cavitation, or are correlated with life consumption, eg thermal softening. For each process, directly observable indicators are selected that can be used qualitatively or quantitatively in life prediction. Interactions between processes are also considered. In the presence of a defect, this philosophy is extended to include the effects of different loading regimes on initiation, growth and fracture processes.

2 DAMAGE AND DEGRADATION

Components exposed to high temperature conditions, under which creep and other time dependent processes occur, will suffer degradation of their properties over periods of extended service. In low alloy steels, creep damage leading to failure results from:

i) microstructural degradation and continuous reduction in creep strength during service exposure
ii) intergranular creep cavitation

Both processes normally occur simultaneously; the prevalence of either is determined by the initial structural state and purity, and by the conditions of stress and temperature. Microstructural degradation and the corresponding thermal softening of the base metal can result in a variety of microstructural effects in the steel, such as changes in the composition, size and spacing of carbides, in ferrite composition, in solid-solution strength, and in lattice parameter. Creep damage (cavitation) can be measured directly. Carbide characteristics, measured by direct observation or indirectly by hardness measurement, have proved the most sensitive indicator of thermal degradation. Metallographic methods of component life assessment are designed to generate information on these processes.

3 METALLOGRAPHIC TECHNIQUES

Implementation of the metallographic methods can be done by removal of samples, or non-destructively 'in-situ' by replication. Although samples can be removed from most components, there are situations in which 'in-situ' replication may provide the only possible approach to microstructural evaluation, eg when the removal of a sample is geometrically difficult or is liable to affect the integrity of the component, or when repeated observations are required.

3.1 Surface Replication

Replicas can provide information on the condition of the material from which a component is made. The two major applications of replication techniques are (1) the study of microstructure (creep cavitation, grain size, etc) using surface replication and optical microscopy, and (2) the examination and identification of small second-phase particles by extraction replica techniques, for example for the purpose of interparticle spacing determination. Replication is a non-destructive technique that can provide data at any accessible point on the component surface. It does, however, only provide data relevant to the surface of the component. Samples extracted for metallographic examination provide similar information, as, for example, through the wall thickness, but at a limited number of positions only. The methods of interpretation described here apply to both replicas and metallographic samples.

The information obtainable includes:

* State of Degradation: precipitate growth and spheroidization
* State of Damage: extent of creep cavitation and cracking

Changes in grain orientation also occur with strain during elevated temperature service, and these can be monitored by direct observation. However, this technique has not yet reached the stage where it is suitable for routine component assessment.

3.2 Hardness Measurement

Hardness measurement can provide information on the state of degradation of ferritic steel components. Measurements are non-destructive and can be taken at any accessible point; similar data can be obtained from extracted samples, eg through the wall thickness. The information obtainable includes:

* State of Degradation: indirect measure of overall precipitate size and spacing
* Cross-weld Hardness Differential: indirect measure of creep strength differences

The measurements can be used for:

* Temperature estimation
* Qualitative life prediction
* Quantitative life prediction
* Weld failure location prediction

The same standard of surface preparation is required for hardness measurement as for replication.

3.3 Carbide Extraction Replicas

Carbide extraction replicas can provide information on carbide precipitate particle characteristics, specifically:

* Carbide spacing
* Carbide size and morphology
* Carbide composition

The information obtained can be used for:

* Temperature estimation
* Qualitative life prediction
* Quantitative life prediction

4 INTERPRETATION OF SURFACE REPLICAS

4.1 Qualitative Techniques

State of degradation

The microstructure of low alloy, creep resistant steels evolves with time at service temperatures, the most obvious visual change being the coarsening and spheroidization of the carbide precipitates. The precise evolution is dependent upon the initial, as fabricated, state. More detailed schemes, taking into account both grain boundary and grain interior precipitates, have been developed for base material and for heat affected zones (Ref.10).

The structures occurring in a low-alloy steel weldment are determined by the temperature profile and can be related to the iron-carbon phase diagram; this is shown schematically in Fig.1. If the weldment is subsequently renormalised, then uniform fine-grained structures are produced throughout, with traces of the weld beads visible on etching as a consequence of slight differences in chemical composition.

Damage location

Creep damage - cavitation or cracking - must be assessed correctly, both for fitness for service evaluation and for life prediction. In the case of weldments, it is an important part of damage assessment to determine the microstructural region in which damage occurs. On examination of a replica, the microstructural regions in which damage occurs should be noted, and the orientation, with respect to the weld, and general distribution of the damage recorded. If several regions are damaged, they should be assessed separately.

On examination of a sample, the same information should be recorded prior to formal quantification. The distance between the damage and the fusion boundary, or a similar unambiguous feature, should be given, together with the position/variation of damage through the thickness or the association with the cusp region in the weld.

Damage classification

Various schemes of qualitative damage classification have evolved from the original proposals of Neubauer (Ref.4). These have attempted to improve precision, increase the applicability to a range of steels and microstructures, and incorporate the effects of other forms of degradation (Ref.11). These may be harmonised as shown in Fig.5 (Ref.6). This is intended to allow direct comparison between the different schemes and to enable historical data, recorded according to the simpler methods, to be re-interpreted in line with the newer.

4.2 Quantitative Techniques

Two methods of quantitative cavitation assessment have been validated, appropriate to different regions of the weld microstructure. The location of the damage determines the quantification route to be used:

A parameter:
* Weld metal
* Coarse grained HAZ

Cavity density:
* Parent material
* Fine grained HAZ
* Intercritical (Type IV) region
* Cusp region in a double-V weld

Procedures for evaluating these follow.

The 'A' parameter

Originally developed for use on coarse-grained HAZ material, this method has since been extended to parent material and weld metal. The 'A' parameter is defined as the number fraction of cavitating grain boundaries encountered in a line parallel to the direction of the maximum principal stress.

The measurements are made with an optical microscope, preferably using green monochromatic light and a 40X objective with 10X or 12.5X eye-pieces fitted with a cross-hair graticule, giving a magnification level of 400X to 500X. Using a micrometer stage, the replica to be measured is traversed along the direction of the maximum principal stress. As each grain boundary is intersected by the cross-hair point it is classified as either damaged or undamaged using the following set of rules (Ref.2):

Rule 1: An intersected grain boundary is only observed between the first triple point on either side of the intersection. If the boundary extends beyond the field of view then the point at which it leaves is treated as the triple point.

Rule 2: A grain boundary is classified as DAMAGED if it contains one or more cavities (or microcracking) along its observable length, including cavities centred on the triple point itself; otherwise the boundary is UNDAMAGED. If in doubt as to whether a feature is a cavity or not, it is disregarded.

Rule 3: Multiple intersections with the same boundary are each counted, and are classified with the damage state of the whole boundary. In cases of cavity linkage, clearly identifiable linked cavities should be counted individually and the fact that linkage has occurred should be noted.

Rule 4: Intersections with triple points count as one boundary intersection. The classification of DAMAGED or UNDAMAGED is determined by a 'majority vote' of the damage states of the three joined boundaries. If there is any doubt as to the identification of a feature, it is to be ignored.

With reference to Fig. 3, boundaries A, B and C are DAMAGED according to Rule 2. Similarly, boundaries D, G and J are UNDAMAGED using the same rule. Boundary J also illustrates the definition of a boundary in Rule 1, in that it extends only between the first two triple points. Intersections E and F are examples of triple point intersections classified according to the 'majority vote' of Rule 4, that is, E is damaged and F is not. Boundary intersections H and I are both counted, and must have the same damage state (in this case UNDAMAGED) since they are on the same boundary (Rule 3).

If the number of damaged boundaries is N_C and the number of undamaged boundaries is N_U, then the number fraction of cavitating boundaries, A, is simply defined as:

A = N_C / (N_U + N_C)

The length of the traverse (L) should be recorded and the grain size, defined by the mean linear intercept, calculated:

l = L / (N_U + N_C)

In order to achieve the necessary precision in the A parameter value, it is usually necessary to count a minimum of 400 grain boundaries.

Cavity density

The cavity density is most simply defined as the number of cavities per unit area. Microscope requirements are as for A parameter determination, with the addition of a camera if photographs are to be taken, and a rectangular grid to allow precise definition of the area observed. The replica is traversed in the direction of the maximum principal stress, achieving this by a series of parallel traverses, separated by two fields of view, ensuring that there is no overlap between successive fields of view. (Small gaps between fields are acceptable.) For each field of view, the total number of cavities observed within the field is noted. The total length of the traverse, or the sum of the lengths of the fields of view, is recorded. Counting may be by direct observation through the microscope. Alternatively, a photograph of each field of view may be taken and the cavities counted on an enlarged print.
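The counting arithmetic above can be sketched in a few lines. This is an illustrative aid only: the function names and the example counts are assumptions, not part of the procedure; the rules for classifying each boundary must still be applied by the metallographer.

```python
# Minimal sketch of the A-parameter bookkeeping, assuming the boundary
# counts have already been classified under Rules 1-4.

def a_parameter(n_damaged, n_undamaged):
    """Number fraction of cavitating grain boundaries, A = Nc / (Nu + Nc)."""
    return n_damaged / (n_damaged + n_undamaged)

def mean_linear_intercept(traverse_length, n_damaged, n_undamaged):
    """Grain size l = L / (Nu + Nc) from the recorded traverse length L."""
    return traverse_length / (n_damaged + n_undamaged)

# 500 boundaries counted (above the recommended minimum of 400),
# traverse length given in micrometres
A = a_parameter(120, 380)
l = mean_linear_intercept(25000.0, 120, 380)
print(A, l)  # 0.24 50.0
```

The same counts per traverse can be kept separately, which supports the homogeneity check described for the cavity density method.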

To ensure that every cavity is counted once only, their images on the photograph should be pricked through. This approach is often more accurate at high cavity densities. As with the A parameter, determinations of the cavity density for each traverse separately serve as a check on material and damage homogeneity.

4.3 Life Prediction

Continuum damage mechanics

Accurate life prediction requires a model that is mechanistically realistic, capable of predicting the evolution of damage, degradation and deformation up to the end of life, and able to use quantitative measurements actually obtained from plant. It is necessary to relate the physical measures of creep damage (A parameter and cavity density) to the state variable, ω. The continuum damage mechanics approach meets these requirements. It is also fully compatible with creep fracture mechanics (C*-type approaches) and can be adapted to include cyclic creep and creep-fatigue. It comprises coupled equations for deformation and damage rates:

dε/dt = A(T) σ^n ε^(-μ) (1 - ω)^(-n)
dω/dt = B(T) σ^ν ε^(-μ) (1 - ω)^(-n)

with
T  temperature
σ  stress
ε  strain
ω  damage

Solution of this pair of equations yields a relationship between life fraction, t/t_r, or the strain fraction, ε/ε_r, and damage:

1 - (t/t_r)^(1-μ) = (1 - ε/ε_r)^(n+1) = (1 - ω)^(n+1)

where
n  creep stress exponent
μ  primary hardening exponent
Λ  tertiary ductility ratio

This equation forms the basis for predicting creep life and time to crack initiation.
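The coupled rate equations can also be integrated numerically. The sketch below uses simple forward-Euler stepping with illustrative, uncalibrated constants (the values of A, B, n, ν, μ and the starting strain are assumptions, not material data from this procedure); the damage increment is capped per step so the run terminates cleanly as ω approaches 1.

```python
# Hedged numerical sketch of the coupled CDM equations:
#   deps/dt = A * sig**n  * eps**(-mu) / (1 - w)**n
#   dw/dt   = B * sig**nu * eps**(-mu) / (1 - w)**n
# All constants are illustrative placeholders.

def integrate_cdm(sig=1.0, A=1e-3, B=2e-3, n=5.0, nu=5.0, mu=0.3,
                  eps0=1e-4, dt=0.01, w_fail=0.99, max_steps=200_000):
    """Euler integration of the coupled strain/damage rate equations."""
    eps, w, t = eps0, 0.0, 0.0
    for _ in range(max_steps):
        common = eps ** (-mu) / (1.0 - w) ** n
        deps = A * sig ** n * common      # strain rate
        dw = B * sig ** nu * common       # damage rate
        step = min(dt, 0.001 / dw)        # cap damage increment near failure
        eps += deps * step
        w += dw * step
        t += step
        if w >= w_fail:
            break
    return t, eps, w

t_r, eps_r, w_end = integrate_cdm()
print(t_r, eps_r, w_end)
```

With equal stress exponents (n = ν), the integration reproduces the model's proportionality between damage ω and the strain fraction ε/ε_r, which is the link exploited by the cavity density method.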

Theoretical studies yield the following relationships, which have been confirmed experimentally:

A parameter: the measured value of A determines the damage ω and hence, through the equation above, the life fraction consumed to date, t_service/t_r.

Cavity density: ε/ε_r = N/N_F

where
A = number fraction of cavitated grain boundaries, measured on a line parallel to the principal stress axis
n = stress exponent for creep
μ = primary hardening exponent
Λ = ε_r/ε_s = tertiary ductility ratio
ε_r = creep rupture strain
ε_s = Monkman-Grant parameter = ε̇_m · t_r
ε̇_m = minimum creep rate
t_r = creep rupture life
t_service = service life to date
N_F = cavity density at failure

Calculations based on the A parameter

In the absence of a crack, the measured A value yields the life fraction consumed, LF. If the damage is uniform through the section, the corresponding time is the time to failure; if the damage is localised, it is the time to crack initiation. The remaining service life then follows from:

remaining service life = t_service × (1 - LF) / LF

where LF = life fraction consumed. In the presence of a crack, the corresponding relationship between the strain fraction ε/ε_r and A gives the residual ductility fraction (1 - ε/ε_r) used in the crack growth rate equation.
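The remaining-life bookkeeping above is trivial to automate once LF has been estimated. The sketch below is illustrative only; the example LF value is an assumption, and in practice LF comes from the A-parameter or cavity density correlations.

```python
# Hedged sketch: remaining service life from a consumed life fraction LF,
#   remaining = t_service * (1 - LF) / LF
# The inputs below are illustrative, not plant data.

def remaining_life(service_hours, life_fraction):
    """Remaining life (to failure or crack initiation) from LF consumed."""
    if not 0.0 < life_fraction < 1.0:
        raise ValueError("life fraction must lie in (0, 1)")
    return service_hours * (1.0 - life_fraction) / life_fraction

# e.g. 100 000 h of service at an estimated 60% of life consumed:
print(remaining_life(100_000, 0.6))  # about 66 667 h remaining
```

Whether the result is a time to failure or a time to crack initiation depends, as stated above, on whether the damage is uniform through the section or localised.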

Values of the parameters n, μ and Λ are dependent on material, stress and temperature. Ranges for three material states (all 1Cr½Mo steel) are given: a ductile parent material and a coarse grained HAZ material of intermediate ductility (both Ref. 2), and a brittle (high impurity content) coarse grained HAZ (Ref. 10). From these, lower bound, realistic or probabilistic calculations may be performed.

Calculations based on cavity density

In the absence of a crack, times to failure (for through-section damage) or crack initiation (for local damage) are given by

LF = [1 - (1 - N_A/N_F)^(n+1)]^(1/(1-μ))
remaining service life = t_service × (1 - LF) / LF

and all constants and variables are defined as before. In the presence of a crack, the remaining ductility fraction is calculated directly from:

1 - ε/ε_r = 1 - N_A/N_F

As for the A parameter, lower bound, realistic or probabilistic calculations may be performed.

Calculations based on the cavity classification method

An approximate calculation may be made by estimating the A parameter value from the qualitative cavity classification. A qualitative assessment of the damage level may be compared with the observed relationship between 'A' and Neubauer's classification (Ref. 2), and an upper bound 'A' value selected. The maximum 'A' value can then be used in any of the 'A'-life fraction equations to yield a suitable life estimate. Alternatively, the damage classification may be related to life fraction directly: Figure 4 gives a plot of damage classification against life fraction, from which minimum and maximum remanent life fractions (and hence lives) can be obtained.
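The bounding logic of the classification route can be sketched as below. The class names and the class-to-life-fraction bounds are invented placeholders for illustration; real bounds must be read from data of the kind plotted in Fig. 4.

```python
# Illustrative sketch of bounding-life estimation from a qualitative
# damage class. The table below is a hypothetical placeholder mapping,
# NOT the calibrated bounds of the source data.

CLASS_LF_BOUNDS = {  # damage class -> (min, max) life fraction consumed
    "isolated cavities": (0.10, 0.45),
    "oriented cavities": (0.25, 0.65),
    "microcracks":       (0.45, 0.80),
    "macrocracks":       (0.70, 0.95),
}

def bounding_remanent_fractions(damage_class):
    """Minimum and maximum remanent life fractions for a damage class."""
    lf_min, lf_max = CLASS_LF_BOUNDS[damage_class]
    # the maximum remanent fraction corresponds to the minimum consumed
    return 1.0 - lf_max, 1.0 - lf_min

print(bounding_remanent_fractions("oriented cavities"))
```

Multiplying these bounding remanent fractions into the remaining-life relation gives the minimum and maximum remanent lives referred to in the text.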

5 INTERPRETATION OF HARDNESS DATA

5.1 Hardness and creep strength

The creep strength and hardness of ferritic steels are essentially controlled by the same microstructural processes. The materials deform plastically at ambient and elevated temperatures by the movement of dislocations through the ferrite crystal matrix. Hardness and creep strength are both a measure of the resistance to this movement offered by the matrix dispersion of alloy carbides (typically vanadium, chromium and molybdenum carbides). In principle, therefore, it should be possible to estimate creep strength, and hence expired and remaining lives, from a measure of surface hardness. In practice, several approaches have been developed.

The hardness values measured can be used in a variety of ways:

• as a means of identifying critical component regions where hardness is markedly different from that which should be expected for a satisfactory material, eg overheated regions or improperly heat treated components
• in combination with calculational assessment of remaining life and creep damage quantification, allowing improved predictive accuracy and wider coverage of the component
• as a quantitative measure of microstructural degradation for input to base material and weldment creep models.

5.2 Temperature estimation

The strength of low-alloy steel changes with service exposure in a time and temperature dependent manner. This approach is particularly suitable when strength changes in service occur primarily as a result of carbide precipitation and growth (microstructural coarsening); strain-induced softening can often be neglected for the low strains involved in plant. Thus, any measure of change in strength during service (eg change in hardness) may be used to estimate a mean operating temperature for the component.

The tempering responses of steels at typical service temperatures, as evidenced by hardness changes influenced by time (t) and temperature (T) of exposure, can be described by the Sherby-Dorn parameter:

P = log(t) - q/T

where T is in K. A correlation between hardness H and the Sherby-Dorn parameter P can be obtained by ageing a given material, with initial hardness H_0 (at t = 0), at temperature T, and measuring the change in hardness as a function of time t. The resulting relationship is H = f(P). The curve, however, is unique to the starting material condition represented by the initial hardness H_0. Figure 5 is a schematic illustration showing a typical experimentally derived H = f(P) correlation obtained on 2¼CrMo material having an initial hardness of H_0 = 190 (Ref. 7).

Alternatively, assuming that hardness is inversely related to interparticle spacing, a formal description of these ageing curves can be defined by analogy with Lifshitz-Slyozov-Wagner (Greenwood) coarsening kinetics:

(H_t - H_ss)^(-1) = (H_0 - H_ss)^(-1) + C_0 exp(-Q/RT) · t
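Solving the coarsening-kinetics ageing law for temperature gives a direct mean-temperature estimate from one hardness measurement. The sketch below does exactly that; H_0, H_ss, C_0 and Q are illustrative placeholders (not calibrated values for any steel), and it assumes H_ss < H_t < H_0.

```python
import math

# Hedged sketch: mean service temperature from measured softening, by
# inverting the LSW-type ageing law quoted above:
#   (Ht - Hss)**-1 = (H0 - Hss)**-1 + C0 * exp(-Q/(R*T)) * t
# All material constants here are illustrative placeholders.

R = 8.3143  # gas constant, J/(mol K), as used in the procedure

def mean_temperature(H_t, t_hours, H0=190.0, Hss=120.0, C0=1.0e8, Q=250e3):
    """Mean temperature (K) consistent with hardness H_t after t_hours."""
    delta = 1.0 / (H_t - Hss) - 1.0 / (H0 - Hss)
    if delta <= 0:
        raise ValueError("no measurable softening")
    return Q / (R * math.log(C0 * t_hours / delta))

T = mean_temperature(H_t=165.0, t_hours=100_000.0)
print(T - 273.15, "degC")
```

With two successive hardness measurements, the same law can instead be fitted for both the kinetic constant and the temperature, which is the two-measurement route mentioned in the text.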

where H_ss is the saturation (solid solution) hardness level. The temperature dependence of the Sherby-Dorn parameter is thus q = Q/(R·ln 10), where R is the gas constant and Q is related to the self-diffusion activation energy. These relationships may be used to predict future softening trends, or to determine a mean temperature if two successive hardness measurements are available. (In some cases, the hardness difference between 'hot' and 'cold' regions of a component may be used.)

5.3 Life prediction

Due to the extensive post-exposure stress rupture programmes that have been carried out over recent years, databases relating material hardness empirically to rupture life are becoming available. For known operating conditions (stress and temperature), the measured hardness of the material can be used to generate a range of predicted life. The rupture life axis is temperature compensated; the hardness axis is stress compensated. However, within these databases, currently no compensation has been made for possible variations in heat treatment or other process variables, and therefore a wide scatterband in predicted rupture life capability exists. A lower bound fit to these data is therefore generally adopted. Nevertheless, despite the limitations, the data already constitute a useful indicator of minimum remaining life capability. Typically the scatter is a factor of 3 smaller than that obtained using standard materials data only.

In terms of application, if measured component parent material hardness values indicate minimum remaining life in excess of the target life, then no further refinement is required at this stage. If, however, hardness values suggest the converse, then refinement of the analysis using quantitative methods or accelerated post-exposure testing should be considered.

Life can also be estimated from the qualitative degradation class. Figure 7 shows the relationship between degree of spheroidisation and life fraction for the same three materials as were included in Fig. 4. It is immediately apparent that for the most ductile material, degradation class is the more sensitive indicator of life consumption, whilst for the most brittle material, damage gives the better prediction. For intermediate ductility materials, both factors need to be taken into account.

5.4 Weld failure location prediction

Using hardness data of this kind, it is possible to construct a weld predictor diagram (Fig. 6). This shows weld metal hardness against parent metal hardness, and two lines corresponding to equal rupture strengths, for sub-critically stress relieved welds and for fully renormalised welds (Ref. 8). Figure 6 gives the data currently available for 2¼CrMo steels. Plotting a point to show the current hardnesses of a weldment allows a prediction of failure location to be made. Above the relevant line, parent material failure is expected; below it, weld metal failure is expected.
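The predictor-diagram decision is a simple side-of-line test, sketched below. The line coefficients are invented placeholders; the real equal-rupture-strength lines must be taken from data of the kind in Fig. 6, with the appropriate line chosen for the weld's heat treatment condition.

```python
# Hedged sketch of the weld predictor diagram logic: the weldment point
# (parent hardness, weld hardness) is compared with a line of equal
# rupture strength. Slope and intercept below are hypothetical.

def predict_failure_location(parent_H, weld_H, slope=1.0, intercept=-10.0):
    """Return the expected failure location for a plotted weldment point."""
    line_weld_H = slope * parent_H + intercept
    if weld_H > line_weld_H:
        # weld metal sits above the equal-strength line (relatively strong)
        return "parent material failure expected"
    return "weld metal failure expected"

print(predict_failure_location(160.0, 170.0))  # weld well above the line
print(predict_failure_location(160.0, 140.0))  # weld below the line
```

Re-evaluating the same point as both hardnesses fall with service exposure reproduces the failure-location transition discussed in the following paragraph.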

It is possible for the hardnesses of weld and parent material to reduce at different rates with service exposure, causing the plotted point for the weld to cross the line and giving a transition in failure location with service life. This approach is currently being extended to include Type IV failures.

6 INTERPRETATION OF CARBIDE EXTRACTION REPLICAS

6.1 Temperature estimation

Methods of temperature estimation analogous to those used for hardness measurements have been proposed, and have met with some success. Methods of time-temperature estimation based on carbide composition and morphology (Ref. 9) are also available.

6.2 Life prediction

A mechanistic model based method of quantitative life prediction has been established on the following principles. The presence of carbide precipitates was postulated to result in a 'threshold' stress, which must be exceeded to allow dislocations to climb over the particles, so that the creep-rate equation under the effective stress can be written as

dε/dt = B (σ - α′μb/λ)^n

where σ is the applied stress, α′ is a constant, μ is the shear modulus, b is the Burgers vector, λ is the mean interparticle spacing and B is a constant containing the temperature dependence, defined as

B = B_0 exp(kΔT)

The kinetics of carbide coarsening can be described as

λ_t = λ_0 + C_0 exp(βT) · t

where λ_t is the instantaneous interparticle spacing at time t, λ_0 is the spacing at t = 0, T is temperature in K, and C_0 and β are constants. Thus:

dε/dt = B_0 exp(kΔT) × [σ - α′μb/(λ_0 + C_0 exp(βT) · t)]^n
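The mechanism above (rising creep rate as the threshold stress falls with carbide coarsening) can be sketched numerically. Every constant below is an illustrative placeholder, not calibrated data for any steel; the lumped constant B already contains the temperature dependence.

```python
import math

# Hedged sketch of the carbide-coarsening creep model quoted above:
#   lam(t)      = lam0 + C0*exp(beta*T)*t
#   eps_dot     = B * (sigma - alpha*mu*b/lam)**n   (zero below threshold)
# All constants are illustrative placeholders.

def spacing(t_h, lam0=0.3e-6, C0=1e-18, beta=0.02, T=823.0):
    """Instantaneous interparticle spacing lam(t), in metres."""
    return lam0 + C0 * math.exp(beta * T) * t_h

def creep_rate(sigma, lam, B=1e-36, n=4.0, alpha=0.5, mu=60e9, b=2.5e-10):
    """Creep rate under the effective (above-threshold) stress, per hour."""
    sigma_th = alpha * mu * b / lam     # threshold stress, Pa
    s_eff = sigma - sigma_th
    return 0.0 if s_eff <= 0.0 else B * s_eff ** n

lam_now = spacing(50_000.0)
print(lam_now, creep_rate(60e6, lam_now))
```

As the spacing grows the threshold stress falls, so the computed creep rate after 50 000 h exceeds the as-new rate, which is the tertiary-like acceleration the model is built to capture.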

Using the above model, the creep-rate equation can be integrated between the limits t = 0 and t = t, and the strain accumulated up to that time determined. By substituting values of B_0, k, ΔT, σ, α′, λ_0, C_0, β and n, a creep curve for the material can be computed. From the creep curve, the time to reach a given critical strain, or the time to rupture, can be estimated.

For application of the model to a field component, monitoring of the carbide spacing λ_t as a function of time, or at different locations of known temperature, is necessary in order to determine the carbide-growth kinetics. Ideally, samples or replicas from three different temperatures should be removed and the carbide spacing λ_t measured. From these values, the constants λ_0, C_0 and β in the carbide coarsening kinetics equation can be determined. The other constants needed, such as B_0, k, α′ and n, still have to be assumed, using bounding values of data obtained on other heats. All these values are substituted into the above equation to compute a creep curve for the material, from which the failure time t_r at any arbitrarily selected value of failure strain can be calculated. Alternatively, because the creep rate ε̇ is known, the expended life fraction can be calculated directly instead of using the carbide-coarsening model, and the answers are expected to be at least as accurate, if not more so. Reasonable agreement has been demonstrated between rupture life predictions from precipitate size and actual rupture lives determined by experiment.

The application of the carbide-coarsening model at present has numerous limitations. The service applied stress and the local temperature where remaining life estimates are to be made should be known; this is difficult to achieve in practice, since local temperature measurements in components are rarely made. Therefore it is inevitable that the carbide coarsening kinetics specific to the component must be determined by taking samples or replicas from locations of known temperature. Further, the initial carbide spacing λ_0 is usually unknown, carbide distributions in steels are non-homogeneous, and the starting microstructures for different components are never the same. The carbide coarsening model thus contains many constants which are difficult to obtain and evaluate; even after the carbide coarsening kinetics for the particular cast of the component under examination have been determined, the other constants, such as B_0, k, α′ and n, have to be assumed. Further, the failure criterion assumed in terms of a critical strain is arbitrary. Additionally, measurement of carbide spacing from extraction replicas is extremely subjective, requires a significant time commitment to achieve a representative measurement, and generally gives limited reproducibility. Where samples cannot be removed from the component, in-situ carbide extraction techniques have to be employed, which are difficult in plant situations. Therefore, the practical application of the technique is difficult.
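The integration step described above (accumulating strain while the spacing coarsens, until an assumed critical strain is reached) can be sketched as follows. All constants, and the 1% critical strain, are illustrative assumptions only.

```python
import math

# Hedged sketch of the life estimate from the carbide-coarsening model:
# integrate the creep rate, with the threshold stress falling as the
# spacing coarsens, until an assumed critical strain is reached.
# Every constant below is an illustrative placeholder.

def time_to_critical_strain(sigma=60e6, eps_crit=0.01, dt=100.0,
                            lam0=0.3e-6, C0=1e-18, beta=0.02, T=823.0,
                            B=1e-36, n=4.0, alpha=0.5, mu=60e9, b=2.5e-10,
                            t_max=1e7):
    eps, t = 0.0, 0.0
    while eps < eps_crit and t < t_max:
        lam = lam0 + C0 * math.exp(beta * T) * t   # coarsened spacing
        s_eff = sigma - alpha * mu * b / lam       # effective stress
        if s_eff > 0.0:
            eps += B * s_eff ** n * dt
        t += dt
    return t

print(time_to_critical_strain())  # hours to reach the assumed 1% strain
```

Raising the applied stress shortens the computed life, as expected; the arbitrariness of the critical-strain failure criterion, noted in the text, is visible here as the free parameter eps_crit.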
7 HARMONISATION OF RESULTS

In assessing the state of a component, emphasis is placed on using several, complementary techniques rather than relying on single methods of evaluation and assessment. This is particularly true where the effects of a number of interacting metallurgical processes have to be correctly interpreted. In Europe, there has been much emphasis on cavitation based assessment, and only recently have the effects of other degradation processes been incorporated into the interpretative schemes (Ref. 6). More extensive schemes have been developed elsewhere, but these are subject to the same limitations; at present these modifications seem very limited in their applicability beyond the particular classes of material and component design on which they were derived. It is considered, therefore, that a realistic future expectation is the development of an integrated metallographic approach which correctly balances the influences of time, temperature and stress on softening and on strain and damage accumulation.

As a preliminary move towards such an integration, the data of Figs. 4 and 7 have been combined in Fig. 8 to generate a damage degradation map. On this map, life fraction contours, interpolated from the source data in Refs. 2 and 10, have been superimposed. These show, for typical service conditions, the evolution of the three materials considered in terms of damage and degradation class, and contrast damage only, degradation only and mixed behaviour.

8 DEFECTS

No material or structure is free from defects, nor immune to their formation. Ongoing improvements in non-destructive examination techniques have provided the means to locate, characterize, size and monitor defects, such that it is now realistic to formulate rigorous procedures for their assessment. Such procedures give a firm basis for run, repair or replace decisions, and for defining inspection scope, frequency and precision.

The procedure described here addresses the assessment of defects, either actual or postulated, in components operating at elevated temperatures. It reflects current standards (Refs. 12-16) and ongoing research worldwide (Refs. 17-19). It includes treatment of crack initiation and growth under creep, fatigue and creep-fatigue, from manufacturing/fabrication defects or from accumulated damage. Throughout, the emphasis is on achieving an efficient compromise between accuracy and simplicity. The principles of each stage of the assessment process are outlined and detailed calculation procedures are given.

The procedure covers the following aspects of defect analysis.

Failure process
• global deterioration: embrittlement, ageing, creep damage
• crack initiation by creep, fatigue and creep-fatigue

• crack growth by creep, fatigue and creep-fatigue, with interaction with ligament damage

Failure criteria
• global creep failure
• critical crack size: brittle fracture, plastic collapse
• leak-before-break

Materials
The procedure is applicable to ferritic and austenitic steels for which long term creep rupture and ductility data, together with some fatigue data, are available.

Components
The procedure covers components subject to steady mechanical and cyclic thermal or mechanical loading, at elevated temperatures in or below the creep range. At present it is restricted to components subject to 'global shakedown', that is, regions experiencing cyclic plasticity are sufficiently small that the overall instantaneous load-deformation behaviour of the structure is linear.

9 SERVICE PARAMETERS RELEVANT TO DEFECT ASSESSMENT

9.1 Cause of Cracking

Prior to performing the calculational defect assessment, the most likely cause of cracking should be identified. This will be based upon: the findings of conventional non-destructive examination (NDE), which should indicate the size, form and location of the defect(s); local metallographic examination (especially surface replication and hardness measurement), to characterize the general material condition and any damage local to the cracking; and visual inspection, including dimensional checks, to define the general component conditions.

Particular situations that may be discovered include:

• Evidence of stress corrosion or environmentally assisted cracking. In this case further advice should be sought before proceeding.
• Evidence of overstressing, e.g. distortion, sometimes accompanied by creep damage. If this is local, then a repair may be the most cost effective solution. In any case the cause should be rectified.
• Evidence of overheating, e.g. distortion plus excessive material degradation. This should be considered in the same way as overstressing. Care should be taken to use appropriate materials data if proceeding with an assessment in such cases.

• Evidence of a general end-of-life situation, e.g. general degradation and/or damage in the component, sometimes with excessive deformation. Such components should only be kept in service with cracks for a short time, until repair or replacement can be effected.
• Evidence of a local end-of-life situation, e.g. degradation and/or damage or fatigue cracking local to a stress raising feature.
• Evidence of crack initiation and/or growth from a pre-existing defect.

9.2 Operating Conditions

Loading and temperature histories are required for the total assessment period, past and future. Sensible assumptions regarding future operation should be made; this procedure may be used to underwrite such operation. Previous code-based calculations will have divided the service history into periods of steady-state operation, each characterized by a stress and temperature, and identified distinct categories of service cycle, each characterized by heating/cooling rates and pressure and thermal stress ranges. This information can be used directly in the defect assessment.

Normal temperature variation during operation can be accommodated by calculating an effective temperature for the life limiting process. Major changes of long term duration in operating temperature can be dealt with by noting that a general time-temperature equivalence can be established for creep dominated processes. Cyclic operation and start-up transients are included in the fatigue analysis.

Account should be taken of the results of previous code-based calculations, which should generate estimates of steady state stresses, transient stresses, and life fractions consumed to date by creep and fatigue. All applied stresses should be categorized as either primary (in equilibrium with external loads, e.g. mechanical) or secondary (in internal equilibrium, e.g. thermal and residual).

9.3 Crack Parameters

Defects are classified as:
• known (or postulated) to be present at start of service
• known to have formed during service
• discovered during service

and, based upon the NDE data, as:
• volumetric
• planar
• point

An accurate measure of crack size, in terms of length and through-thickness depth, is required, together with as much information on the position and geometry of the defect as is available. The generally irregular shape of a defect is idealised to an ellipse of axes 2a, 2c. In general, defects found during service are conservatively assumed to have existed from the start of service at the same size as when discovered.

If the defect is not aligned with a plane of principal stress, then it should be projected onto the three principal planes, and the stress intensity factors and reference stress calculated for each plane. The assessment should be based upon the projection onto the plane giving the highest values for these parameters. Further advice should be sought if:

• the defect is at an angle of >20° to this plane
• there is less than 20% difference in either of these parameters between two planes
• the highest stress intensity and the highest reference stress lie on different planes
• one of the principal stresses is significantly compressive (i.e. the second in magnitude)

Interactions between defects should be accounted for: the effective dimensions after interaction are those of the overall containment rectangle. If there are multiple defects, interactions may need to be considered iteratively.

9.4 Stress Analysis

The relevant stresses are those which would exist in the neighbourhood of the defect if the component were uncracked. Stresses should be classified as:

• Primary: due to loads which contribute to plastic collapse, in equilibrium with external forces, e.g. internal pressure, mechanical loads, long range thermal and residual stresses
• Secondary: due to forces which do not contribute to plastic collapse, in internal equilibrium, e.g. short range thermal and residual stresses
• Peak: due to local stress raising features

Initially, code-derived stresses are used. When greater accuracy is needed, elastic analysis may be performed, with the results corrected for plasticity; Neuber's method is commonly applied. Alternatively, simplified inelastic methods are used where possible, shakedown analysis being preferred. Stress intensification factors are calculated within the procedure itself.
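The containment-rectangle rule for interacting defects, quoted above, can be sketched in a few lines. The coordinate convention and the example defects are assumptions for illustration; the decision of *whether* two defects interact still follows the procedure's interaction criteria.

```python
# Hedged sketch of the defect-interaction rule: interacting defects are
# replaced by their overall containment rectangle, giving effective
# half-dimensions c (surface half-length) and a (half-depth).
# Coordinates and sizes below are illustrative.

def containment_dimensions(defects):
    """defects: list of (x_center, z_center, half_length_c, half_depth_a)."""
    x_min = min(x - c for x, z, c, a in defects)
    x_max = max(x + c for x, z, c, a in defects)
    z_min = min(z - a for x, z, c, a in defects)
    z_max = max(z + a for x, z, c, a in defects)
    # effective half-dimensions of the idealised ellipse 2a x 2c
    return (x_max - x_min) / 2.0, (z_max - z_min) / 2.0

# two neighbouring defects judged to interact
c_eff, a_eff = containment_dimensions([(0.0, 0.0, 5.0, 2.0),
                                       (12.0, 0.0, 4.0, 3.0)])
print(c_eff, a_eff)  # 10.5 3.0
```

With several defects, the iterative treatment mentioned in the text amounts to re-running this combination until no further interactions are found.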

Initial elastic and creep-redistributed stresses are required, at the critical point(s) for initiation and through the structure for crack growth. The timescale for redistribution should also be determined, from creep/relaxation data. For fatigue and creep-fatigue assessment, typical operational cycles should be analysed, using creep/relaxation and (cyclic) stress-strain data, and hysteresis loops derived. From these, the stress and strain ranges may be obtained.

10 MATERIALS DATA REQUIREMENTS FOR DEFECT ASSESSMENT

As the understanding of materials behaviour improves, unified models of creep and plasticity are being derived, thus allowing consistent description of flow and creep strengths, damage and hardening. Potentially, complete integration of these with the fatigue models is possible. Such approaches have great value where raw data are in short supply, and this procedure is formulated to use them where possible. In many cases, however, most of the information required for defect assessment can be obtained or estimated from standard data tables. These can be used if no better information is currently available; they are also suitable for a preliminary, simplified defect assessment.

10.1 Creep Deformation, Ductility and Life

Where possible, detailed creep data appropriate to continuum damage type models should be used. In the absence of data appropriate to such models, simple approximations are also available (Ref. 13); simple power law expressions can be used as a first approximation. Ductility data are required for crack growth assessment.

Unified model

This is based on continuum damage mechanics and describes the accumulation of strain (ε) and damage (ω) at a given stress (σ) and temperature (T):

dε/dt = ε̇_0 (σ/σ_0)^n (ε/ε_0)^(-μ) (1 - ω)^(-n) exp(-Q_C/RT)
dω/dt = ω̇_0 (σ/σ_0)^ν (ε/ε_0)^(-μ) (1 - ω)^(-n) exp(-Q_D/RT)

with the following materials properties:

ε̇_0, σ_0  fundamental flow rate and strength
Q_C, Q_D  activation energies
n, ν, μ  exponents

R is the gas constant (8.3143 J mol⁻¹ K⁻¹). Expressions for ductility and rupture life are also available.

Standard data

Creep curves (strain-time, and thence the minimum creep rate) may be estimated by plotting the iso-strain data and then interpolating at the required stress across the series of curves. Rupture lives (t_r) may be obtained directly from standard data, by interpolation or use of parametric formulae. Rupture ductility (ε_r) may be obtained from the minimum creep rate ε̇_min, derived as above, and the rupture life, via the ductility parameter:

ε_r = Λ ε̇_min t_r

10.2 Stress Relaxation

Stress relaxation by creep can occur during cyclic operation. It is therefore useful to have actual relaxation data, rather than rely on forward creep data.

Unified model

No simple stress relaxation expression is available, but the form of equation due to Feltham is broadly consistent with the creep model:

Δσ = σ_0 B″ ln(Δt × 3600 + 1)

where
Δσ is the stress decrease due to relaxation
σ_0 is the initial stress
Δt is the relaxation period (in hours)
B″ is a material constant

Standard data

If the minimum creep rate ε̇_min, for the initial stress σ_0, is derived as above, then

Δσ = E ε̇_min Δt

where E is the tensile modulus.
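The two relaxation estimates can be compared directly, as sketched below. The value chosen for the Feltham constant B″, and the other inputs, are illustrative assumptions only.

```python
import math

# Hedged sketch of the two dwell-relaxation estimates quoted above.
# B_dd stands in for the material constant B''; all values illustrative.

def feltham_relaxation(sigma0, dt_hours, B_dd=0.02):
    """Feltham form: d_sigma = sigma0 * B'' * ln(dt*3600 + 1)."""
    return sigma0 * B_dd * math.log(dt_hours * 3600.0 + 1.0)

def standard_relaxation(E, min_creep_rate, dt_hours):
    """Standard-data form: d_sigma = E * eps_dot_min * dt."""
    return E * min_creep_rate * dt_hours

# stress relaxed (Pa) over an assumed 10 h dwell from 100 MPa
print(feltham_relaxation(100e6, 10.0))
print(standard_relaxation(200e9, 1e-9, 10.0))
```

The logarithmic Feltham form saturates with dwell time, whereas the standard-data form is linear in Δt and so is only reasonable for short dwells at roughly constant creep rate.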

10.3 Plasticity Data

For fatigue assessment, cyclic plasticity data are preferred to simple tensile data. Ordinarily a Ramberg-Osgood relation is assumed; this is compatible with the creep formulation.

Unified model

This is a simple inversion of the creep equation.

For load control:

dε/dt = (dσ/dt)/E* + ε̇_0 (σ/σ_0)^n (ε/ε_0)^(-μ) (1 - ω)^(-n) exp(-Q_C/RT)

For strain control:

σ = σ_0 (1 - ω) [ (ε̇/ε̇_0) (ε/ε_0)^μ exp(Q_C/RT) ]^(1/n)

where E* is the effective modulus, and ε̇ is the applied strain rate or σ̇ is the applied loading rate.

Standard data

Tensile data are directly obtainable from standard data, which provide stress-strain data including yield (or 0.2% proof stress) and ultimate tensile strength. These tables give minimum values. A realistic estimate of the ultimate tensile strength may be obtained from the hardness of the material.

10.4 Fatigue Data

Endurance data are required for the initiation assessment. For creep-fatigue assessment, data from tests with an appropriate dwell period are preferred.

Unified model

This is still at the development stage.

Standard data

Information available in national codes is presently used; parametric relations between cycles to failure and strain range are available for several materials.
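The Ramberg-Osgood relation assumed above splits the total strain into elastic and power-law plastic parts. The sketch below evaluates it with illustrative constants (E, K and m are placeholders, not data for any of the steels discussed).

```python
# Hedged sketch of a Ramberg-Osgood stress-strain evaluation:
#   eps = sig/E + (sig/K)**(1/m)
# E, K and m below are illustrative placeholders.

def ramberg_osgood_strain(sigma, E=200e9, K=900e6, m=0.1):
    """Total strain (elastic + plastic) at stress sigma (Pa)."""
    return sigma / E + (sigma / K) ** (1.0 / m)

for sig in (100e6, 300e6, 500e6):
    print(sig / 1e6, "MPa ->", ramberg_osgood_strain(sig))
```

For cyclic use, the same form is conventionally applied to stress and strain ranges (with cyclic constants), which is why cyclic plasticity data are preferred to monotonic tensile data.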

10.5 Crack Growth

Creep crack growth data, in terms of the parameter C*, are required. The standard expression for the creep crack growth rate (ȧ) is directly related to the continuum damage model for creep:

ȧ = A (C*)^φ

where A is a function of creep ductility and φ is related to creep stress sensitivity. If the creep ductility (ε_r) at the appropriate conditions is known, then

A = 0.003/ε_r
φ = n/(n + 1)

where n is the stress dependency factor for creep.

Fatigue crack growth data, in terms of the parameter K, are required. The standard expression for fatigue crack growth (da/dN) has not yet been formally linked to the unified model:

da/dN = C (ΔK)^m

where
da/dN is the crack growth per cycle
ΔK is the range of stress intensity factor over the cycle
C, m are material constants

At present the conservative values C = 8×10⁻¹¹ (consistent units) and m = 3 may be used for ferritic and austenitic steels.
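Both growth laws are single-line evaluations, sketched below. The fatigue constants are the conservative values quoted in the text; the example C*, ΔK, ductility and n values are illustrative assumptions, and consistent units are assumed throughout.

```python
# Hedged sketch of the two crack-growth laws quoted above
# (consistent units assumed, as stated in the text).

def creep_growth_rate(c_star, eps_r, n):
    """a_dot = A * C***phi, with A = 0.003/eps_r and phi = n/(n+1)."""
    A = 0.003 / eps_r
    phi = n / (n + 1.0)
    return A * c_star ** phi

def fatigue_growth_per_cycle(delta_K, C=8e-11, m=3.0):
    """da/dN = C * delta_K**m, conservative constants from the text."""
    return C * delta_K ** m

print(creep_growth_rate(c_star=1e-3, eps_r=0.1, n=5.0))
print(fatigue_growth_per_cycle(20.0))  # about 6.4e-7 per cycle
```

Note how the creep law's constant A penalises low-ductility material directly, consistent with the ductility-exhaustion basis stated above.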

10.6 Fracture Toughness

Values of fracture toughness are presently being collated. At present the following may be used (from Ref. 13). Toughness values quoted are K1c based on 0.2 mm crack extension.

Material                                   | Actual material        | Temperature range °C | Toughness, mean | Toughness, lower bound
CMn steels                                 | Si killed CMn steel    | 300-380              | 164             | 99
CMn steels                                 | Al killed CMn steel    | 300-380              | 196             | 146
Low alloy steels                           | 2¼CrMo steel           | 100-500              | 150             | 100
Wrought AISI 300 series austenitic steels  | Type 316 steel         | 300-600              | 140             | 105

11 THE PROCESSES CONSIDERED IN DEFECT ASSESSMENT

11.1 Global Deterioration
Crack assessment must include allowance for global deterioration. Thermal ageing and creep cavitation are the most important; temper embrittlement is sometimes significant. The effect of these processes on yield strength and toughness must be determined, as these influence initiation, growth and final fracture. Estimates of creep life from standard data or measured damage levels may be used to assess the timescale.

11.2 Crack Initiation
Cracks may initiate from creep damage accumulation. Fatigue crack initiation is based on endurance data for the appropriate operational cycle. Creep-fatigue initiation is based on a linear summation of creep strain fraction and fatigue cycle fraction. (Creep life fraction is a poor alternative to strain fraction.) Initiation of creep cracks from pre-existing defects is assessed on the basis of critical crack tip opening displacement and local strain accumulation. The results of the stress analysis determine whether the initiation time corresponds to failure or whether a safe crack growth period is possible.
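The linear summation rule for creep-fatigue initiation can be sketched as below. The numerical values are illustrative only; the criterion (initiation when the summed damage reaches unity) follows the text, while the input arrays are invented for the example.

```python
# Sketch of the linear summation rule for creep-fatigue initiation:
#   D = sum(n_i / Nf_i)  +  sum(creep strain fraction)
# with initiation assumed when D reaches 1.  Inputs are illustrative.

def creep_fatigue_damage(cycle_counts, endurances, creep_strains, ductility):
    """Fatigue cycle fractions plus creep strain fractions."""
    fatigue = sum(n / Nf for n, Nf in zip(cycle_counts, endurances))
    creep = sum(eps / ductility for eps in creep_strains)
    return fatigue + creep

D = creep_fatigue_damage(
    cycle_counts=[200, 50],        # cycles experienced, per cycle type
    endurances=[2000, 400],        # cycles to failure, per cycle type
    creep_strains=[0.002, 0.001],  # accumulated creep strain per period
    ductility=0.01,                # available creep ductility
)
print(D)  # initiation is predicted once this sum reaches 1.0
```

Using creep strain fraction rather than creep life fraction follows the recommendation above.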

11.3 Crack Growth
Standard solutions for reference stress and stress intensity are used. Creep crack growth is predicted using C* correlations; if transient behaviour is expected, then C(t) calculations are used. Fatigue crack growth is predicted using ΔK correlations, taking due account of crack closure. For pure creep, crack growth is calculated on a time base. For creep-fatigue, the creep element is determined per cycle and the total crack growth per cycle is obtained as the linear sum of the creep and fatigue components. In all cases, account must be taken of any deterioration in materials properties ahead of the crack. Safety factors are applied to crack growth in the period before full stress redistribution has occurred. For a simple, preliminary assessment, calculation of through-thickness behaviour is usually sufficient. It is usually wise, in a full assessment, to calculate growth both in the through-thickness direction (crack size a) and perpendicular to it (crack size c), since these may differ.

11.4 Failure Criteria
Failure by global creep processes should always be considered, as this may impose the effective limit to operation. Critical crack sizes are determined by reference to a two-parameter failure assessment diagram (Fig. 10). For a given structure, load and material, regions may be plotted on this representing stable defects, initiating defects and crack growth regimes. The changes in toughness and collapse load with global deterioration should be included. A leak before break analysis should be performed, as this may be life limiting. Consideration of the sensitivity of the defects to overloads is required. The stress and temperature gradients ahead of the crack should also be considered, and crack arrest calculations performed where appropriate.

12 LIFE PREDICTION

The time to failure by global deterioration is first calculated. The total life due to crack initiation and growth, to the fast fracture limit, is then determined. Comparison of these timescales gives the overall life of the structure (Fig. 11).
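The timescale comparison of Section 12 can be sketched as a one-line rule: the overall life is limited by the shorter of failure by global (continuum) damage and crack initiation plus growth to the fast fracture limit. The hours used below are illustrative only.

```python
# Sketch of the Section 12 timescale comparison.  Overall life is the
# minimum of (a) time to failure by global deterioration and
# (b) crack initiation time plus growth time to the fast fracture limit.

def overall_life(t_continuum_damage, t_initiation, t_growth_to_fracture):
    t_crack_path = t_initiation + t_growth_to_fracture
    return min(t_continuum_damage, t_crack_path)

# Illustrative operating hours only:
print(overall_life(t_continuum_damage=120e3,
                   t_initiation=40e3,
                   t_growth_to_fracture=60e3))
```

Which branch governs determines whether the component is continuum-damage limited or defect limited, the distinction drawn in Fig. 11.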

13 ACKNOWLEDGMENTS

This paper is published with the permission of ERA Technology Ltd. It represents not merely the work and experience of the authors but also the efforts of many direct collaborators and other workers in the field, as the references show. Particularly, much of the recent consolidation of the approaches discussed has taken place within the EU sponsored SPRINT Project SP249; the authors give their special thanks to all the partners in that project.

14 REFERENCES

1 Cane B.J. & Townsend R.D., Prediction of remaining life in low alloy steels, Proc Seminar 'Flow and Fracture at Elevated Temperatures', ASM, Philadelphia, 1983, pp 279-316
2 Shammas M., Metallographic methods for predicting the remanent life of ferritic coarse-grained weld heat affected zones subject to creep cavitation, Proc Int Conf 'Life Assessment and Life Extension', VGB-EPRI-KEMA-CRIEPI, The Hague, 1988
3 Poloni M., D'Angelo D., Jovanovic A. & Maile K., Fuzzy analysis of material properties data, 20 MPA Seminar, Stuttgart, October 1993
4 Neubauer B., Bewertung der Restlebensdauer zeitstandbeanspruchter Bauteile durch zerstörungsfreie Gefügeuntersuchungen, 3R International 19 (1980), H11, pp 628-33
5 VGB Technical Report VGB-TW-507, Guideline for the Assessment of Microstructure and Damage Development of Creep Exposed Materials for Pipes and Boiler Components, VGB, Essen, 1992
6 Auerkari P., Borggren K. & Salonen J., Reference Micrographs for Evaluation of Creep Damage in Replica Inspections, Nordtest Report 170, 1992
7 Carruthers R. & Day R.J., The Spheroidisation of some Ferritic Superheater Steels, CEGB Report SSD/NE/R138, 1986
8 Brear J.M., Holdsworth S., Seco F. & Tack A., Mechanistic creep models for 2¼CrMo welds and parent metal, ERA European Conf 'Life Assessment of Industrial Components and Structures', Cambridge, October 1994

9 Benvenuti A., Ricci N. & Fedeli G., Evaluation of microstructural parameters for the characterisation of 2¼CrMo steels operating at elevated temperatures, Proc 4th Int Conf 'Creep and Fracture of Engineering Materials and Structures', Swansea, April 1990
10 Masuyama F., Nishimura N. & Haneda H., Metallurgical life assessment for 2¼CrMo hot reheat pipe welds, Mitsubishi Heavy Industries Ltd, Nagasaki, 1987
11 Brear J.M., de Araújo C. & Auerkari P., Metallographic techniques for condition assessment and life prediction in SP249 Guidelines, 20 MPA Seminar, Stuttgart, October 1994
12 British Standard Published Document PD 6493:1991, Guidance on methods for assessment of the acceptability of flaws in fusion welded structures, BSI, London, 1991
13 British Standard Published Document PD 6539:1994, Methods for the assessment of the influence of crack growth on the significance of defects in components operating at high temperature, BSI, London, 1994
14 Drubay B., Moulin D., Faidy C., Poette C. & Bhandari S., Defect assessment procedure: a French approach for fast breeder reactors, SMiRT-12, August 1993, Paper FG15/1, pp 139-144, Elsevier Science BV
15 Nuclear Electric Assessment Procedure R6, Assessment of the integrity of structures containing defects (Rev 3), Nuclear Electric, Berkeley, UK, 1986
16 Nuclear Electric Assessment Procedure R5, An assessment procedure for the high temperature response of structures (Issue 2), Nuclear Electric, Berkeley, UK, 1994
17 EPRI Project RP-2253-7, Remanent life of boiler pressure parts: crack growth, EPRI, Palo Alto, USA, 1988
18 Riedel H., Fracture at High Temperatures, Springer-Verlag, Berlin, 1987
19 Piques R., Molinie E. & Pineau A., Comparison between two assessment methods for defects in the creep range, Fatigue Fract. Eng. Mat. Struct., Vol 14, No 9, pp 871-885, 1991

Fig. 1: Microstructural degradation (schematic): ferrite and lamellar pearlite; spheroidisation begun, with carbides precipitating on grain boundaries; intermediate stage of spheroidisation, pearlite starting to spheroidise but lamellae still evident; spheroidisation complete but carbides still grouped in their original pearlitic grains; evenly dispersed carbides (no trace of the prior ferritic/pearlitic structure); evenly dispersed carbides, some having grown through coalescence with others.

Fig. 2: Harmonised cavitation assessment schemes: comparison of damage classes (no damage, isolated cavitation, orientated cavitation, microcracks, macro cracks) across the Neubauer (A-E), NT TR 170 (0-5), VGB-TW 507 (0-5) and ISQ (0-4) scales.

Fig. 3: Rules for 'A'-parameter determination: classification of the grain boundaries crossed by a reference line as damaged or undamaged.

Fig. 4: Relationship between damage and life fraction: damage class versus life fraction (%) for ductile, intermediate and brittle 1-1.25%CrMo steels.

Fig. 5: Experimentally derived softening curve for 2¼CrMo steel: hardness (HV20) versus log time for normalised and tempered material tested at 550 to 675 °C.

Fig. 6: Normalised stress-rupture plot for 2¼CrMo parent and weld material, compensated for hardness: f(T,t) = log(t_r) - 23030/T(K) versus stress/hardness, with separate trends for parent metal and weld metal.
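The hardness-compensated rupture parameter plotted in Fig. 6 can be evaluated directly. A sketch, taking only the constant 23030 from the figure; the rupture time and temperature below are illustrative inputs, not data from the plot:

```python
import math

# Sketch of the normalised stress-rupture parameter of Fig. 6:
#   f(T, t) = log10(t_r) - 23030 / T(K)
# plotted against stress/hardness.  Inputs are illustrative.

def rupture_parameter(t_r_hours, T_kelvin):
    return math.log10(t_r_hours) - 23030.0 / T_kelvin

print(rupture_parameter(1e5, 823.0))  # e.g. 100 kh rupture at 550 C
```

For these inputs the parameter comes out near -23, inside the range spanned by the plot's ordinate.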

Fig. 7: Relationship between degradation and life fraction: degradation class versus life fraction (%) for ductile, intermediate and brittle 1-1.25%CrMo steels.

Fig. 8: Weld failure location predictor diagram for 2¼CrMo steels: hardness of weld or CGHAZ versus hardness of parent, separating parent material failures from weld/HAZ failures.

Fig. 9: Damage and degradation interactions: damage/degradation/life map for 1-1.25%CrMo steels (damage class versus degradation class).

Fig. 10: Failure assessment diagram: two-parameter failure assessment diagram (Kr versus Sr) showing the brittle fracture assessment line, the plastic collapse cut-off and the structurally disallowed region, with regimes for (s) stable defects, (i) initiating defects and (g) growing defects.

Fig. 11: Life limiting processes associated with defects: blunting of an initially sharp crack, formation of a short crack when the crack opening reaches a critical value, and creep crack growth; relative timescales for (i) crack initiation, (g) crack growth and (c) reduction in critical crack size with global deterioration, together with the remaining life of the ligament ahead of the crack, as a function of crack size and material degradation (t_f = time to fast fracture, t_cd = time to failure by continuum damage).

INTELLIGENT SOFTWARE SYSTEMS FOR REMAINING LIFE ASSESSMENT: THE SP249 PROJECT

A. Jovanovic, M. Friemann
MPA Stuttgart, Stuttgart, FR Germany

1. Introduction
Following a proposal of 13 European partners, a SPRINT Specific Project (designated SP249) has been approved and is currently running (1993-95) under the coordination of MPA Stuttgart. The main goal of SP249 has been to enhance the transfer of component life assessment (CLA) technology for high-temperature components of fossil fuel power plants, assuring diffusion of modern state-of-the-art plant CLA technology among power plant utilities and research organizations in Europe. The project addresses pressure parts operating at elevated temperature (in the creep and creep-fatigue regime) in fossil power plants (Brear, Jovanovic, 1992). The partners are: AZT (Allianz), Ismaning, FR Germany; EDP, Lisbon, Portugal; EdF, Paris, France; Endesa, Ponferrada, Spain; ERA Technology, Leatherhead, UK; ESB, Dublin, Ireland; GKM, Mannheim, FR Germany; ISQ, Lisbon, Portugal; IVO, Vantaa, Finland; Laborelec, Linkebeek, Belgium; MPA Stuttgart, FR Germany; Tecnatom, Madrid, Spain; and VTT, Espoo, Finland.

2. Base line of the project
In order to achieve its main goal, the SP249 project foresees (Figure 1) the development and use of two main elements, namely: a) a set of SP249 CLA Generic Guidelines (a "paper summary" of the technology to be transferred), and b) a knowledge-based system (SP249 KBS) for enhancing the transfer of the CLA technology.

Figure 1: Two basic elements of SP249 (the SP249 CLA Guidelines and the SP249 Knowledge Based System (KBS))

The basic idea of the project organization (Figure 1) is that the knowledge coming from the power plant should first be summarized in the form of guidelines (paper) and then transferred into the KBS, which serves as the main tool for transfer and application of the target CLA technology. The KBS technology thus appears in the SP249 project at two levels:
• as a part of the modern CLA technology
• as the principal means or "vehicle" of the CLA technology transfer.
The "KBS-supported" use of the guidelines and the corresponding training of end-users' personnel are major issues in the project. The main recipients (users) of the SP249 guidelines and KBS will be utilities in Belgium, Finland, France, Germany, Ireland, Portugal and Spain.

3. SP249 CLA Generic Guidelines
The bulk of the CLA technology to be incorporated into the SP249 KBS and transferred in the framework of SP249 is summarized in the SP249 CLA Generic Guidelines. Their overall content was identified during the definition phase of the project (1991-92) by means of an inquiry performed among the partners. A series of more than 40 issues was examined and ranked according to the recipients' priorities. These issues, together with some others (e.g. weld repair guidance, off-line crack sizing, the advanced assessment route), have been selected for the contents of the CLA/KBS technology package of SP249. More details about the SP249 CLA Generic Guidelines, their organization and contents, are given in the paper of Brear and Jones (1994).

4. SP249 Knowledge-based System (KBS)
Exploitation of KBSs within CLA technology has been successfully demonstrated in programs such as those presented at the ACT Conference (1992) and, of particular relevance for the SP249 project, by the ESR project of MPA Stuttgart (Jovanovic, Maile, Kautz, 1992; Jovanovic, Maile, 1992). A more detailed review of "CLA-related" knowledge-based (expert) systems is presented in the work of Jovanovic and Gehl (1991).

The SP249 CLA Generic Guidelines are being implemented in a knowledge-based system (KBS) built on top of commercially available software (a more detailed description is given in the work of Jovanovic and Friemann, 1994), based on the novel approach of mirroring the contents of the hypermedia documentation bases in the expert system shell (Jovanovic, Friemann, 1994). The CLA technology coming from different sources will thus be "packed" into a framework similar to the one used in the MPA ESR system (Jovanovic, 1992). The whole system is designed as an engineering "tool box": a conglomerate of single software modules controlled by an overview module. All hypermedia-based modules and all other modules in the SP249 KBS are controlled by the expert system shell. The shell is thus "aware" of the contents of the documentation bases as well as of the relations among the single documents (and even parts of single documents). The hypermedia-based parts/modules cover the background information built into the system: the CLA guidelines, case studies, frequently used codes, standards and other documents. Object-oriented programming (OOP) appears both at the level of the overall SP249 KBS architecture (each part of the system is an object exchanging messages with the others) and at the level of its single parts. The architecture allows new modules to be introduced, or existing ones reorganized, at any time.

In the particular case of the SP249 project, the SP249 KBS will cover:
a) decision making according to the SP249 CLA guidelines (decision aid for making the "3R decision": replace, repair, run). This decision is based partly on regulatory guidelines and partly on the experience and heuristic knowledge incorporated into the CLA guidelines.

b) recommendation regarding the annual inspection (revision);
c) damage analysis.

Using the system, the user is expected to be supported by an "intelligent environment", helping him to calculate, retrieve necessary standards, retrieve data (about material, component, etc.), obtain advice and, finally, find an optimized solution for his problem (Figure 2).

5. Strategic goals of SP249: a European de facto standard
The project has defined the principal levels of the CLA-related problem tree, identifying as main causes the "uneven distribution of CLA technology" and the "uneven distribution of experts/resources" (in Europe, and probably elsewhere too). This means that there is a lack of use of advanced (existing!) CLA technology; in other words, a need to consolidate and adapt this technology has been identified. The KBS technology has been identified as a modern and appropriate one (Brear, Jovanovic, 1992) and, for SP249, it has been necessary to bring the technology into "computer digestible form" in order to allow the incorporation of the CLA technology into the knowledge-based system. It is therefore clear that the project must address the issue of how the technology can be brought into use at the recipients of the technology; in other words, the project must address the issue of modern and successful inter-European technology transfer. These tasks are recognized to be considerable exercises, needing frequent review to ensure success, particularly in the way of guidelines and procedural documents that will pave the way towards the de facto European standard desired for plant life management (in terms of CLA).

6. Expected benefits
SP249 will facilitate wider exploitation of CLA technology in the Union, leading to:
• optimized plant component life assessment practice
• improved plant safety
• reduced environmental damage
• increased economy of plant operation/maintenance

but it is also realized that there will be a number of spin-off benefits. These would include the following:

• The technology facilitates life extension of aging plant, thereby maximizing the potential of capital investment and reducing forced outages. There is an estimated 4 billion ECU investment in boiler plant in Europe. Taking the significance of critical high temperature components in retirement decisions, and assuming that 20% of plant may have its life extended by 10 years, a financial benefit to European industry of 200 million ECU per year is estimated.

• Life management plays an important role in predictive maintenance strategies. Experience in the USA has shown that predictive maintenance can save costs by reducing unscheduled down time and the associated lost production/alternative generation costs, reducing inspection resources, optimizing refurbishment schedules, and providing the ability to make run/retire decisions without employing specialists. Drawing parallels in Europe, savings of 5.5 billion ECU per year are estimated. Further financial benefits accrue from optimized replacement and refurbishment planning, reduced downtime, and increased plant efficiency and reliability.

• Life management strategies have environmental benefits arising from obtaining maximum availability of high efficiency plant and from optimizing repowering strategies. With an estimated 70% of fossil fired units in excess of 20 years old, repowering options offer a means of meeting European power demands into the next century while making protection of the environment a high priority. (Repowering normally involves the integration of combustion turbines in existing steam cycles, giving major efficiency advantages.) Life predictions allow the utility to assess where repowering will be most cost effective, based on the condition of the overall plant, and to plan refurbishment to optimize plant utilization and capital investment. This has significant European impact, and SP249 provides help in achieving this goal.

• The SP249 project exploits and disseminates technology developed under CEC research initiatives such as BRITE-EURAM and ESPRIT. It also serves as a vehicle for the practical application and demonstration of knowledge-based system technology for life assessment, and is a forerunner for the exploitation of the technology for other components, in other utilities and other industrial sectors. It will also consolidate an advanced level of European expertise in the field, to compete effectively with Japanese and American initiatives (EPRI, CRIEPI) and other industrially funded research.

Figure 2: "Intelligent" environment for the CLA analysis in SP249 (schematic screen showing the SP249 Generic Guidelines and Case Studies, the Advanced Assessment Route, and the look-up, calculation and material data modules exchanging data and related documents)

For a single utility company, in the long term, the expected benefits can be seen on an example (here the Compostilla plant of the Endesa utility company in Spain): if it is possible to extend the operating life of the Compostilla fossil power plant by 5 years, the extra sales would be 1312 x 10^3 kW x 6000 hours/year x 1.5 PTA/kWh x 5 years = 59,040 x 10^6 PTA, about 400 x 10^6 ECU.

Optimized component life assessment leads to reduced risks of both large scale catastrophic failures and of small scale but extended duration environmental degradation (use of new sites for new plant, higher emissions, etc.). Such factors, though of great importance, are not easy to quantify. Besides, individual utilities expect to benefit from simplified maintenance/inspection planning resulting from higher precision component life predictions, and from an ability to deploy precious human resources more effectively. Both are highlighted in the utility questionnaire responses (Brear, Jovanovic, 1992).

In the short term, the expected benefit would be a decrease in maintenance costs. The total annual maintenance cost for the Compostilla power plant (considering a maintenance cost of 0.30 PTA/kWh) would be 0.30 PTA/kWh x 1312 x 10^3 kW x 6000 h = 2,360 x 10^6 PTA, about 18 x 10^6 ECU. If it is possible to decrease this cost by 2% using SP249, the annual benefit would be 0.02 x 18 x 10^6 ECU = 360,000 ECU each year in the Compostilla power plant. Endesa has 4 fossil power plants in addition to the Compostilla power plant.

7. Current status of the SP249 KBS
The status of the project can be measured according to the achievements on the major deliverables of the SP249 project. These are:

• Generic Guidelines for CLA produced. The bulk of the guidelines was produced in November 1993, but some of the most important guidelines are yet to be produced (e.g. the crack assessment one).

• SP249 KBS. The first version of the system was produced and distributed to the partners in May 1994. The partners' comments and wishes are currently being implemented, and bugs eliminated. Up to the midterm of the project the following modules have been programmed and implemented into the SP249 KBS:
- overall structure of the system
- object management modules
- Advanced Assessment Route
- case history management (with about 100 case histories)
- documentation management (with all CLA Generic Guidelines and the relevant DIN, TRD, BSI, ASME, VGB and NT standards)
- material database (with the relevant ISO, ASTM and other materials)
- 'A'-parameter calculation
- hardness calculation
- TULIP (tube life prediction)
- case history selection and management
- crack dating
- SP249 remanent life calculation
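The arithmetic behind the Compostilla example above can be checked directly; a sketch using only the figures quoted in the text:

```python
# Arithmetic of the Compostilla example: extra sales from 5 years of
# life extension, and the 2% maintenance saving.  All figures are the
# ones quoted in the text.

capacity_kW = 1312e3
hours_per_year = 6000

# Life extension benefit: 1.5 PTA/kWh over 5 extra years
extra_sales_PTA = capacity_kW * hours_per_year * 1.5 * 5
print(extra_sales_PTA / 1e6)  # millions of PTA (about 400e6 ECU)

# Maintenance: 0.30 PTA/kWh; the text converts this to about 18e6 ECU
annual_maintenance_PTA = 0.30 * capacity_kW * hours_per_year
annual_maintenance_ECU = 18e6
print(annual_maintenance_PTA / 1e6)  # about 2,360 million PTA per year
print(0.02 * annual_maintenance_ECU)  # 2% saving, ECU per year
```

The computed values reproduce the text's figures: roughly 59,040 million PTA of extra sales and a 360,000 ECU annual maintenance saving.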

- inclusion of oxidation effects
- SP249 material database
- inverse stress calculation TRD (new version being programmed)
- creep and fatigue usage calculation TRD (new version being programmed)
- cavity density
- linear extrapolation according to Generic Guidelines 003 and 004
- chemical composition influence on remaining life

The modules yet to be developed (still being programmed, with the corresponding final guideline yet to be produced) are:
- defect assessment

• Training materials for CLA and for the SP249 KBS. The first (generic) training was performed in June 1994. Further training has been planned per partner, with on-site training (8 one-week sessions in 1995, at seven different power plants). The main reason for the early start of the training activity is the need for direct interaction between the developers of the guidelines and the system, on one side, and the end-users, on the other.

• SP249 KBS implemented at all participating utilities. The first version has been installed for testing purposes. Both installation and corroboration will follow the pattern initially foreseen for the Carregado power plant ("worked examples"). Furthermore, an 'Observer group' of over 25 European and world experts has been established in order to ensure widespread dissemination of the experience gained within SP249.

8. Structure of the system

8.1 The SP249 software baseline
The system is built on the basis of five development tools¹ in the MS Windows² 3.1x environment. The SP249 KBS is a modular object-oriented software system; data, information and knowledge in the system knowledge base are also handled and stored as objects. The integration of these tools is realised via the Windows system facilities DDE, OLE, DLL³ and launching with command strings. In contrast to more conventional systems, the user interface is not presented by a single application: all the development tools were used to create single modules and are therefore part of the user interface. Based on the experience with end-users, the appearance and usage of the software is similar to standard MS Windows applications.

¹ KAPPA-PC, Guide, MS Visual Basic, MS C++, MS Access. See [3] for details.
² MS, Microsoft and Windows are trademarks of Microsoft Corporation.
³ DDE = dynamic data exchange, OLE = object linking and embedding, DLL = dynamic link library.

8.2 Software structure
The SP249 software system is organised as a "conglomerate" of software modules (performing

specific tasks) linked to the kernel of the system, represented by the SP249 Workbench. This structure is shown in Figure 3, while the main tasks of each module are given in Table 1.

Table 1: Single modules and their tasks in the SP249 KBS

Module                    | Task
Workbench                 | overview/control of the modules, logging of the session
Advanced Assessment Route | advice for the next action to the user
Documentation Base        | background information and on-line documentation
Case Studies              | background information for support in decision making
Single Calculations       | calculation of single results (as input for the AAR)

The modules communicate with the kernel module, called the Workbench, mainly via DDE. This communication contains the main results and other status information. The inputs and outputs of the calculations/analyses performed are also handled as objects; these "calculation/analysis objects" are then attached to the various "plant objects". E.g. the TRD calculation can be performed on a component, while the remaining life calculation based on hardness measurements can be performed on a location. A list of the available analyses/calculations is given in section 10.

Data are stored as objects in the SP249 knowledge base in a hierarchical structure, having Europe as root. Further levels in the hierarchy are "country" (e.g. Germany, Spain, etc.), "plant" (e.g. GKM, Carregado, etc.), "block" (e.g. Block No. 1, Gruppo No. 1, etc.), "system" (e.g. main steam pipe, superheater header, etc.), "component" (e.g. T-piece, elbow, etc.) and "location" (e.g. weld x upper side, location n, etc.). The hierarchy is schematically shown in Figure 4.

Figure 3: Overall structure of the SP249 KBS (single calculation modules linked to the system kernel/Workbench, the intelligent flowchart of the Advanced Assessment Route, case study selection, and the hypertext environment of the documentation base with single cases and single documents)

8.3 Data structure
The hierarchical structure of the "plant objects" simplifies their recovery. Furthermore, the software system provides the means to store all relevant data, as well as the material data. It also allows the principle of "inheritance" to be applied: e.g. if one defines a tubing to be built of material 13 Cr Mo 4 4, this will cause all parts of this tubing to have this material property as a default. Applying inheritance helps to avoid unnecessary user input and facilitates the storing of data.

Figure 4: Example of the object structure (Europe > Germany, Spain, England > GKM, Endesa, ERA > Block 01, Block 10 > Main steam pipe > T-piece, Y-piece > Weld 01)

9. General use of the system
Use of the system is schematically shown in Figure 2. An SP249 user has to solve a problem related to a high temperature power plant component. In order to solve it, the SP249 project offers him two tools: the SP249 Generic Guidelines for component life assessment (CLA) in "paper form", and the computer software SP249 KBS. The structure and use of the Guidelines "on paper" is described in the work of Brear (1994). The software part offers essentially three types of support:

a) "Look up" (and store) type of support: the user can look up (and store) information in the case study collection, in the Advanced Assessment Route (AAR) and in the calculation part of the system. He can also look up the guidelines and other documents like standards, norms, etc.

b) "Calculate" type of support: based on what the user has found in the background information modules, he can then proceed with analyses or calculations.

c) "Get an advice" type of support: in addition to a) and b), the user can get advice (or a "second opinion") from the AAR. Besides having a generic flowchart, the user can also have his own "specific route", selected correspondingly to his case and data. He can store this specific route and use it later for further analysis. The work done during a session is logged in a transcript, which can be stored and printed out.

10. Calculation and Analysis Modules

10.1 Use of calculations and analyses
Several calculations can be used within the system. The calculations are developed as separate modules which can be used in three ways:

1. Started from the Windows Program Manager or the SP249 System workbench (see Figure 10) as an "unbound" (separate) application:
• the user has to input all necessary data;

• the user needs to take care of saving the input and results in a file.

2. Started from the SP249 System as a so-called "bound" calculation (see Figure 12):
• the necessary basic data is passed to the calculation from the system kernel;
• the user does not need to care about data storing;
• the result is returned to the kernel, which will use it for further examination.

3. Started from the AAR in the ExpertChart application as a "bound" calculation (Figure 2):
• as in the second way;
• in addition, the result is also returned to the AAR module, which will use it for further examination and advice.

The calculation/analysis modules can also be used as single applications in the MS Windows environment. The modular construction was based on the end-users' request to reflect the way they work. a) On the one hand, the modules have to deliver single calculation results; the end-users therefore use them like pocket calculators. b) On the other hand, when the system is used as described in the introduction, the calculation modules serve the higher goal of deciding upon "run/repair/replace". Therefore the coupling of all single modules has been automated: the main task of these modules is integrated use, with full integration of data and program start-up.

10.2 Single calculation/analysis modules
Based on the idea of the engineering "tool box", the object-oriented architecture allows single modules to be introduced. The modules developed and integrated into the system are listed in chapter 7. Here, the TRD inverse design calculation and the 'A'-parameter module are described as representative examples. Moreover, the SP249 material database, as the supporting module for the calculations, is described.

10.2.1 TRD inverse design calculation
The TRD calculation module calculates the life fraction based on inverse design following the German TRD design code. The stress calculation is possible for straight tube, T-piece, Y-piece, elbow and header geometry. Creep and fatigue are taken into account, as described in the TRD code; the calculation of the usage due to cyclic loading is possible in three different ways. The module uses the SP249 material database (described in section 10.2.3) as an underlying module. p-T tables can be imported from an on-line data retrieval system.

10.2.2 'A'-parameter calculation
The 'A'-parameter is defined as the fraction of cavitating grain boundaries encountered along a line parallel to the direction of maximum principal stress. After performing the measurements with an optical microscope, the values have to be typed into the software. The software then calculates the remaining life and the necessary statistical data, which show the accuracy of the measurements. In the absence of a crack the life fraction LF is calculated; in the presence of a crack the residual ductility fraction, used in the crack growth rate equation, is calculated.

The parameters η, μ and Λ, on which the lifetime calculation is based, are included as standard values in the program. Figure 6 shows the appearance of the module on a large screen.

Figure 5: The TRD inverse design calculation

Figure 6: The 'A'-parameter calculation window
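As an illustration of the statistics such a module reports (a mean value of A and its standard deviation over the measurement lines), the following is a minimal sketch with hypothetical microscope counts; the function name and the data are invented for illustration and are not taken from the SP 249 implementation:

```python
# Illustrative only: the statistics reported by an 'A'-parameter module.
# For each measurement line parallel to the maximum principal stress,
# A = (cavitated grain boundaries intersected) / (total boundaries intersected).
from statistics import mean, stdev

def a_parameter(cavitated, total):
    """Fraction of cavitating grain boundaries along one measurement line."""
    return cavitated / total

# Hypothetical optical-microscope counts: (cavitated, total) per line.
lines = [(3, 40), (5, 38), (2, 41), (4, 39), (6, 42)]

values = [a_parameter(c, t) for c, t in lines]
mean_a = mean(values)    # mean value of A over all lines
sd_a = stdev(values)     # scatter, indicating the accuracy of the measurements

print(f"mean A = {mean_a:.3f}, standard deviation = {sd_a:.3f}")
```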

10.2.3 Supporting module - the material database

The SP249 material database comprises data for most of today's materials used for high temperature components in power plants. All data are stored in twelve data tables. The contents of the data tables are: title and description of the material, source specification, range for which the data are expected to apply, tensile data, creep strengths, average rupture strengths, allowable rupture strengths, rupture parameters, stress dependency of rupture life (explicit), stress dependency of rupture life (parameters), rupture strength - creep strength relationship, and physical property data. For convenience and to help comparison, the materials are grouped into families, given by standards like ISO, AFNOR, ASTM, BS, DIN and others.

Given that the users of the SP249 material database are subject to different statutory requirements, the following approach has been adopted. Since the ISO data provide the closest existing approach to a consolidated data set, the information from each relevant summary document has been included into a common format, with blank fields where data are not provided/available. Where possible, the blank fields in the ISO data sheets are filled with the best available data from elsewhere, as well as some data from other sources.

11. Hypertext modules

The hypertext modules display documents from different sources for different goals:

• frequently used codes and standards allow explanation of actions performed in the SP 249 system;
• CLA guidelines for advanced assessment, produced in this project, allow explanation of actions performed in the AAR;
• case studies - worked examples and failure cases - support the user in decision making, where the system cannot decide on the basis of strict rules.

11.1 Documentation base

The hypertext based environment allows the user to view documents on-screen, easily navigating through the document structure, expanding and collapsing documents to display different levels of detail. Figure 7 shows the main page with the hyperdocuments stored in the system, while Figure 8 shows one single document. The hypertext documents are linked where appropriate: the user simply needs to click on the text where another document is referred, and the system will display the mentioned document and scroll to the appropriate chapter, formula or table, which is then displayed in the hypertext environment.

11.2 Case studies

The SP249 KBS contains a series (currently 102) of case studies (histories) describing interesting cases of high temperature component damage and/or life analysis. These case histories are stored in a format agreed by the project and are managed by the corresponding case study selection module. The search of case studies is carried out within the dialog box shown in Figure 9. A search is started by selecting an item in the list of materials and another in the list of components. The two "dimensions" (materials/components) are hierarchically structured, with classes and subclasses forming a hierarchical structure; all cases within the selected class and their substructure will be found. The matrix contains typical combinations of component types and materials. A second way of searching is a selection by keywords. The number of entries found is shown in the upper right corner, while the case names are listed below it. The user selects a single case out of the range of listed cases.
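The class-plus-substructure search can be sketched as follows; the hierarchy contents and the case records here are hypothetical stand-ins, not the actual SP249 matrix:

```python
# Hypothetical sketch of the case-study matrix search: selecting a class in
# the material hierarchy finds all cases filed under it or its sub-classes.
TREE = {
    "Ferritic Steels": {
        "Low Alloy Steels": {"1-1.25%Cr steels": {}, "0.3%Mo steels": {}},
    },
    "Austenitic Steels": {},
}

# Hypothetical case records: (case name, material class).
CASES = [
    ("CS001", "1-1.25%Cr steels"),
    ("CS002", "0.3%Mo steels"),
    ("CS003", "Austenitic Steels"),
]

def find_subtree(node, target):
    """Locate the sub-dictionary stored under the target label, if any."""
    for label, child in node.items():
        if label == target:
            return child
        hit = find_subtree(child, target)
        if hit is not None:
            return hit
    return None

def labels_below(node):
    """All labels in a subtree."""
    out = set()
    for label, child in node.items():
        out.add(label)
        out |= labels_below(child)
    return out

def search(material):
    """Return all cases filed under the selected class or its substructure."""
    sub = find_subtree(TREE, material)
    wanted = {material} | (labels_below(sub) if sub is not None else set())
    return [name for name, mat in CASES if mat in wanted]

print(search("Low Alloy Steels"))  # both cases in the substructure are found
```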

The usefulness of the system was further enhanced by linking the case studies with the AAR. This is realised by means of keywords attached to single steps of the AAR and keywords attached to the case studies. The case studies are in this way used as an additional explanation of how single steps in the AAR have to be performed.

Figure 7: SP 249 Documentation base

Figure 8: Single document, displayed in the hypertext module

Figure 9: Dialog box of the case study matrix

12. The SP 249 Workbench

12.1 General

After starting the SP 249 system from the Program Manager in Windows, the 'Workbench' window (Figure 10) appears. An on-line help using the Windows help facility is included in the system; the kernel help file explains how to use the overall system. Each of the modules also has its own help file, which explains how to use the module (explanations for the assessment side are handled in the CLA guidelines).

Figure 10: The start-up and main window

12.2 Working with SP 249 objects ("plant objects")

As described previously, the system works on the basis of an object structure. The user can open an existing file or create a new one; this file contains the user-defined structure of objects and their data, given by the user. The next step would be to select the object to analyse. The user would then probably edit the object structure, to add data or to add new objects. For the overall analysis he would then start the Advanced Assessment Route. In parallel to these steps the following parts of the system are accessible:

• the hypertext module, to review documents and case studies;
• auxiliary tools (e.g. report generator, transcript).
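The object structure, together with the "inheritance" principle mentioned earlier (data defined on a parent object being reused by the objects below it), might be sketched like this; the class design and the attribute values are invented for illustration:

```python
# Invented illustration of a "plant object" tree with attribute inheritance:
# a value not stored on an object is looked up on its ancestors, so data
# entered once at plant level is reused by the components below it.
class PlantObject:
    def __init__(self, name, parent=None, **data):
        self.name, self.parent, self.data = name, parent, data

    def get(self, key):
        """Return an attribute, inheriting it from an ancestor if unset here."""
        node = self
        while node is not None:
            if key in node.data:
                return node.data[key]
            node = node.parent
        raise KeyError(key)

# Hypothetical attribute values on a plant and one of its components.
plant = PlantObject("Carregado Unit 5", design_pressure_bar=170)
header = PlantObject("Header Body", parent=plant, material="X20CrMoV12-1")

print(header.get("material"))             # stored on the header itself
print(header.get("design_pressure_bar"))  # inherited from the plant object
```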

Figure 11 shows the dialog for editing the object tree. The user can add, rename, delete and edit objects. If the user wants to analyse an object in the object base, he needs to select it first; he would therefore use the 'Object Selection' dialog.

Figure 11: Editing the 'object' base

12.3 Use of single calculations from the SP 249 Workbench

As described previously, the execution of a single calculation in the SP 249 KBS is also possible. The 'Execute' menu as shown in Figure 12 lists the available calculations (TRD Calculation, Temperature/Life/Oxidation, Extrapolation, Cavity Density, Crack Dating, Hardness, A-Parameter, Crack Assessment, Tube Life Prediction, SP249 CLA calculation (+ CIF), Advanced Assessment Route). If there is no object selected, the user can only start an "unbound" calculation; after confirming the "unbound" mode start-up, the system will launch the corresponding module. If there is an object selected (displayed in the title bar of the Workbench window, here 'Header Body'), the user will be asked if he wants to start a calculation for the selected object ("bound") or an independent calculation.

The Advanced Assessment Route (AAR) module is the key module in the SP 249 System: it combines all the calculation modules into an overall advanced assessment route, and it supports the user in deciding upon the basic goals of the SP 249 System application (run/repair/replace).

Figure 12: Selecting a calculation to perform on the selected 'object'

13. The SP 249 Advanced Assessment Route

The AAR is implemented as an active flowchart. Boxes are activities, which can have sub-activities. After having started an activity with sub-activities, the first sub-activity can be started; dependent on their preconditions, other activities can be started. Activities can be "started", "informed" and "ended". The user has to click on the green triangle to start the corresponding activity. Information means the entry of a value that describes the result of an activity; the "information" can also be performed by a calculation module, executed through the start of the activity. Activities that are finished can be ended by a click on the green triangle in the lower left. Each of the boxes can be connected to a node in a hypertext document: dependent on the user's task, he can request appropriate pieces of hypertext by clicking on the description area of an activity. Via corresponding keywords, a list of supporting case studies (see section 11.2) can be reviewed. Facts coming as input during the session are combined with the rules stored within the AAR, and a recommendation is produced for each activity/sub-activity.

Advantages of such an appearance are:

• complex activities are divided into hierarchically ordered, small entities of information, so one can limit the portion of details one wants to look at;
• compared to rule based systems, the process is presented in a very clear way to the end-user;
• the AAR automatically lays out the activities and the connections between them;
• the AAR stores every user input and uses it for calculations about the possible next steps in the execution of one activity.

13.1 Integrated use of the AAR

Figure 13 shows the main level of the "Advanced Assessment Route" in SP 249. The main level (activity) consists of 6 sub-activities; these activities represent Phases (or 'main' activities) 1 to 5 of the AAR. The description of an activity includes an activity number and the CLA guideline attached to that activity. Figure 14 shows an example of the AAR; the marked activity shows the coupling of the AAR and the calculation modules. When "qualitative damage assessment is not possible or inspection interval not adequate", the next step will be performing the A-parameter method or the cavity density method; dependent on the user's decision, one of the methods will be performed. If that was the A-parameter, the user would then do the A-parameter measurements, in this particular case the interpretation of surface replicas. The following step, the calculation of remaining life based on the A-parameter, would then be performed with the help of the SP 249 KBS calculation modules and the generic guideline on CLA. This calculation module is started by the system when the user starts the corresponding activity. When the life fraction based on A-parameter measurement is calculated, the result is sent to the SP 249 system kernel; the kernel then informs the AAR about the result and a "history" event is produced. The AAR then includes the life fraction based on the A-parameter into its interpretation process.
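The activity life cycle described above ("started", "informed", "ended", with preconditions gating what may start next) can be sketched as a minimal state machine; the activity names and the precondition rule below are hypothetical, not the actual AAR rules:

```python
# Hypothetical sketch of the active-flowchart idea: an activity is "started",
# then "informed" with a result (by hand or by a calculation module), then
# "ended"; preconditions decide which activities may start next.
class Activity:
    def __init__(self, name, precondition=lambda results: True):
        self.name, self.precondition = name, precondition
        self.state, self.result = "idle", None

    def start(self, results):
        if not self.precondition(results):
            raise RuntimeError(f"preconditions for '{self.name}' not met")
        self.state = "started"

    def inform(self, result, results):
        self.result, self.state = result, "informed"
        results[self.name] = result          # becomes a fact for later rules

    def end(self):
        self.state = "ended"

results = {}
replica = Activity("interpret surface replica")
a_param = Activity(
    "A-parameter method",
    precondition=lambda r: r.get("interpret surface replica") == "creep damage found",
)

replica.start(results)
replica.inform("creep damage found", results)
replica.end()
a_param.start(results)   # allowed: the replica result satisfies the precondition
print(a_param.state)
```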

14. Conclusions

The SP249 Project, the development of the SP249 knowledge-based system and its future deployment in power plants should help in achieving a series of economic and technical benefits: e.g. reduced costs of daily operation (specialists called only when necessary), reduced costs of scheduled inspections due to the optimized inspection strategy, reduced unplanned costs, improved availability of systems and plants, shorter and better utilized maintenance periods, and improved possibilities for the life extension of the plants.

The joint effort of the CEC and European industries (utility companies, consulting and research organizations), based on the large scale European application of KBS technology (the total value of the SP249 project is about 2.5 MECU), marks a milestone for KBS technology applications in the area of power plant operation and management in Europe. It opens the way for further applications in the area and establishes the KBS technology both as a part of the modern CLA technology and as a powerful vehicle for technology transfer.

From the point of view of the applied KBS solutions, the SP249 system is a modern, integrated, object oriented system. As indicated by Jovanovic and Bogaerts (1991), the conventional (production rule based) expert systems alone can deal successfully only with a very limited range of practical problems in the domain of CLA technology. They often tend to "block" the dialog at the moment when the user is not sure what to answer the system, either because he needs an explanation of the question or because he is asked to provide (to the system) some additional information which is currently not known/available. A possible answer to this "blocking" of the dialog and similar related problems characterizing conventional expert systems is to integrate tightly all additional system modules (e.g. databases, numerics including finite elements, etc.), so that the user is hardly aware that he/she is dealing with different kinds of software. This idea is the base line of the MPA approach called "KISS" - Knowledge-based Integrated Software Systems, or Knowledge-based Intelligent Integrated Software Systems - which has been applied in SP249, and it follows the same line of thinking which led to, e.g., Intelligent Databases (Parsaye et al., 1989), Intelligent Hypermedia Systems, and KBSs with Hypermedia Support.

Figure 13: AAR: highest level of the flowchart

Figure 14: Detail view of the AAR (A-parameter calculation)

15. Acknowledgments

The author wants to acknowledge herewith the precious help and collaboration of the partners in SP249 (in alphabetical order): ATT (Allianz), Ismaning, FR Germany; EdF, Paris, France; EdP, Lisbon, Portugal; Endesa, Madrid and Ponferrada, Spain; ERA Technology, Leatherhead, UK; ESB, Dublin, Ireland; GKM, Mannheim, FR Germany; ISQ, Lisbon, Portugal; IVO, Vantaa, Finland; Laborelec, Linkebeek, Belgium; Tecnatom, Madrid, Spain; and VTT, Espoo, Finland. Special thanks go to the persons involved in the project on behalf of their companies: to Dr. Hagn and Mr. Löwe; to Messrs. Brear, Bissell and McNiven and Mrs. Jones; to Mr. Thoraval and Mrs. Rivron; to Mr. Batista and Mr. de Araújo of ISQ; to Mr. Santos of Endesa; to Mr. Aguado of Tecnatom; to Dr. Vereist; to Dr. Rantala and Mr. Auerkari of VTT; to Mr. Kautz; to Messrs. Friemann and Kluttig of MPA Stuttgart; and to all others who have in one or the other form contributed to the realization of this large European project. The support of the Commission of the European Communities and of the staff of SPRINT-TAU, CEC, Luxembourg, is highly appreciated.

16. References

ACT (1992). Advanced Computer Technology Conference 1992, Phoenix, Arizona, US, December 9-11, 1992. Proceedings published by EPRI, Palo Alto.

Brear, J. M., et al. (1994). A consolidated approach to component life assessment in SP249. Proceedings of the 20th MPA Seminar, MPA Stuttgart.

Brear, J. M., et al. (1992). SPRINT Specific Project SP249 "Implementation of Power Plant Component Life Assessment Technology using a Knowledge-Based System", Phase I - Definition. Final report, ERA Technology, Leatherhead, UK.

Jovanovic, A. S. (1994). Overall structure and use of SP249 knowledge based system. Proceedings of the 20th MPA Seminar, MPA Stuttgart.

Jovanovic, A. S., Bogaerts, W. (1991). ESR - A Large Knowledge Based System Project of European Power Generation Industry. Proc. of the Avignon '91 Conference Expert Systems and their Applications, Avignon, 1991.

Jovanovic, A. S., Friemann, M., Maile, K. (1992). Practical realization of intelligent inter-process communication in integrated expert systems in materials and structural engineering. Expert Systems With Applications, Vol. 5: 465-477.

Jovanovic, A. S., et al. (1992). Hybrid knowledge-based and hypermedia systems for engineering applications. Proc. of the Avignon '92 Conference Expert Systems and their Applications (Vol. 2 - Specialized Conferences), Avignon, May 27-31, 1992.

Jovanovic, A. S., Gehl, S. (1992). Tutorial Nr. 13 "Expert Systems and AI Applications in the Power Generation Industry". Advanced Computer Technology Conference 1992, published by EPRI, Palo Alto.

Jovanovic, A. S., Kautz, H. R. (1991). Some expert systems for power plant components in Europe and USA. Proc. of the SMiRT 11 Post Conference Seminar Nr. 13, Hakone (Japan), Aug. 26-28, 1991.

Parsaye, K., Chignell, M., Khoshafian, S., Wong, H. (1989). Intelligent Databases: Object-oriented, Deductive Hypermedia Technologies. John Wiley & Sons, New York, 479 pp.

TRD - Technical Rules for Steam Boilers. Deutscher Dampfkessel-Ausschuß (DDA), Vereinigung der Technischen Überwachungs-Vereine e.V. (VdTÜV), Essen.




Pertti Auerkari


Espoo, Finland


Inspection planning for pressurised power plant components is traditionally directly or
indirectly subject to mandatory and non-mandatory rules or guidelines in Europe. The non-
mandatory approach is becoming overwhelmingly dominant and provides routes for
improved overall economy in the inspection policies. However, the trend cannot override
fundamental component life and safety related requirements. This creates both a need and an
opportunity for systematic methodologies to manage the process of inspection planning. For
certain aspects of the process such tools already exist and are widely used, because they have
been available and useful even for the mandatory inspections. These tools include eg project
type planning and execution timing for the actual off-line work, as well as data management
and mapping of the inspection results. Until recently, however, many of the decisions related
to actual content and timing of non-mandatory inspections were not subject to such
systematic tools or methodologies. This is about to change with the increasing integration of
inspection data management, inspection planning tools, and decision making methodologies.

Inspection planning for pressurised power plant components is traditionally directly or
indirectly subject to mandatory and non-mandatory rules or guidelines in Europe. The rules
typically specify or suggest some aspects of

• selection of the targets and methods as well as timing of inspections;
• extent of inspections and management of inspection results; and
• approach towards inspection results in terms of consequences.

The relatively stiff mandatory rules have generally best served their purpose in cases where
multiple failure mechanisms and relatively fast damage accumulation are not unreasonable to expect
(eg for boilers) or where specific additional safety concerns apply (eg for pressure vessels of
nuclear plants). However, although the mandatory rules often reflect some industry
experience, they tend to be the same for all possible cases and therefore do not generally
provide optimal inspection policies which can be expected to depend very much on particular
cases and plants.
Instead, the non-mandatory approach of condition based maintenance is becoming the
overwhelmingly dominant route towards improved overall economy of inspections and life
management. This implies that within certain limits, only plant and case specific data are
used to define the inspection strategies for the specified components. Since this cannot
override fundamental component life and safety related requirements, the background data
should exist in a form that can be used for such decision making, and the decision making
process should extend beyond the simple ways of the mandatory rules. At the same time, ever
increasing amounts of on-line and off-line measurement data are available throughout the
service life of a power plant. This creates both a need and an opportunity to extend the use of
systematic methodologies to manage the process of inspection planning (Jovanovic et al, 1992).
For certain aspects of the process such tools already exist and are widely used, because they
have been available and useful even for the mandatory inspections. These tools include eg
project type planning and execution timing for the actual off-line work, as well as data
management and mapping of the inspection results. Until recently, however, many of
these items had not been combined together. More importantly, the decisions related to
actual content and timing of non-mandatory inspections have not been subject to such
systematic tools or methodologies. This is about to change with the increasing integration of
inspection data management, inspection planning tools, and decision making methodologies.
Advanced inspection planning makes full use of such tools and appears to carry considerable
promise for avoiding unnecessary outages, inspections and repairs, and for focusing the
inspections towards controlled life management.
For the present purpose, such tools must make use of

• the rules to decide and quantify the order of merit between plants, if the analysis is
extended to account for inspections involving several plants;
• the engineering factors that define the present and foreseeable future condition of the components;
• the non-engineering factors that affect the final decision-making on inspection planning;
• the decision-making methodologies that create the logical flow of inspection planning
using the above rules and factors as framework.
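Purely as an illustration of how such factors might be combined by a decision-making tool, the following sketch ranks plants by a weighted merit score; the factor names, values and weights are invented and do not come from any of the cited rule sets:

```python
# Invented illustration only: combining engineering and non-engineering
# factors into a single inspection-priority score. Factor names, values
# and weights are hypothetical.
WEIGHTS = {"life_fraction": 0.5, "damage_class": 0.3, "availability_need": 0.2}

def priority(plant):
    """Weighted merit score; a higher value means inspect sooner."""
    return sum(WEIGHTS[k] * plant[k] for k in WEIGHTS)

plants = [
    {"name": "Plant A", "life_fraction": 0.8, "damage_class": 0.6, "availability_need": 0.2},
    {"name": "Plant B", "life_fraction": 0.4, "damage_class": 0.2, "availability_need": 0.9},
]

# Order of merit between the plants, most urgent first.
for p in sorted(plants, key=priority, reverse=True):
    print(p["name"], round(priority(p), 2))
```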

Below, the examples are mainly confined to the hot pipework of fossil fired power plants,
looking at the creep dominated regime of operating conditions. Also, the main domain of
consideration is limited to cases where predictive rather than corrective maintenance is likely.



The inspection planning process may initially involve decisions on timing between several
plants according to the plant characteristics and availability needs. However, here the view is
basically limited to the narrower task of planning and timing of inspections for one plant.
Then of the engineering factors to be considered, some are related to service loading, ie

• stresses, temperatures and time in service, and their distribution
• number and character of startup cycles and other major thermomechanical cycles; and
• environmental effects on components (oxidation, corrosion etc).

The environmental factors are generally not very significant for the pipework except for
indirect use in oxide thickness based temperature/time assessment and oxide dating of cracks.
The other service loading factors can be initially tackled by using stress analysis (to indicate
locations of interest if nothing else) and life consumption assessment methods analogous to
TRD 508, ASME CC N-47 or equivalent approaches. Even such a nominal type of
assessment is not possible without knowledge of another major group of significant
engineering factors, related to the material and component response to the service loading. In
its elementary form the required information includes the nominal materials data for the
given materials type, the geometry of the piping and the boundary conditions for the support
system. For actual life assessment type of evaluation, much more information is needed, such

• material, component and location characteristics in detail; and
• existing service-induced, manufacturing and assembly-related damage, ie results from
recent and earlier inspections as well as details on how these were carried out; such
measurements of damage indications can also include displacements, strains, hardness
values etc.
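A nominal life consumption assessment of the kind referred to above (a TRD 508-style summation of time fractions over the operating intervals) can be sketched as follows; the rupture-time function is a hypothetical Larson-Miller-type fit for illustration, not real material data:

```python
# Sketch of a nominal creep life-consumption check in the spirit of TRD 508:
# the consumed life fraction is the sum of t_i / t_R(stress_i, temperature_i)
# over the operating intervals. The rupture-time function below is a
# hypothetical Larson-Miller-type fit, NOT real material data.
def rupture_time_h(stress_mpa, temp_c):
    """Hypothetical creep rupture time (hours) from an assumed LM fit."""
    T = temp_c + 273.15                  # absolute temperature, K
    P = 24_000 - 60.0 * stress_mpa       # assumed Larson-Miller parameter
    return 10 ** (P / T - 20)            # assumed LM constant C = 20

def creep_life_fraction(intervals):
    """intervals: iterable of (hours, stress in MPa, temperature in deg C)."""
    return sum(h / rupture_time_h(s, t) for h, s, t in intervals)

# Assumed service history: 60 000 h at 55 MPa/540 C, 10 000 h at 60 MPa/550 C.
service = [(60_000, 55.0, 540.0), (10_000, 60.0, 550.0)]
lf = creep_life_fraction(service)
print(f"consumed creep life fraction: {lf:.2f}")
```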

These essential items are mostly not available without (possibly repeated) measurements or
inspections in plant, for which guidelines exist (eg VGB R509L, 1984; VGB TR 507, 1992;
Auerkari et al, 1992; Auerkari, 1995). A further set of important engineering factors is related
to the inspections or measurements themselves. Such factors involve

• component and location-specific accessibility for inspections;
• component and location-specific sensitivity and resolution of measurement; and
• quality, coverage and representativeness provided by the techniques that are used.

It is also important to realise that much of the available information on the engineering
factors and the state of the structures is patchy at best, and almost never as complete as eg the
life assessment theories would ideally require. However, there are often ways to overcome
such difficulties because


• not all factors are equal in value for actual inspection planning; for example, usually the
latest measurements provide more important information on the component condition than
earlier measurements or nominal (design) data;
• missing data can be often replaced by parallel information or from other experience; and
• inspection strategies can be designed to improve thin and patchy databases with minimum
effort in additional inspections.

Classical examples of rules on inspection timing can be seen in the applications of replica
inspections. In this case the typical extracted rules for planning of the next inspections are
based both on latest measurements in the inspections and on more general experience (Table 1).

Table 1. Example rules for timing of the next inspections, based on the most recent observed
class of creep damage; t = time in service. The numbers in parentheses for the
Neubauer/Nordtest case refer to recommendations after the service time exceeds 100 000 h.

Recommended maximum service time to next inspection:

Damage class               Neubauer/            Linear fraction    Linear fraction
                           Nordtest 010         (Shammas direct)   (evened lower bound)

1 (no cavitation)          no specified limits  7.33 t             4 t
2 (isolated cavities)      20 000 h (40 000 h)  1.17 t             1.5 t
3 (orientated cavitation)  15 000 h (30 000 h)                     2t/3
4 (microcracks)            10 000 h (20 000 h)  0.19 t             0.25 t
5 (macroscopic cracks)     0


As is seen from Table 1, one may need to select between alternative rules. This is not merely
a task to appease personal preferences but should reflect other available information or any
hints from the service or maintenance experience that could weigh in favour of a certain
approach. For example, if it is known that the location of current interest has not experienced
any significant additional loading excursions during its service time, it may be appropriate to
use the life fraction rules of damage (linear fractions in Table 1). These are supported by
limited experimental evidence for a low-alloy steel (Shammas 1988; Tolksdorf & Kautz
1994). However, if we know that significant additional loading has occurred, eg because
supports of the pipework have not functioned, then it is likely that the damage process has
accelerated towards the end of the expired service time. In such cases it may be safer to use
the Neubauer/Nordtest type of fixed time rules (Neubauer & Wedel, 1984; Nordtest NT NDT
010,1991), because these are based on results from plant inspections, including a
considerable number of cases with non-functioning pipework supports.
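As a rough illustration of how such timing rules could be encoded in a planning tool, the sketch below selects the recommended maximum service time to the next inspection. The values are transcribed from Table 1, but the function and its interface are hypothetical, not part of any cited standard:

```python
# Illustrative encoding of the Table 1 timing rules (interface invented here).

# Neubauer/Nordtest fixed intervals in hours, per damage class;
# the second value applies after 100 000 h of service.
NORDTEST_LIMITS = {2: (20_000, 40_000), 3: (15_000, 30_000),
                   4: (10_000, 20_000), 5: (0, 0)}

# Linear life-fraction multipliers (evened lower bound) on expired service time t.
LINEAR_EVENED = {1: 4.0, 2: 1.5, 3: 2.0 / 3.0, 4: 0.25}

def next_inspection_interval(damage_class, service_hours, rule="nordtest"):
    """Recommended maximum service time to the next inspection, in hours."""
    if rule == "nordtest":
        if damage_class == 1:
            return None  # no specified limit for class 1
        early, late = NORDTEST_LIMITS[damage_class]
        return late if service_hours > 100_000 else early
    # linear fraction, evened lower bound: interval proportional to service time
    if damage_class == 5:
        return 0.0  # macroscopic cracks: immediate action
    return LINEAR_EVENED[damage_class] * service_hours
```

As discussed above, the fixed Neubauer/Nordtest intervals may be the safer choice when additional loading (eg non-functioning supports) is suspected, while the life-fraction rules suit locations with an uneventful service history.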

In the future, the new 9 to 12 % Cr steels and other newer steels will be used in an increasing
proportion, and for many of these materials the experience-based evaluation rules are yet to be
created. This also means that caution is needed in utilising the present rules for new situations.
It is seen that the number of potentially influential engineering factors can be quite large, and
the available information variable in character. To assess all the necessary engineering
factors separately, one at a time, would impose a serious burden on any person trying to
deduce optimised inspection plans, and hence there is a fairly obvious opportunity for
computerised decision-making tools to help in creating such plans (Jovanovic et al. 1992).
Naturally, no such tool is any better than the rules on which it is working. In addition, it is
necessary to consider non-engineering factors that are essential for proper inspection planning.

The underlying criteria of optimisation of inspections include the overall economy in plant
operations, including the value of availability, economy of inspections, economy of the analysis
itself, as well as safety and environmental requirements. Since not all of these are hard-core
engineering factors, the engineering factors do not determine optimal inspection
plans alone. The non-engineering factors may be required as boundary conditions to the
inspections, or enter directly as optimising variables, of which money consumption must be a
major one.

Consequently, some of the most important non-engineering factors include eg the price of
replacement power, availability requirements and the local price of any action needed, such
as inspections, repairs or replacements. The background economic factors, such as the cost
of the potential consequences to be avoided, are at least partly measurable as insurance
premiums, but there are local variations. Many of the variations can be seen in local, national or regional
mandatory rules and traditions, at least in their extreme forms. Furthermore, in spite of their
engineering background, there are borderline and non-engineering features in the differences
between the local, company-related, national or regional traditions in design, inspections and
life management. For example, design rules based on ASME / BS or similar codes, and TRD
and equivalent codes produce somewhat different results because of some tradition-based
compromises that attempt to balance between engineering simplicity and rigorous analysis.
Some of the differences in tradition can be seen from Table 2.


Some additional inherent differences are revealed by looking at the failure statistics of these
regions. In case of the ASME/BS tradition, the literature citing failure cases refers fairly
often to the problem of ligament cracking of headers. Such cracking which is rare in the TRD
regions appears to be caused by the thermomechanical cycles of plant operation, combined
with the relatively thick (compared also to the ligament width) header material in the
anglosaxon design. This is exacerbated by using low alloy materials such as 2.25Cr1Mo
steel, which requires much thicker walls than higher alloyed steels like X20CrMoV12-1
at the same steam values.
The TRD type of tradition appears to have its specific Achilles' heel also. In the literature of
the past 20 years or so it is nearly exclusively from the Germanic design origin that problems with
steam line bends are reported. This frequency is by no means high: it is at least two orders of
magnitude less than for creep damage observed from major circumferential welds. However,
since the creep damage in bends is more severe from the safety point of view, i.e. unlike
damage in welds it can lead to catastrophic failures in the base metal, it has occasionally
received much attention in inspection programmes.

Comparable traditions can be seen in local, national or regional mandatory, eg safety-related
rules.

Table 2. Typical regional features (until about 1990) in large coal-fired power plants.

Region          Max. superheat   Top ferritic     Max unit    LP rotor          Specific pressure
                deg C            material         size MWe    type              vessel authority
ASME/BS etc.    565              2.25Cr1Mo or     1000        discs on shaft    none
                                 1/2CrMoV
TRD & equival.  545              12 Cr            600         monobloc/welded   exists

Also, the non-engineering factors include the local experience and possible training needs of
the employees or inspectors involved. For example, when very experienced operations and
maintenance personnel retire or move to another company, it may even become optimal to
extend some inspections or other measurements a little, to give the new personnel the
additional feel of the plant condition that was perhaps lost with the experience.

Of the regimes where some mandatory boundary conditions will remain in the future, safety
and environmental issues are probably the most important. In the recent past much of the safety
issue has been tackled so that more or less standardised engineering, organisational and
regulatory solutions exist everywhere. Meanwhile the environmental issue has gained more
and more weight, becoming a very significant cost issue. Therefore, any component


dysfunction that deteriorates the plant performance in these terms also becomes a cost item
and must be included in inspection planning somehow. However, for our example case of hot
pipework this is hardly an issue, whereas the safety aspects are.


The optimisation process for inspection planning in practice translates into balancing the
necessary information for such planning with the economy of obtaining it. The economical
aspects include the economy of inspections, analysis, and consequences of not meeting the
desired condition for the specified time. Such consequences are measured in cost of
replacement power, required repairs, insurance etc, but also in more fuzzy terms such as
possible impact to the environment, or company image in public relations or towards
regulatory bodies.

A significant though apparently hidden cost factor within inspection planning analysis can be
the inconvenience of obtaining the required information. If the system that is used for such
analysis is internally very "stiff", ie accepts only complete sets of extensive data on each
location of interest, it is eventually likely to fall into disuse because at least initially any data
on the components are necessarily scarce. To minimise such problems, a good inspection
planning optimiser would accept patchy initial data and cover the missing pieces with default
values from nominal data or parallel experience. This also makes the process much faster for
the user, and provides easy paths to "what if" analyses. These in turn can pinpoint the most
valuable additional data that could be obtained in the next inspections.
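The "accept patchy data, fall back to defaults" behaviour described above can be sketched as follows. The field names and nominal values are invented for illustration; the useful by-product is the list of defaulted fields, which points at the most valuable data to collect in the next inspection:

```python
# Illustrative sketch (not from the paper): merging patchy measured data
# over nominal (design) defaults, recording which fields were defaulted.
NOMINAL_DEFAULTS = {          # hypothetical nominal (design) data
    "steam_temp_C": 535.0,
    "pressure_MPa": 17.0,
    "weld_damage_class": 1,
}

def complete_record(measured):
    """Merge patchy measured data over nominal defaults; report the gaps."""
    record = dict(NOMINAL_DEFAULTS)
    record.update({k: v for k, v in measured.items() if v is not None})
    defaulted = [k for k in NOMINAL_DEFAULTS
                 if k not in measured or measured[k] is None]
    return record, defaulted

rec, missing = complete_record({"steam_temp_C": 541.0, "pressure_MPa": None})
# 'missing' lists the fields worth measuring during the next inspection
```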

One challenge in the optimisation process involves comparisons and weighing of data of
incompatible types. The quantities to be measured and compared may not be easily
measurable in the same terms or units. For example, the user may have obtained a service time of
187 300 hours and a repair cost of 12 400 USD for a specific component, for which the
service temperatures, pressures and material strength can be given as probability
distributions, but the secondary axial stresses can only be classified as "higher than in other
comparable components". This requires combining different types of information,
whether in the form of crisp numeric results from measurements, probability distributions, or
more fuzzy expert opinion.
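One simple way to make such mixed inputs comparable is to map each of them onto a common [0, 1] scale. The sketch below does this with invented ranges, labels and mappings; real MCDA tools, such as those cited later, use more principled fuzzy and probabilistic models:

```python
# Minimal sketch: mapping crisp, probabilistic and linguistic inputs onto
# a common [0, 1] "concern" score. All scales here are assumptions.
def crisp_score(value, lo, hi):
    """Normalise a crisp number onto [0, 1] within an assumed range."""
    return min(1.0, max(0.0, (value - lo) / (hi - lo)))

def distribution_score(mean, std, limit):
    """Crude score for a normally-distributed quantity vs a design limit:
    0 when the limit is >= 3 std above the mean, 1 when the mean reaches it."""
    margin = (limit - mean) / (3.0 * std)
    return min(1.0, max(0.0, 1.0 - margin))

# Hypothetical defuzzification of linguistic comparisons like
# "higher than in other comparable components".
FUZZY_LABELS = {"lower": 0.25, "comparable": 0.5, "higher": 0.75, "much higher": 1.0}

def combined_score(scores):
    """Plain mean; real MCDA tools would use pairwise-derived weights."""
    return sum(scores) / len(scores)

s = combined_score([
    crisp_score(187_300, 0, 250_000),        # service hours (crisp)
    distribution_score(540.0, 5.0, 545.0),   # temperature vs limit (distribution)
    FUZZY_LABELS["higher"],                  # secondary axial stresses (fuzzy)
])
```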

As stated above, optimisation for inspection planning means balancing the cost of obtaining
the required information with the cost of missing it. Because of this balance, it is generally
not optimal to extend the quest for information and analysis beyond a case-by-case dependent
point from where onwards the value of additional information no longer pays off. For the very
same reason, this point is generally not well known, and it does not pay off to find out its
exact position. Consequently, much of the optimisation deals with a balancing process using
information and rules that are "good enough" rather than "optimal" themselves, and the
optimisation process is not like classical min-max mathematics. However, by using a
number of different quantities for indicating the condition and cost items of the components
of interest, much of the uncertainty is reduced to an acceptable level in spite of possibly
considerable uncertainties in single quantities.


3.2 LOCAL AND REGIONAL FEATURES: CUSTOMISATION

It is seen that there can be plenty of local aspects in the task of optimisation for inspection
planning. In some of these factors there are national and other local views of acceptability.
Such acceptability does not need to refer to any engineering absolutes to be significant even
in an engineering sense. These local factors typically require that any automatic tool for
creating inspection plans must be locally customised, as for any comparable system. The
non-technical parts would include the obvious aspects of language and training, but it is
clear from the above that this is not a simple task of translation to the local language.
Fortunately, much of these factors remain fairly constant within one region, and hence
mostly within a single company or plant, and adjustment is necessary only when systems are
transferred. In technical customisation, generally only parametric changes take place, and
the heart of the optimisation process will remain basically untouched by such modifications
for local use. However, no customisation removes the ultimate need for evaluating the
observed damage on technical terms. This evaluation is the most controversial part of the
process, which provides an essence to the important decision on the recommended time to
reinspection or repairs, and it is often necessary to give some reasonable weighting to the
important factors affecting the inspection plans.

For example, there are two main regional traditions in Europe and elsewhere to make plastic
replicas from hot pipework during off-line inspections. The "anglosaxon" preference uses
typically mechanical polishing in all cases and relatively large (eg 20 x 50 mm) sheets for
covering a weld. The "continental" tradition prefers smaller 10 mm dia spots, electrolytical
polishing (except for cracks), and multiple spots for covering a weld. Whatever advantages
or disadvantages each technique may have, there is an important issue of repeatability and
comparability over long periods of time, ie component life. The ability to compare
successive inspection results may be regarded as more important than clear differences in
resolution or inspection cost. Clearly, such views will delay very much any unification in the
procedures of replication, even in the cases when the local mandatory (regulatory) issues do
not prevent using generalised procedures. Meanwhile, regional or even wider global
unification takes place through international standardisation. For most countries, this is
likely to apply to at least some traditions and mandatory rules, though very gradually.

In the future the boundary conditions are likely to change somewhat. A new share between
materials and processes for power production will slowly emerge, and this will mean a wider
spread of high temperature materials than at present. As in the case of boundary conditions
from mandatory rules, the future development of rules that would be acceptable for new
materials will take time: for many of the materials the experience-based and even partially
accepted creep damage classification and evaluation rules are yet to be created.

The maintenance costs amount to a relatively high proportion, up to about 10 % of the total
operational cost of a power plant (EBSOM 1993), and the maintenance cost can be easier to
adjust than other major cost items such as cost of fuel or capital, because skills in optimal
maintenance strongly affect the competitive edge of the plant.

5. SUMMARY

Inspection planning for pressurised power plant components is traditionally directly or
indirectly subject to mandatory and non-mandatory rules or guidelines in Europe. Until
recently, much of the decisions related to actual content and timing of non-mandatory
inspections were not subject to systematic tools or methodologies. This is about to change
with the increasing integration of inspection data management, inspection planning tools,
and decision making.

The optimisation process for inspection planning in practice translates into balancing the
necessary information for such planning with the economy of obtaining it. The economical
aspects include the economy of inspections, analysis, and consequences of not meeting the
desired condition for the specified time. Such consequences are measured in cost of
replacement power, required repairs, insurance etc, but also in more fuzzy terms such as
possible impact to the environment, or company image in public relations or towards
regulatory bodies.

A significant though apparently hidden cost factor within the inspection planning analysis
can be the inconvenience of obtaining the required information. If the system that is used for
such analysis is internally very "stiff", ie accepts only complete sets of extensive data on
each location of interest, it is eventually likely to fall into disuse because at least initially any
data on the components are necessarily scarce. To minimise such problems, a good
inspection planning optimiser would accept patchy initial data and cover the missing pieces
with default values from nominal data or parallel experience. This leaves a factor of
uncertainty, which however is reduced by using a number of different quantities for
indicating the condition and cost items. Such quantities can be made comparable even when
they are initially of totally different type, such as crisp numbers, probability distributions,
and fuzzy expert opinions.

It is generally not optimal to extend the quest for information and analysis beyond a point
where the additional information no longer pays off. Because of the time-dependent changes
in plant, a new share in terms of materials and processes for power production will slowly
emerge, and for many of the newer materials the experience-based evaluation rules are
vague or non-existent. The future boundary conditions for inspection planning are also likely
to change. Therefore, optimisation for inspection planning is necessarily a dynamic process,
and in this sense the optimisation is also internally a moving target.

REFERENCES

Auerkari, P. & Salonen, J. 1995. NDT for high temperature installations - a review. VTT
Report VALB96, Espoo. 41 p.

Auerkari, P., Borggreen, K. & Hald, J. 1992. Reference micrographs for evaluation of creep
damage in replica inspections. IIW Commission IX WG Creep / NORDTEST NT Technical
Report 170. 28 p. + app.

EBSOM 1993. European Benchmark Study on Maintenance (MAINE / EBSOM).
Kunnossapitoyhdistys (FI), Föreningen Underhållsteknik (S), Norsk Forening for
Vedlikehold & Den Danske Vedligeholdsforening (DK). EUREKA Project EU 724,
December 1993. 22 p.

Gehl, S. & Viswanathan, R. Assessment of theoretical models for determination of
remaining life. p. 238-244.

Jovanovic, A., Friemann, M. & Vrhovac, M. 1992. Knowledge-based system aided
evaluation of replica results in terms of remaining life assessment of power plant
components. 18th MPA Seminar, Stuttgart, 1992. 17 p.

Maile, K. & Rantala, J. 1992. Experimental methods for determination of the creep and
fatigue damage conditions of power plant components. Int. VGB Conf. on Measures for
Assessment and Extension of the Residual Lifetime of Fossil Fired Power Plants, Moscow,
May 16-21, 1992. 11 p.

Neubauer, B. & Wedel, U. 1984. NDT: Replication avoids unnecessary replacement of
power plant components. Power Engineering, May 1984, p. 44.

NORDTEST NT NDT 010, 1991. Remanent lifetime assessment of high temperature
components in power plants by means of replica inspection. 10 p.

Shammas, M.S. 1988. Metallographic methods for predicting the remanent life of ferritic
coarse-grained weld heat affected zones subjected to creep cavitation. Int. Conf. on Life
Assessment and Extension, Den Haag, 1988. Vol. III.

Tolksdorf, E. & Kautz, H.R. 1994. Int. VGB Conf. on Measures for Assessment and
Extension of the Residual Lifetime of Fossil Fired Power Plants, Essen, 1994. 6 p.

VGB-Richtlinie R509L. Wiederkehrende Prüfung an Rohrleitungsanlagen in
fossilbefeuerten Wärmekraftwerken. VGB, Essen.

VGB-TW 507, 1992. Guideline for the Assessment of Microstructure and Damage
Development of Creep Exposed Materials for Pipes and Boiler Components. VGB, Essen.
83 p.

INTELLIGENT SOFTWARE SYSTEMS FOR INSPECTION PLANNING - The BE5935 project

A. Jovanovic1, P. Auerkari2, H. R. Kautz3, S. Ellingsen1, A. Psomas1
1 - MPA Stuttgart, FR Germany
2 - VTT Espoo, Finland
3 - GKM Mannheim, FR Germany

1. Introduction

The power plant components operating at high temperatures are important targets in the in-
service inspections and measurements. Apart from being large and expensive and subjected
to complex mechanical and thermal (creep-fatigue) loading in service, these components can
limit the availability of the whole plant. Such components include typically
• boiler tubing, superheaters and reheaters,
• headers, valves, T- and Y-pieces and the rest of the hot pipelines, and
• hot parts of the steam and gas turbines.

The safety aspects of design impose that the nominal (design) life, e.g. 200,000 service
hours and 1000 cold starts, is considerably shorter than the true average life for these
components at nominal (design) service loading level. Reasons for this include e.g. using
lower bound values for material strength in design and upper bound dimensions in
manufacturing. The extent (or occurrence) of this excess life potential is not certain,
however: overloading, overheating or other disturbance not accounted for in design, as well
as excessive residual stresses, can on the other hand considerably shorten component life.
Due to ageing, these components need additional monitoring, repairs and replacements.
Whenever feasible, extension of life or inspection periods is to be recommended.
Nevertheless, timing of maintenance is always an optimisation problem: too lax maintenance
or too long maintenance periods will lead to costly unexpected shutdowns, while
unnecessary maintenance not only has a direct cost impact but also compounds to a
significant additional risk for damage and failures. For example, embrittlement or cracking
after unnecessary repair welding and local heat treatments will occur at a non-zero (and, in
case of susceptible materials in stiff structures, high) probability.

The amount of relevant background information, the extent of data of the service, inspection
and maintenance history, the number of locations of potential interest in a large system, as
well as the needs for relatively long term systematics and expert experience, all support the
view that much of the work would be ideally handled by an application-oriented decision
support system. Following the initial concept [7], such a system for computer-aided
planning of forthcoming inspections of high temperature piping in fossil-fired power plants
has been developed in the European Union research project BE5935 [6]. The initial concept
for the part regarding the interpretation of inspection results has been given by Auerkari [1],
in connection to the recent guidelines of Nordtest [2] and VGB [11].

2. Basic inspection principles

The most important factors affecting the conclusions made from the inspection results of the
hot pipework are
• service history and its deviations from the expected (in design) range,
• materials and manufacturing / repairs / modifications,
• inspection history,
• inherent inaccuracies in the evaluation methods for limiting failure,
• optimum failure risk level for the plant and the component, and
• expected consequences of failure (cost and safety aspects).

Very often the conclusion resulting from the inspections or other measurements is a
recommended time period to the next inspection. In principle, the length of such a period is
limited by
• the extent and quality of the available information of details, e.g. in the service history
and future service, as well as in the maintenance history,
• the desired level of confidence, and
• limited systematics (holistics) in producing the final conclusions.
Many of these inadequacies are partly addressed by using an appropriate decision support
system.

Experience suggests that in the straight pipes and most areas (possibly excluding some
bends and T-piece bodies) of hot steam pipings the nominal design life is relatively easily
exceeded, on average perhaps by a factor of 3 to 10. In the welded joints the variation of life
is also large, but when the steam temperature exceeds the value of about 500°C, this safety
factor on life is probably of the order 1.5 to 3 on average in the welds that are likely to fail
first. Failures of bends can be sudden and catastrophic, but are in general rare, and mainly
limited to susceptible materials such as 14 MoV 6 3 (0.5Cr-0.5Mo-0.25V) after less than
successful manufacturing. Normal consequences of (circumferential) weld failures are not
catastrophic and do not require consideration of personnel safety. However, in an ageing
plant some weld damage and failures are very likely and can be economically important
events. As a consequence, the inspection programs tend to concentrate on welds and treat
bends (and some T-piece bodies) case by case. Straight pipes are usually not included in the
programs, and are of little interest before attaining a very long service life (>> 250,000 h).
Hence, while it is important to include the critical welds in the inspection programs, it is
equally relevant to use these programs for finding those welds that determine overall life
and possible corrective actions.

In the beginning it is not possible in general to optimise exactly in this sense, because only
the accumulating inspection results provide means for improving the accuracy of
optimisation. Therefore, sets of experience-based recommendations (e.g. [10]) have been
devised for initial selection of the methods and location of first inspections.

Input to the system are the results of previous inspections (if available), the results from
earlier measurements, data about the piping component, the desired level of confidence, and
strategic constraints resulting from the importance of the component. Based on the
integration of several elements the system produces a final output in the form of a
"component vs. year" matrix showing
• what inspection technique (replica, ultrasonic, etc.) and
• to what extent (e.g. what percent of welded joint examined)
should be applied at a given location/component during the next inspection (overhaul).
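The "component vs. year" matrix output might be represented, for illustration only (this is not the actual BE5935 data format, and the location names are invented), as a nested mapping:

```python
# Hypothetical sketch of the "component vs. year" inspection plan matrix:
# rows = locations, columns = years, cells = (techniques, extent).
from collections import defaultdict

plan = defaultdict(dict)

def schedule(plan, location, year, techniques, extent):
    """Record the technique(s) and extent for a location in a given overhaul year."""
    plan[location][year] = (techniques, extent)

schedule(plan, "header weld #2873", 1996, ("MT/PT", "RT"), "welds 100 mm wide")
schedule(plan, "bend #0142", 1998, ("MT/PT", "UT"), "20 % of extrados")
```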

3. Locations and inspection criteria

The locations of interest are usually selected from components and areas with
• earlier indications of defects or other deviations,
• suspected material/manufacturing defects,
• substandard or clearly less than ideal design (e.g. seam welded pipes),
• significant overloading (e.g. due to improper hanger supports) or overheating, and
• higher than average damage rates from general experience.

Local decrease and variation in life time is most common in welds, nozzle joints and
perhaps some bends, due to improper hanger supports or other inadequacies related to
design, operation or maintenance, as well as manufacturing or material defects. Particularly
loading and thermal transients tend to concentrate relative lifetime accumulation in
thick-wall components such as main valves of the boiler and turbine, critical headers and
turbine rotors. These components typically also determine the manufacturers' recommended
maximum rates in changing temperature and pressure during startups and shutdowns.

In addition to experience-based general rules or earlier inspections, naturally any indication
of overloading or overheating is useful in determining the locations of interest. Indications
of possible early damage or service incidents of significance can be checked by regular hot
and cold walk downs, including noting of the general condition of hangers and supports of
the pipework.

First inspections are recommended for quality purposes to be taken and documented at the
time of taking the plant / components into service. Apart from the more frequent (max.
period about 4 years) inspections for certain components such as boiler drums, safety valves
and other components included in the usual periodical pressure vessel inspections, it is
recommended that the extended inspections targeting for life assessment of the hot
pipework are started by the time when 80 % of the nominal design life has been consumed
(max. 100,000 h). Normally the extent of such a first inspection can be as shown in Table 1;
as may be seen, this applies only to the obvious locations of interest. The extent of the first
inspection of this kind can often be reduced when the steam temperature does not exceed
480°C (e.g. recovery boilers), and replaced completely by ordinary periodical inspections
when the steam temperature does not exceed 400°C. If then no deviations or defects are
found, and if there are no specific reasons to deviate from this rule (such as earlier
inspection results or damage / failures), a new evaluation is recommended not later than
after an additional 80 % of the nominal life (max. 100,000 h). Damage could also be
induced before attaining 80 % of the nominal life; the timing and selection of locations after
a given inspection are therefore defined according to the results.

4. Decision problem formulation and modelling

4.1 Generic

Fig. 1 gives an understanding to the overall decision problem formulation and modelling of
the inspection planning of power plant components. As may be seen, the process of
inspection planning is divided into two levels:
1. "All component level": Selection and prioritisation of the components/locations for the
actual inspection purposes.
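The 80 % / 100,000 h timing rule and its temperature-based relaxations can be sketched as a small helper; the function and its interface are illustrative only, not taken from any cited guideline:

```python
# Illustrative encoding of the first-inspection timing rule described above.
def first_extended_inspection_h(nominal_life_h, steam_temp_C):
    """Service time (hours) by which extended life-assessment inspections
    should start, or None when ordinary periodical inspections suffice
    (steam temperature not exceeding 400 degC)."""
    if steam_temp_C <= 400:
        return None
    # 80 % of nominal design life, but not later than 100 000 h
    return min(0.8 * nominal_life_h, 100_000)
```

Between 400°C and 480°C the timing is unchanged but, as noted above, the extent of the first inspection can often be reduced.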

2. "One component level": Inspection and determination of the next inspection time for
each selected component/location.

Table 1: Recommended locations, extent and methods of first inspections for pipework
operated for 80 % of the nominal design life (max. 100,000 h). UT = ultrasonic testing,
MT/PT = surface inspection, RT = replica testing. Visual inspection in all cases. Surface
quality requirements as in VGB R509L, except for RT as in SFS 3280.

Locations / Methods / Extent:

- Header nozzles and T- and Y-pieces near main branches / endoscopy, MT/PT+RT
(welds), UT acc. to MT/PT results / nozzles (20 %), welds 100 mm wide

- Bends / MT/PT+RT, minimum wall (UT) + ovality1) / MT/PT from bend extrados +
acc. to indications2), RT at least from curving up/down, additional UT according to
crack indications

- Bends near fixed points, bends of lengthy pipes / UT at fixed points, MT/PT of welds +
RT (at least one per line) / welds 100 mm wide

- End caps / MT/PT+RT / welds

- Deaeration/dewatering nozzles / MT/PT+UT where water may be trapped3) / inside of
the joint (spot test or by experience)

- Flange joints near fixed points / MT/PT+RT, UT of welds / welds, MT/PT externally

- Safety valves, main steam valves and other heavy valves / check of operation,
MT/PT+RT (welds), wall thickness (UT) and hoop strain4) of the body / check of inside
wear/cracks according to findings

- Steam coolers, component internals / check of operation, UT of internal surfaces or
endoscopy / consider internals where high fatigue life consumption is expected

1) + recalculation of stresses
2) d < 300: MT/PT whole bend; d > 300: MT/PT of four zones ~ 200 mm wide
3) especially near the boiler and when the nozzle dia ratio > 0.7
4) hoop measurement of the body and the nozzles; mark the measurement points

Fuzzy inputs are e.:^·-:· One Component Level Selection of inspec­ tion item ace. Safety. I Performing of inspec­ tions. fuzzy and/or random inputs has to be solved. Priority I from COLOR Decision node: INSPECTION STRATEGY ADVISOR OSTRA) Determination of Inspection Extent for Rank inspection strategy Stråteer Y-PIece for inspection item zero attention VI 3 low profile + TH 2 WSMífâiWiw -t-MT.. Goal of the complete system represented in Figure lis to provide a new inspection plan for all selected components/locations. the stochastic input variables (e.g. Diffi­ culties. Random inputs are e. Risk. compon.g. determination of next inspection time and scope RESULT: INSP ECTIONI 'LAN Component/ 1994 1995 1996 - New inspection plan Location #2873 MT. Inspection Results I 5S'·.ΡΤ inspection items mmmm W&M mm Figure 1 : Generic flowchart of decision problem formulation There are different problems that have to be handled when co­ordinating complex actions like those in the decision making process for inspection planning of power plant components.ET fu. Environment Past history. locations) Criteria: Importance. Decision node: COMPONENT LOCATION Ranking of inspection RANKING (COLOR) items ace.g.g.g.. In . "high risk") represented in terms of membership functions. certain crisp numbers (e. results I Cost. Prev. All Component Level User's preliminary selection of inspec­ tion items (systems. number of operating hours). those involving linguistic variables (e. to ranking Component/ Type Rank criteria Location #2873 Header 2 #3987 T-piccc 3 I WmsSm Y-Ktci ίΐϊίί'·.g. Crisp inputs are e. to position in ranking list Criteria: Inspection cost. 187 On each of these two levels the critical decision node is placed where a multi criteria decision­ making (ranking) problem with crisp.PT VI UT for selected #3987 ΜΤ. temperature) represented in terms of probability distributions.

In order to cope with all of them, the developed decision support system consists of the following elements:
1. a flowcharting part enabling the inspection/evaluation procedure to be modelled graphically,
2. a knowledge-based system part controlling the user's movement through the procedure,
3. multi criteria decision analysis modules (COLOR, ISTRA) optimising the selection of possible alternatives in each decision node,
4. a hypermedia part providing the explanation facility, and
5. a numerical calculations part providing additional input (e.g. calculation of consumed life according to standards, etc.).

4.2 Intelligent flowcharting module
The modelling of the problem domain is done with an "intelligent" flowcharting program. The "intelligence" of the program refers to its interaction with a knowledge-based system controlling all movements in the flowchart. In that way, the resulting integrated module acts as a user-advisor, assisting the user in facing the problem in a recommended way and allowing him not only to obtain information and recommended actions from the other modules (MCDA, Hypermedia) but also to input his personal thinking and/or experience. In this way, with the use of the system the user avoids possibly overlooking significant aspects of the procedure.

4.3 Multi criteria decision analysis (MCDA) modules
The two modules, namely COLOR (COmponent/LOcation Ranking) and ISTRA (Inspection STRategy Advisor), developed for the analysis of the two decision nodes shown in Fig. 1, are both application-oriented and will be described later on in detail. However, the underlying methodology is very general and applicable also in other fields. The applied methodology [8] is an extension of Saaty's AHP [9], as amended by Buckley [3,4] in order to incorporate fuzzy comparison ratios, as well as ranking of alternatives with respect to each criterion. In such a way it is much easier to model uncertainties regarding comparisons of criteria, where modelling of uncertainties is mainly based on experience. Both modules can also handle crisp and stochastic inputs; they also model situations where no uncertainty, or stochastic uncertainty, exists.

4.4 Hypermedia and numerical calculations modules
Both modules are integrated in the decision support system in order to provide related background information in each step of the overall process. The related information, consisting mainly of experience-based recommendations and/or guidelines on which the decision has to be based, may be retrieved automatically. Furthermore, numerical calculations based on them can provide additional input.
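The fuzzy pairwise-comparison weighting described above can be sketched in a few lines. The following is a minimal illustration of Buckley's geometric-mean method with triangular fuzzy numbers, not the actual COLOR/ISTRA implementation; the three-criteria comparison matrix and the final centroid defuzzification are assumptions made for the example.

```python
# Minimal sketch of Buckley's fuzzy AHP: triangular fuzzy pairwise
# comparisons -> fuzzy weights (geometric mean) -> crisp weights (centroid).
# The comparison matrix and defuzzification step are illustrative only.

def tfn_mul(a, b):
    # product of two triangular fuzzy numbers (l, m, u)
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def tfn_pow(a, p):
    return (a[0] ** p, a[1] ** p, a[2] ** p)

def fuzzy_ahp_weights(matrix):
    """matrix[i][j] is a TFN expressing how much criterion i dominates j."""
    n = len(matrix)
    # geometric mean of each row (Buckley)
    gm = []
    for row in matrix:
        prod = (1.0, 1.0, 1.0)
        for x in row:
            prod = tfn_mul(prod, x)
        gm.append(tfn_pow(prod, 1.0 / n))
    # normalise: divide by the sum of the geometric means (l/sum_u, m/sum_m, u/sum_l)
    total = tuple(sum(g[i] for g in gm) for i in range(3))
    weights = [(g[0] / total[2], g[1] / total[1], g[2] / total[0]) for g in gm]
    # defuzzify each triangular weight by its centroid, then renormalise
    crisp = [(l + m + u) / 3.0 for (l, m, u) in weights]
    s = sum(crisp)
    return [c / s for c in crisp]

# Example: 3 criteria, "about equal" and "slightly more important" judgements
one = (1, 1, 1)
two = (1, 2, 3)
half = (1 / 3, 1 / 2, 1)
M = [[one, two, two],
     [half, one, two],
     [half, half, one]]
print(fuzzy_ahp_weights(M))  # first criterion receives the largest weight
```

The centroid step is one of several possible defuzzifications; any ranking method for fuzzy numbers [3] could be substituted.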

5. COLOR MODULE

5.1 Alternatives
The alternatives for the ranking procedure are the different components of a power plant. For each type of component there might be different locations. In general the list of the alternatives may be like the following one:
• boiler tubing, location 1
• boiler tubing, location 2
• superheater, location 3
• economiser, location 4
• etc.

5.2 Criteria
To model the multi criteria decision of component/location ranking the following criteria were defined:
1. Fundamental importance of the component for the present plant (seriousness of failure/downtime)
2. Results of previous inspections
3. Cost of replacement of component
4. Safety aspects (including regulatory safety aspects)
5. Environmental aspects (including regulatory environmental aspects)
6. Alternative supply patterns (i.e. relative importance of the component in comparison with existing alternatives)
7. Qualitative past service history
8. Quantified past service history
9. Expected change in the operating conditions used for the analysis so far

Table 2 gives the types of input values and an example of the relative weights of the different criteria calculated by a pairwise comparison.

Table 2: Types of input and weights for criteria of component/location ranking

Name of criterion                     | Input type    | Relative weight
Fundamental imp. of component         | Fuzzy         | 0.122 (70)
Results of previous inspections       | Crisp         | 0.175 (100)
Cost of replacement                   | Crisp         | 0.122 (70)
Safety aspects                        | Fuzzy         | 0.070 (40)
Environmental aspects                 | Fuzzy         | 0.070 (40)
Qualitative past service history      | Fuzzy         | 0.105 (60)
Quantified past service history       | Crisp, Stoch. | 0.053 (30)
Expected change in the op. conditions | Fuzzy         | 0.140 (80)
Alternative supply patterns           | Fuzzy         | 0.140 (80)

6. ISTRA MODULE

6.1 Alternatives
The selection of the inspection strategy is done after the selection of the inspection locations. For the selected locations on the different components there are five types of inspection strategies or patterns possible:
• "zero-attention" program
• "low-profile" program
• "standard" program
• "extended" program
• "extensive" program
In addition, the detailed description of methods and extents of inspection is given in Table 3.

Table 3: Overview of the extent, costs and reliability of the alternatives

Name of inspection strategy | Inspection time, max. | Relative inspection cost | Reliability of inspection
"zero-attention" program    | 0 days  | 0 | 0
"low-profile" program       | 3 days  | 1 | 0.15
"standard" program          | 5 days  | 2 | 0.30
"extended" program          | 7 days  | 3 | 0.45
"extensive" program         | 2 weeks | 5 | 0.8

6.2 Criteria
To model the multi criteria decision of inspection strategies the following criteria were defined:
1. Inspection and other directly related maintenance cost (e.g. preparation cost)
2. Additional difficulties due to access
3. Implicit risk due to safety aspects
4. Component priority (result from COLOR)

Table 4 gives the types of input values. The relative weights of the different criteria should be calculated by an expert.

Table 4: Types of input for criteria of inspection strategy selection

Name of criterion                             | Type of input | Optimisation goal
Inspection and other related maintenance cost | Fuzzy | Minimise (for higher level inspection patterns, costs increase)
Additional difficulties due to access         | Fuzzy | Minimise (more difficult access to the inspection region forces a lower level inspection pattern)
Implicit risk due to safety aspects           | Fuzzy | Minimise (higher safety needs force higher level inspection patterns)
Component priority (result from COLOR)        | Crisp | Maximise (higher component priority forces higher level inspection patterns)
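The ranking step performed by ISTRA over these criteria can be illustrated with a simplified crisp sketch. The real module works with fuzzy inputs and pairwise comparisons; here the linguistic levels of Table 4 are mapped to illustrative numbers on a 0-1 scale, "minimise" criteria reward low values, and a weighted sum selects the best compromise strategy. All numeric values and weights below are assumptions invented for the example.

```python
# Simplified crisp sketch of the ISTRA ranking step: score each inspection
# strategy against the four criteria of Table 4 and pick the best compromise.
# Linguistic levels are mapped to illustrative numbers (0 = none/easy,
# 1 = very high/difficult); the weights are assumptions, not from the paper.

STRATEGIES = ["zero-attention", "low-profile", "standard", "extended", "extensive"]

def normalise(values, goal):
    """Map criterion values onto [0, 1]; 'min' criteria reward low values."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    if goal == "min":
        return [(hi - v) / (hi - lo) for v in values]
    return [(v - lo) / (hi - lo) for v in values]

def rank_strategies(table, weights):
    """table: {criterion: (goal, [value per strategy])}; returns the scores."""
    n = len(next(iter(table.values()))[1])
    scores = [0.0] * n
    for name, (goal, values) in table.items():
        for i, s in enumerate(normalise(values, goal)):
            scores[i] += weights[name] * s
    return scores

# Illustrative data for one component (columns follow the STRATEGIES order):
table = {
    "cost":     ("min", [0.0, 0.2, 0.5, 0.8, 1.0]),  # none .. very high
    "access":   ("min", [0.0, 0.5, 0.7, 0.7, 0.7]),  # easy .. difficult
    "risk":     ("min", [0.9, 0.5, 0.5, 0.2, 0.2]),  # residual safety risk
    "priority": ("max", [0.1, 0.2, 0.3, 0.4, 0.5]),
}
weights = {"cost": 0.25, "access": 0.15, "risk": 0.35, "priority": 0.25}

scores = rank_strategies(table, weights)
best = STRATEGIES[scores.index(max(scores))]
print(best)
```

With the assumed weights the high residual-risk penalty dominates, so a thorough program wins; raising the cost weight pushes the answer back towards the lighter patterns, which is exactly the trade-off the expert weighting controls.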

7. PRACTICAL APPLICATIONS

7.1 General
So far, within the BE5935 project, the methodology for inspection planning has been deployed in two power plants, namely in GKM, Germany and in IVO, Finland (Table 1). In both applications it was necessary to provide a practical and usable engineering answer to the following main questions:
a) WHAT (i.e. which components/locations and with which priorities) to inspect,
b) HOW (i.e. using which inspection methods and in which scope) to inspect, and
c) WHEN (i.e. after how many operating hours) to inspect.
The methodology developed in BE 5935 enables current engineering practice for answering each of these three questions to be substantially improved.

Table 5: Overview of industrial problems considered for preliminary and detailed analysis in Task 5-1-3 (Decision Making for Inspection Planning)

Partner     | Type of component         | Preliminary analysis | Detailed analysis
MPA/IVO/VTT | sample steam line         | Yes                  | partly
MPA/GKM     | full piping ("Kessel 14") | Yes                  | No
MPA/GKM     | piping ("Kessel 15")      | Yes                  | Yes

These applications illustrate how the methodology developed in the previous tasks of BE 5935 ("The BE 5935 FB-MCDM Methodology") can be practically applied on an industrial level.

7.2 IVO Example
The following (slightly modified) example works through a steam line. A sketch of this piping, including the component IDs mentioned in the input and output tables, is given in Figure 2.

[Figure 2: Sketch of the IVO sample steam line (including component IDs)]

The material is 13CrMo44 with a nominal temperature of 545°C, and the situation is given in 1993, after 110,000 service hours.

To support the complex decision making process, a modelling tool, namely ExpertChart, was developed and integrated into the decision support system. ExpertChart is used to:
1. Model the problem domain
2. Lead the user through the problem
3. Provide background information
4. Perform analysis and calculations

The problem domain is modelled through flowcharts. Activities are represented by boxes and their interconnections by lines and arrows. Each activity may be detailed on a sublevel, which can be a complete chart of its own (activated by the small checked rectangle in the upper left corner of the box). Traditional if-boxes are translated into pre- and post-conditions. These conditions are also needed for leading the user through the flowchart. The flowchart modelling the inspection planning for the IVO steam line is shown in the following figures:

[Figure 3: Inspection scheduling, "All component level": (1) First inspection? If no (inspection results exist): (2) review results from previous inspection; (3) selection and prioritisation of components and locations; (4) user's decision to take one component for determination of the next inspection time; (5) "one component level": perform detailed analysis for the selected component; (6) other components in the COLOR ranking list available?; (7) end analysis.]

[Figure 4: Selection and prioritisation of components and locations: (3.1) preliminary user's selection of components to be considered; (3.2) run COLOR (decision node "A") in order to obtain a ranked list of components.]
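The pre-/post-condition mechanism with which ExpertChart leads the user through such a flowchart can be approximated by a small guard-driven traversal. This is only a toy sketch of the idea: the step names loosely follow Figure 3, while the predicates and the stand-in for COLOR are invented for the illustration.

```python
# Toy sketch of a guarded flowchart traversal: each activity box carries a
# pre-condition over the shared context; boxes whose guard fails are skipped.
# Step names loosely follow Figure 3; all predicates are illustrative.

steps = [
    # (activity name, pre-condition on the context)
    ("review_previous_results", lambda ctx: bool(ctx["previous_results"])),
    ("select_and_prioritise",   lambda ctx: True),
    ("run_color",               lambda ctx: bool(ctx["components"])),
    ("one_component_level",     lambda ctx: bool(ctx["ranking"])),
]

def walk(ctx):
    """Lead the user through the chart, entering only boxes whose guard holds."""
    done = []
    for name, pre in steps:
        if not pre(ctx):
            continue  # box is not applicable in the current state
        done.append(name)
        if name == "select_and_prioritise":
            # stand-in for the COLOR ranking module (post-condition of the box)
            ctx["ranking"] = sorted(ctx["components"])
    return done

ctx = {"previous_results": ["1986-87"], "components": ["#507", "#813"], "ranking": []}
print(walk(ctx))  # a first inspection (no previous results) skips the review box
```

The real tool additionally records post-conditions and lets each box expand into a sub-chart; the skeleton above only shows how guards steer the path.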

[Figure 5: "One component level": (5.1) change in service conditions ("significant" observed or expected changes)? (5.2) recalculation necessary? (5.3) recalculate remaining life for COLOR input; (5.4) re-run COLOR in order to obtain a new value of "Quantified past history" / "Component priority" for ISTRA; (5.5) run ISTRA (decision node "B") in order to choose the inspection strategy; (5.6) perform testing acc. to the table "Methods and extents"; (5.7) if no significant indications are found, delete this location from the inspection programs but reconsider after 100,000 h or if there are damage indications nearby; (5.8) if significant indications (macrocracks, microcracks, cavities) are found, perform detailed analysis; (5.9) check permanent strain, microstructure, hardness and oxidation; (5.10) show the recommended time for the next inspection and add the results to the inspection plan (matrix).]

[Figure 6: Recommended actions for macrocracks, microcracks and cavities, by maximum damage type and damage class acc. to Nordtest: depending on the class, reinspection within 10 to 30 kh (small amounts of class 2.1 cavities acceptable up to 30 kh), repair/removal of macrocracks and microcracks with cavities, consideration of replacement particularly for bends, and reinspection of any repairs within 10 kh; where the materials are 13CrMo44 or 10CrMo910 and the service time exceeds 100 kh, the periods can be doubled.]

[Figure 7: Permanent strain, microstructure, hardness or oxidation: (5.9.1) permanent strain e observed? If e > 0.1% or unknown, consider repair/replacement if near ec; (5.9.2) unexpected microstructure, hardness or oxidation found? (5.9.3) find cause, restore or retest acc. to the observed strain level; (5.9.4) T more than 10°C above the expected average service temperature, or doubts about toughness? (5.9.5) correct, or set retest schedules acc. to the calculated tc = t(ec/e) as in the case of observed strain; (5.9.6) reformulate the recommendation (next inspection time) obtained from step (5.8).]

The inspection planning procedure for this example follows the steps below:

Step 1 Is this the first inspection for this system? (Yes/No). User selects [Box (1)] "No", since there are previous results from 1986-87.
Step 2 Access to the database of previous inspections. User selects the [Box (2)] appropriate inspection data and reviews the inspection results. According to the "Results of previous inspections" criterion, the components with ID No. #813, #815 and #507 are in a critical phase.
Step 3 Initialisation of COLOR. User selects the above mentioned [Box (3.1)] components as well as five others for further analysis. He fills all related data into Table 8.
Step 4 Running of COLOR. With the data from Table 8 and the relative [Box (3.2)] weights of each criterion, the system gives a priority to each selected component and produces a list ranked in descending priority order, headed by component No. 6.
Step 5 Which component should be analysed? (List of components). [Box (4)] According to his experience, user selects component No. 8 to determine the next inspection time.

Step 6 Change in service conditions? (Yes/No). [Box (5.1)] User selects "Yes": new temperature monitoring results show a +5°C change in average temperature for the service conditions of component No. 8.
Step 7 Is a recalculation of consumed/residual life necessary? (Yes/No). [Box (5.2)] User selects "Yes".
Step 8 With use of the calculations part module and based on the new [Box (5.3)] monitoring data, the system provides a new value for the criterion "Quantified past service history" regarding component No. 8. Table 8 is also modified.
Step 9 Running of COLOR. With the new data of Table 8, COLOR [Box (5.4)] produces a new output; the ranked list is now headed by component No. 8. Since the new priority of component No. 8 is even greater, user proceeds with the same component.
Step 10 Initialisation of ISTRA. For component No. 8 and for all possible [Box (5.5)] strategies, user fills all related data into a table (Table 6).
Step 11 Running of ISTRA. With the data from Table 6 and the relative [Box (5.5)] weights for each criterion, given from an on-line pairwise comparison, the system gives a priority value to each inspection strategy/program, in order to find out the best inspection strategy. According to these values, the standard inspection program for component No. 8 is suggested.
Step 12 The system retrieves the recommended actions related to the [Box (5.6)] standard program (tests and their extent). User performs the recommended tests.
Step 13 Significant indications of damage found? (Yes/No). User selects [Box (5.6)] "Yes", since there exist some indications.
Step 14 Maximum damage type found? (Several options). User selects the [Box (5.8.1)] appropriate option (microcracks with cavities).
Step 15 According to the recommendations, failures are repaired and [Box (5.8.5)] reinspection is scheduled within 10,000 hours.
Step 16 Permanent strain (e) observed? (e > 0.1%, e < 0.1%, unknown). The [Box (5.9.1)] observed permanent strain is less than 0.1% for this component.
Step 17 Unexpected microstructure, hardness or oxidation? (Yes/No). [Box (5.9.2)] User selects "No".
Step 18 Since no other problems were found, the next inspection time remains [Box (5.9.6)] as it was.

Step 19 The recommended time for the next inspection is added to the inspection [Box (5.10)] plan.
Step 20 Since there are other components in the COLOR list available, the [Box (6)] system returns to Step 5. The same procedure is then followed for the remaining components. The analysis is then stopped after user's decision.
Step 21 With the end of the analysis, the final output of the system is given in [Box (7)] the form of Table 7.

Table 6: Input values for ISTRA (IVO example), component No. 8

Strategy                 | Cost      | Additional difficulties due to accessibility | Implicit risk due to safety aspects | Component priority (importance)
"zero-attention" program | none      | easy      | high   | 0.1
"low-profile" program    | low       | standard  | medium | 0.2
"standard" program       | medium    | difficult | medium | 0.3
"extended" program       | high      | difficult | low    | 0.4
"extensive" program      | very high | difficult | low    | 0.5

Table 7: Final output of the system (recommendation 1993 for the example case)

Comp. No. | Component ID | Type                      | Next inspection      | Method and extent
8         | #507         | Steam mixer               | next year            | Monitoring + MT/PT for welds, UT for body, internal inspection
1         | #803         | T-piece weld              | within next 20,000 h | MT/PT 100 mm wide, RT acc. to indications
7         | #815         | Terminal weld, valve body | within next 15,000 h | MT/PT 100 mm wide, RT acc. to indications
6         | #813         | Butt weld                 | within next 20,000 h | MT/PT 100 mm wide, UT, RT acc. to indications

Table 8: Input values for COLOR (IVO example)

No. | Comp. ID | Type of component          | Typical downtime cost | Results of previous inspections | Cost of replacement [ECU] | Safety priority | Environmental priority | Qualitative past service history | Quantified past service history [equivalent hours] | Future service conditions | Alternative supply availability
1   | #802 | T-piece weld               | Medium | 1 | 25k | Medium | Low | Mild    | 110000 | No changes | Average
2   | #803 | T-piece weld               | Medium | 3 | 25k | Medium | Low | Average | 110000 | No changes | Average
3   | #202 | Pipe bend down             | High   | 1 | 10k | High   | Low | Mild    | 110000 | No changes | Average
4   | #204 | Pipe bend horizontal       | High   | 2 | 10k | High   | Low | Mild    | 110000 | No changes | Average
5   | #205 | Pipe bend down             | High   | 2 | 10k | High   | Low | Severe  | 110000 | No changes | Average
6   | #813 | Straight pipe / bend weld  | Low    | 4 | 8k  | Low    | Low | Severe  | 110000 | No changes | Average
7   | #815 | Reduction valve + welds    | Medium | 3 | 45k | Medium | Low | Mild    | 110000 | No changes | Relatively low
8   | #507 | Mixer                      | Medium | 5 | 30k | Medium | Low | Average | 110000 | No changes | Relatively low
9   | #801 | Straight pipe weld         | Low    | 2 | 8k  | Medium | Low | Average | 110000 | No changes | Average
10  | #301 | T-piece nozzle + weld      | Medium | 2 | 25k | Medium | Low | Average | 110000 | No changes | Average
11  | #201 | Horizontal bend            | High   | 2 | 10k | High   | Low | Average | 110000 | No changes | Average
12  | #804 | Straight pipe / bend weld  | Low    | 2 | 8k  | Medium | Low | Average | 110000 | No changes | Average
13  | #805 | Straight pipe weld         | Low    | 2 | 8k  | Medium | Low | Average | 110000 | No changes | Average
14  | #806 | Straight pipe / bend weld  | Low    | 2 | 8k  | Medium | Low | Average | 110000 | No changes | Average

7.3 GKM Example
In this application the whole piping from a boiler to the turbine inlet is considered. The material is 10CrMo910 with a nominal temperature of 530°C and a nominal pressure of 250 bar, and the situation is given after 200,000 service hours.

The piping consists of 67 components/locations of potential interest (T- and Y-pieces, bends, valves, corresponding welds, hangers etc.). Some of the more critical are shown in Figure 8.

[Figure 8: GKM piping]

The input values for each component are collected using the power plant data available. For components where the criteria values were not known, default values were used [7]. All collected information was then saved in a database (see Figure 9). Apart from the inspection planning input data, the database contains component/location specifications, including a detailed picture of the respective location of the piping.

Table 9 gives the input values needed for the COLOR calculation for the first 12 of the preselected components. The input values are either crisp or linguistic. For the COLOR analysis, the linguistic statements must be transformed to fuzzy numbers using appropriate membership functions.

After this analysis, the output is a list of components ranked by their priority for inspection. A graphic illustration of the COLOR results for the GKM piping is shown in Figure 10. The result value shown in this figure corresponds to the priority of each component: since it integrates all criteria inputs, the most critical component achieves the highest result value. In the GKM example, this component was the valve DH 14-1a.
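The transformation of linguistic statements into fuzzy numbers can be sketched with triangular membership functions on a normalised scale. The shapes below are assumptions made for the illustration; the membership functions actually used for the COLOR analysis are not specified in the paper.

```python
# Sketch of mapping linguistic input values ("low", "medium", ...) onto
# triangular membership functions on a normalised 0-1 scale. The particular
# shapes are illustrative assumptions, not the ones used in COLOR.

TFN = {  # (left foot, peak, right foot)
    "low":            (0.00, 0.00, 0.35),
    "relatively low": (0.10, 0.30, 0.50),
    "average":        (0.30, 0.50, 0.70),
    "medium":         (0.30, 0.50, 0.70),
    "high":           (0.65, 1.00, 1.00),
}

def membership(term, x):
    """Degree to which the crisp value x belongs to the linguistic term."""
    l, m, u = TFN[term]
    if x < l or x > u:
        return 0.0
    if x == m:
        return 1.0
    if x < m:
        return (x - l) / (m - l)   # rising flank
    return (u - x) / (u - m)       # falling flank

print(membership("average", 0.4), membership("high", 0.8))
```

Adjacent terms deliberately overlap, so a statement such as "relatively low" contributes partially to the scores of neighbouring categories during the fuzzy aggregation.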

Table 9: Input values for COLOR (GKM example). Criteria (kind of input, weight; all maximised): typical downtime cost (fuzzy, 70), results of previous inspections (crisp, 100), cost of replacement in 1000 ECU (crisp, 70), safety priority (fuzzy, 40), environmental priority (fuzzy, 40), qualitative past service history (fuzzy, 60), quantified past service history in equivalent hours (crisp, 30), future service conditions (fuzzy, 80), alternative supply pattern (fuzzy, 80).

No. | Component                           | Downtime | Prev. insp. | Repl. cost | Safety | Env. | Qual. history | Quant. history | Future conditions | Alt. supply
1   | 200 - Montagenaht am Kesselaustritt | medium | 6 | 15 | high   | high | average | 111000 | no changes | relatively low
2   | 101 - Montagenaht                   | low    | 4 | 15 | medium | high | average | 200000 | no changes | relatively low
3   | B1 - Montagenaht                    | low    | 4 | 15 | medium | high | average | 200000 | no changes | relatively low
4   | 4 - Bogen                           | medium | 4 | 75 | high   | high | average | 200000 | no changes | relatively low
5   | 201 - Montagenaht am Kesselaustritt | medium | 5 | 15 | medium | high | average | 0      | no changes | relatively low
6   | B2 - Montagenaht                    | low    | 4 | 15 | medium | high | average | 200000 | no changes | relatively low
7   | 202 - Bogen                         | medium | 6 | 75 | high   | high | average | 170000 | no changes | relatively low
8   | B3 - Montagenaht                    | low    | 4 | 15 | medium | high | average | 200000 | no changes | relatively low
9   | 109 - Werkstattnaht                 | low    | 2 | 15 | medium | high | average | 200000 | no changes | relatively low
10  | 154 - Bogen                         | medium | 4 | 75 | high   | high | average | 200000 | no changes | relatively low
11  | 114 - Werkstattnaht                 | low    | 2 | 15 | medium | high | average | 200000 | no changes | relatively low
12  | 203 - Bogen                         | medium | 4 | 75 | high   | high | average | 200000 | no changes | relatively low

[Figure 9: Database interface for inspection input data (Microsoft Access form showing a component, "Montagenaht am Kesselaustritt", with its inspection planning input data and location drawing)]

[Figure 10: Graphic illustration of the result of the ranking tool COLOR]

As already shown in the IVO example, the next step when using the decision support system is to establish the appropriate inspection strategy using the ISTRA module, and to perform the recommended tests. The developed database, enabling a feedback of all the information gathered with the various techniques applied, interacts with the whole decision support system.

8. Conclusions
The applications of the decision support system in the IVO and GKM power plants confirmed the capability of the system to efficiently use the experience of local domain experts and the service history to quickly make a first draft of the inspection plan. In the future, the system will be coupled with an NDT database and used primarily for preliminary screening and "drafting" of the annual inspection plans. Experts' revision of these drafts will remain a mandatory part of the overall procedure. The overall system represents a helpful tool for the maintenance of power plant structures.

9. Acknowledgements
Some of the work presented in the paper has been accomplished within the European Union research projects SPRINT SP249 and BRITE-EURAM BE5935. In addition, some of the results have been achieved under the Brite-Euram Fellowship Contract No. BRE-CT93-3039 (fellowship for the stay and research of Mr. Psomas at MPA Stuttgart). This support is gratefully acknowledged here.

10. References
1. Auerkari, P., Borggreen, K., McNiven, U., Ellingsen, H., Rönnberg, G., Salonen, J.: "Reference Micrographs for Evaluation of Creep Damage in Replica Inspections", NT Technical Report 170, Nordtest, Espoo, 1992.
2. Auerkari, P.: "Guidelines for Inspection Criteria of Hot Pipework", VTT Metals Laboratory, Espoo, 1993.
3. Buckley, J.J.: "Ranking Alternatives Using Fuzzy Numbers", Fuzzy Sets and Systems 15, North-Holland, 1985, pp. 21-31.
4. Buckley, J.J.: "Fuzzy Hierarchical Analysis", Fuzzy Sets and Systems 17, North-Holland, 1985, pp. 233-247.
5. Jovanovic, A., Auerkari, P.: "Decision Making and Uncertainty in Life Assessment and Management of Power Plant Components", to be presented at the Baltica Conference, 1995.
6. Jovanovic, A., Psomas, S.: "Decision Making for Power Plant Component Inspection Scheduling", Report on Task 4.3 of BE-5935 Project RESTRUCT (Decision-Making for Requalification of Structures), Document TEC-T4-01, MPA Stuttgart, 1994.
7. Jovanovic, A., Psomas, S., Weber, H., Lieven, K., Vereist, De Witte, Kautz, H., Schwarzkopf, W., Zimmermann, H.-J.: "Decision Support System for Planning of Inspections in Power Plants, Part II: Application in GKM and IVO Power Plants", SPRINT SP249 Technical Report, Document BE3088/89, MPA Stuttgart, 1995.
8. Psomas, S., Jovanovic, A.: "Multi-Criteria Decision Making: Modelling Technology", Report on Task 3.1 of BE-5935 Project RESTRUCT (Decision-Making for Requalification of Structures), Document TEC-T31-01, MPA Stuttgart, 1994.
9. Saaty, R.W.: "The Analytic Hierarchy Process: What It Is and How It Is Used", Math. Modelling, Vol. 9, No. 3-5, 1987, pp. 161-176.
10. "Guideline for the Assessment of Microstructure and Damage Development of Creep Exposed Materials for Pipes and Boiler Components", VGB-TW507e, VGB, Essen, 1992.
11. "Wiederkehrende Prüfungen an Rohrleitungsanlagen in fossilbefeuerten Wärmekraftwerken" (Recurring Inspections of Piping Systems in Fossil-Fired Thermal Power Plants), VGB-R 509 L, VGB, Essen, 1990.


THE 'PLUS' SYSTEM FOR OPTIMISED O&M OF POWER AND PROCESS PLANT

B J Cane, G T Jones, J D Sanders, R D Townsend
ERA Technology Ltd, Cleeve Road, Leatherhead, Surrey KT22 7SA, United Kingdom

ABSTRACT

The increasing importance of flexible operating capabilities for modern power and process plant means that thick section components such as steam headers or reactor vessels operating at high temperature in the creep regime now experience more temperature transients, which introduce thermo-mechanical fatigue as an additional damage mode interacting with creep to limit the life of the components. Experience has shown that even plant that has been designed for cyclic operation can fail by creep-fatigue mechanisms induced by operational transients not allowed for in design. Market forces are now demanding that plant originally designed for base-load operation operate more flexibly. Competition is also forcing operators to reduce costs by demanding increased run times between outages and reduced maintenance schedules. All these factors make once-off investment in all forms of condition monitoring increasingly attractive.

This paper describes a system designed to monitor thick section, high temperature components on-line for creep-fatigue degradation. The particular concern in this case is ligament cracking in steam headers arising from increased cyclic operation. PLUS is a unique system in which on-line monitoring of operating parameters is integrated with off-line condition inspection data to provide accurate real-time optimisation of component life usage. This paper provides a description of a PLUS system, with reference to a case study application. Key issues regarding susceptibility to cracking and meaningful life monitoring are given to demonstrate the benefits of on-line life surveillance.
1 Introduction

There is a growing awareness by power plant operators of the benefits to be gained by applying on-line plant condition monitoring techniques. ERA therefore has developed a system known as the Plant Life Usage Surveillance system (PLUS) to assess the creep-fatigue life utilisation in thick section components.

ERA's Plant Life Usage Surveillance (PLUS) System was originally designed to address the problem of ligament cracking of steam headers prevalent in Europe and the US. It is, however, equally applicable to other power plant components such as main steam pipework, chests and casings, and, for process plant, thick section reactor vessels operating in the creep range. There is also considerable interest in applying it to Heat Recovery Steam Generators (HRSG), which are notoriously susceptible to creep-fatigue failures.

2 Scope of the PLUS System

PLUS is a fully integrated on-line system, with real time data monitoring and processing, providing periodic on-line analysis with facilities for integration of off-line inspection/interrogation data. The basic function of PLUS is to convert signals, obtained from sensors (usually just thermocouples and pressure transducers) strategically connected to critical locations on plant, to local stress and strain values in real-time. This is achieved by the incorporation of component specific temperature/pressure to stress calibration based on off-line FE analysis, using operator specific on-line data. Using built-in advanced algorithms these values are converted and summed to give a realistic measure of damage accumulation in real-time, or at convenient periodic intervals. It therefore serves as both a life usage monitor and an operations adviser (or alarm system), and thereby may be utilised as a damage controller. It may also be used as a simulator to assess the likely effects of changes in operation mode, and as a maintenance planning tool.

Life prediction algorithms are implemented according to the components and degradation processes being monitored. Depending on the requirements of the application, future PLUS systems will be made to monitor and predict crack propagation, or monitor cracked components using sensors to signal local failures.

The scope of the PLUS System, developed on a UNIX Workstation, is shown in Fig. 1. It is fully customised for each application and enables plant specific geometry, design and history to be accommodated together with the specific local operational behaviour. Its precise structure is therefore dependent upon the nature of the existing facilities. This figure highlights the key components which facilitate various features, the more important ones being:

1 Data capture module connects the PLUS system with the site sensor data collection system. This module is also responsible for filing the data in time order, such that any file can be retrieved by means of a time and date identity.

2 Data validation module interrogates the sensor signals, applies a number of consistency checks and marks data deemed to be faulty.

3 The database module holds the relevant aspects of the component geometry, the component specific stress functions and selected inspection data pertinent to the assessment.
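A common engineering formulation of such real-time damage summation (assumed here for illustration; the proprietary PLUS algorithms are not published in this paper) is the linear accumulation of creep life fractions via the time-fraction rule plus fatigue cycle fractions via Miner's rule, D = sum(t_i / t_r(sigma_i, T_i)) + sum(n_j / N_f(delta_eps_j)). A toy sketch with stand-in material models:

```python
import math

# Hedged sketch of creep-fatigue damage accumulation by linear life-fraction
# summation (time-fraction creep rule + Miner's rule for fatigue). The
# rupture-life and endurance models and all constants below are illustrative
# stand-ins, not ERA's PLUS algorithms or real material data.

def creep_rupture_hours(stress_mpa, temp_c, c=20.0, p0=25300.0, slope=2800.0):
    """Toy Larson-Miller style rupture life: LMP = T * (C + log10 t_r)."""
    t_k = temp_c + 273.15
    lmp = p0 - slope * math.log10(stress_mpa)  # assumed material line
    return 10.0 ** (lmp / t_k - c)

def fatigue_cycles(strain_range, a=0.5, b=-0.5):
    """Toy Coffin-Manson style endurance: strain_range = a * N**b."""
    return (strain_range / a) ** (1.0 / b)

def damage_increment(hours, stress_mpa, temp_c, cycles=0, strain_range=0.0):
    """Damage fraction added by a steady period plus any thermal cycles."""
    d = hours / creep_rupture_hours(stress_mpa, temp_c)  # creep life fraction
    if cycles:
        d += cycles / fatigue_cycles(strain_range)       # fatigue cycle fraction
    return d

# One month of steady running at 60 MPa / 540 C plus two hot starts:
D = damage_increment(720.0, 60.0, 540.0, cycles=2, strain_range=0.004)
print(f"accumulated damage fraction this month: {D:.2e}")
```

In an on-line monitor this increment would be evaluated per sampling interval from the calibrated stress/temperature signals and added to the stored damage history, with an alarm threshold well below D = 1.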

Using template objects, it allows customers to process additional monitoring points on components of similar geometry. It also holds the damage history for all monitored components.

4 The display module allows the operator to select a location on a component and to display the stress at that location in real-time.

5 The life analyses modules in PLUS are determined by the components and degradation mechanisms being monitored. The analysis modules are periodically activated by the operator to assess the consumed life based on newly available on-line plant data and the last calculated life usage for each component.

3 Nature and Incidence of Ligament Cracking

Ligament cracking develops from the inside crotch corner of the header and tube intersection and propagates along both the header and the tube internal walls. The classical form of ligament cracking occurs where the adjacent tube penetrations are closely spaced such that crotch corner cracks propagate across the ligament, illustrated in Fig. 2. This classic form is particularly severe since it can result in catastrophic failure: ligament cracking along the circumferential direction may result in fast fracture with the header breaking in two. Localised cracking with a star burst distribution, shown in Fig. 3, where the cracks radiate out from the perimeter of the tube hole, is associated with isolated or more widely spaced penetrations. Starburst cracking is unlikely to cause catastrophic failure but may cause steam leaks. Where adjacent element rows are closely aligned along the length of the header, ligament cracks may also develop in the axial direction, where the crack surfaces are normal to the dominant hoop pressure stress. This again raises the possibility of catastrophic failure.

All studies support the pattern of development in which multiple cracks first initiate from the inside corners of the tube penetrations, but growth is dominated by the primary cracks which propagate across the ligament and towards the outer wall. European experience indicates that crack initiation occurs relatively early (10,000 to 20,000 hours), with a relatively long propagation period generally exceeding 50,000 hours. This is contrary to the US experience, where crack initiation occurs much later (over 100,000 hours of operation) and crack propagation is generally much more rapid. The difference in behaviour may be attributed to differing operating practices.

Ligament cracks removed from service (both in the US and Europe) have been found, without exception, to be consistent with high strain thermal fatigue generated by severe thermal transients. That is, the cracks are straight, transgranular, gaping and oxide filled, with no associated creep damage. Oxide notching has been proposed as a crack initiation mechanism; however, this is not supported by European investigations.
Although no creep damage is observed for headers operating in the creep regime, creep relaxation of high transient stresses contributes to the crack initiation and propagation mechanisms. Assessment methods are therefore based on creep-fatigue analysis.

3.1 Factors Affecting Susceptibility to Ligament Cracking

Analysis of the available data on the incidence of ligament cracking in Europe (Table 1) and the US (Table 2) reveals that the European and US experience exhibits common factors, as summarised below:

♦ Header Type. Superheater outlet headers were found to be susceptible to ligament cracking. The US and European experience indicates that secondary and final superheater outlet headers were the most susceptible. There is also European experience, but little US experience, of cracking in primary and interstage superheaters.

♦ Boiler Maker and Unit Size. The European experience indicates that all makes and sizes of unit are susceptible to ligament cracking, with some manufacturers' components being inexplicably more vulnerable. Larger units also show increased incidences of cracking.

♦ Header Geometry. The susceptibility to cracking increases with decreasing ligament width and with increasing wall thickness, with all headers exceeding a certain thickness exhibiting cracking.

♦ Operating Hours and Starts. Figures 4 and 5 show graphs comparing observed cracking data in Europe with plant operating hours and number of starts respectively. No correlation is evident between incidences of cracking and operating hours or number of operating cycles. Cracking has been observed after comparatively few cycles, less than 500, contradicting the view that ligament cracking is a two-shifting problem; this is supported by US experience.

3.2 Predictive Assessment and the Requirement for Plant Monitoring

The significance of ligament cracking on plant integrity and the benefits of life monitoring are demonstrated by consideration of the background to the problem. Concern about the incidences of ligament cracking led to quantitative assessments being carried out on ex-service headers in Europe. Finite element analyses were carried out for typical service start-up and shut-down cycles, and crack initiation times and propagation rates were determined using high strain fatigue cyclic endurance and creep ductility exhaustion models. The analyses predicted considerably longer crack initiation endurances and much slower crack growth rates than those determined by oxide dating techniques applied to the removed samples. The conclusion was that additional cycles were present that were not considered in the analysis.

Temperature monitoring to investigate the cause of the thermal cycles responsible for ligament cracking has confirmed that temperature ramp rates associated with normal two-shifting operating cycles are insufficient to generate the plastic strain ranges required to account for observed cracking. However, much more severe local transients were identified under certain operating conditions. Analysis of monitored data indicated that major contributors to ligament cracking are:

♦ Emergency shut-downs following tube leaks
♦ Spraying operations during cold start-ups
♦ Temperature cycles during hot starts (for example, problems with coal flow and mills)

The fact that thermocoupling and continuous monitoring of vulnerable headers confirmed the occurrence of previously undetected transients demonstrates the importance of on-line monitoring for accurate life prediction for headers.

4 Case Study

The PLUS system addressed here was commissioned to monitor the creep-fatigue damage accumulation and predict crack initiation in the platen, final and reheater outlet stub headers and manifolds, as well as to monitor creep-corrosion damage in the associated boiler tubing, in 8 boiler units of 350 and 650 MW.

4.1 Plant Monitoring Requirements

Inlet steam temperatures are normally obtained from thermocouples on inlet tubes. The accuracy of the estimate and the definition of critical locations are improved as the number of tubes being monitored increases. Additional thermocouples are also required to provide back-up in the event of thermocouple failure.

Thermocouples were installed on the two lead units of the case study plant. The selection of the thermocouple locations was based on the identification of critical components using previous analyses and taking existing thermocouple locations into account. The thermocouples installed provided preliminary surveillance data for the finite element analyses performed during PLUS customisation. A selection of these thermocouples was then used for the on-going monitoring by PLUS, ensuring that sufficient redundancy is provided.

4.2 Stress Calculation

For the purposes of PLUS it is assumed that the ligament stresses can be uniquely determined from a number of temperature differences and rates of temperature change within the header. The process undertaken to generate the stress functions used by PLUS is illustrated in Fig. 6:

1. Surveillance data from the thermocouples and pressure transducers are processed and analysed to identify the thermal boundary conditions.

2. A wide range of thermal transients, using realistic heat transfer coefficients, steam ramp rates and temperature changes, is analysed for each geometry, covering typical and exceptional transients experienced by the components.

3. Each geometry under consideration is modelled using 3-D finite element analysis techniques. The outputs of the thermal analyses are compared with plant data, and the thermal boundary and heat transfer conditions are refined until an optimum correlation is achieved. Figure 7 provides an example of the comparison of the FE thermal transient results and measured temperatures.

4. Finite element stress analysis is then performed for the thermal transients. The hoop stress at a crotch corner location under a hot start condition is shown in Fig. 8.

5. Multiple parameter linear regression analysis is then carried out to relate the ligament stresses from step 4 to the temperature differences and rates of temperature change. This analysis produces relationships (stress functions) of the type σ = F(ΔTi, dTi/dt), where ΔTi is a temperature difference and dTi/dt is a rate of temperature change. An example of the stress functions so developed is given in Fig. 9.

For the case study, analysis of 32 geometries was required. Some of the thermocouple surveillance data are also used to validate the output of the FE heat transfer analyses; the outputs of these analyses provide inputs to, and validation of, the stress functions generated in step 5.
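The regression in step 5 can be sketched as follows. This is an illustrative reconstruction, not the PLUS implementation: the predictor choice (one temperature difference and one ramp rate), the synthetic "FE" data and the coefficient values are all assumptions.

```python
# Sketch of step 5: least-squares fit of a linear "stress function"
# sigma = c0 + c1*dT + c2*dTdt to FE-computed ligament stresses.
# Data and coefficients are hypothetical, for illustration only.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_stress_function(dT, dTdt, sigma):
    """Fit sigma ~ c0 + c1*dT + c2*dTdt via the normal equations."""
    X = [[1.0, t, r] for t, r in zip(dT, dTdt)]
    XtX = [[sum(row[a] * row[b] for row in X) for b in range(3)] for a in range(3)]
    Xty = [sum(X[i][a] * sigma[i] for i in range(len(X))) for a in range(3)]
    return solve(XtX, Xty)

# Synthetic "FE results" generated from a known relation, so the fit
# should recover c0 = 5, c1 = 2, c2 = 30 exactly.
dT    = [10.0, 20.0, 35.0, 50.0, 65.0]
dTdt  = [0.1, 0.5, 1.0, 1.5, 2.0]
sigma = [5.0 + 2.0 * t + 30.0 * r for t, r in zip(dT, dTdt)]
c0, c1, c2 = fit_stress_function(dT, dTdt, sigma)
```

In service the fitted coefficients would then be evaluated against live thermocouple data, which is what makes a real-time stress display feasible without running FE on-line.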

These functions enable direct calculation of stress, and therefore strain, from measured temperature values obtained during PLUS operation, providing input to the creep-fatigue calculation. The stress functions are implemented in the PLUS system to provide real-time variations of the ligament stresses, as shown in Fig. 10. These real-time displays provide the operator with an instantaneous output of the ligament stress, which can be compared with a stress limit set to prevent crack initiation in a specified number of starts. Stress limits can be periodically updated by PLUS using the latest calculations of life usage.

4.3 Creep-fatigue Damage Calculation

The methodology adopted for PLUS assumes that any arbitrary cycle can be separated into an elastic-plastic cyclic component and a stress relaxation dwell component. The elastic-plastic cycling causes low cycle fatigue damage, and stress relaxation causes creep damage by a process of ductility exhaustion. Both of these components contribute to the overall damage, which is calculated using the linear damage summation rule. In carrying out the analysis, each transient is resolved into discrete cycles.

The hysteresis loop for each cycle is constructed from the strain-time data generated by the stress functions by means of the offset zero form of the Ramberg-Osgood equation

    ε = σ/E + A·σ^β

where A and β are temperature and strain rate dependent materials parameters.

The fatigue damage component, Df, is obtained from the relationship

    Df = 1/Nf

where Nf is the fatigue endurance as a function of the total strain range of the cycle, εt, described by means of a suitable parametric equation.

The creep damage component, Dc, is calculated by means of the ductility exhaustion approach using

    Dc = ∫0^td (ε̇(t)/εf) dt

where td is the dwell time, εf is the appropriate creep ductility, and ε̇(t) is the instantaneous strain rate obtained from the stress relaxation relationship

    σ = σ0[1 − β′·ln(t + 1)]

from which

    ε̇(t) = −(1/E)·dσ/dt = σ0·β′ / [E·(t + 1)]

with σ0 the peak stress at the start of the dwell, t the time, and β′ a temperature dependent materials parameter.

The total creep-fatigue damage for each cycle, Dt, is calculated using the linear damage rule

    Dt = Df + Dc

4.4 Creep-fatigue Crack Initiation Program

The creep-fatigue crack initiation program built into the PLUS system consists of algorithms for the evaluation of the above two types of damage mechanism; the damage assessment route is shown in Fig. 11. The modules read in the time and temperature data collected by the monitoring system and calculate the associated stress and strain using the stress functions. After resolving the data into cycles and dwells, the two algorithms establish the LCF damage and creep damage components for each identified cycle and dwell period respectively. The total damage for each location is established and stored in the PLUS database.

The creep-fatigue life analyses are performed periodically by PLUS using past life estimates and newly available plant data. PLUS is set up to automatically update the life estimate on a monthly basis. The life analysis may also be performed at any time upon user instruction, and the operator may initialise it in two ways. A commit may be initiated where new off-line data is supplied: the results are stored in the database, and the life estimates are updated based on these and the latest available on-line data. Alternatively, life calculations may be performed at any time using the latest available data; the results are displayed to the operator but not stored. Cyclic life usage calculated from monitored data is shown in Fig. 12.

4.5 Steady State Creep Damage Calculation

For periods of steady operation the accumulation of steady state creep damage is determined in PLUS using

    Dc = Σ(t/tr) + Dc,init

where tr is the allowable rupture time at the current operating temperature and stress, t is the time for which the operating temperature and stress remain constant, and Dc,init is defined by a clock setting exercise using inspection and condition assessments.

σref is the reference stress, calculated for each critical location on the monitored components using inverse design procedures.

4.6 Integration of Inspection Data in PLUS

The relationship between the calculational assessment route resident in the monitoring system and off-line inspection results should be reciprocal: besides the use of PLUS to guide on inspection locations and times, it is possible to use inspection data to refine the system analyses. Since the creep life prediction algorithm can predict damage or strain evolution as well as final failure, the predictions can be compared with observed damage or strain measurements. The case study PLUS system is set up to enable quantitative microstructural damage assessments made during an inspection to refine the creep-fatigue damage assessment by PLUS.

In setting up the monitoring system, assumptions were made regarding the position of a material in its property scatter band and the evaluation of the reference stress. Any differences between the life fraction consumed determined by the reference stress technique used by PLUS and that determined by off-line quantitative damage assessment will be due to materials properties and/or the system stress uncertainty, i.e. system loads acting to increase or decrease the stress. Since rupture life is governed by the ratio of stress to strength, it is not necessary to know whether this difference is due to stress or materials effects. A simple stress correction factor calculated from the observed differences can be applied to future PLUS calculations as a modification to the stress/strength ratio, thus scaling calculated lives to the observed damage accumulation rate.

4.7 Monitoring Crack Propagation

Where cracks are detected in a component, or predicted by PLUS, crack propagation may be monitored by PLUS. Crack monitoring utilises creep-fatigue crack propagation algorithms based on linear summation of cyclic fatigue and creep damage.
The cyclic component is obtained from fatigue crack growth laws utilising stress intensity factor range (ΔK) solutions for the defect geometry:

    (da/dN)f = A·(ΔK)^m

where A and m are materials properties, and the creep component is obtained from creep crack growth laws utilising C*:

    (da/dN)c = ∫ (da/dt) dt
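A minimal numerical sketch of this linear summation of fatigue and creep crack growth follows. The constants for both growth laws, the ΔK value and the sampled C* history are illustrative assumptions, not values from the paper.

```python
# Sketch of per-cycle crack growth: Paris-type fatigue term plus creep term
# integrated over the dwell.  All constants and inputs are hypothetical.

A_PARIS, M_EXP = 1.0e-8, 3.0   # fatigue law: da/dN = A * dK**m (assumed)
B_CREEP, K_EXP = 2.0e-5, 0.8   # creep law:   da/dt = B * Cstar**k (assumed)

def growth_per_cycle(delta_k, cstar_history, dt):
    """da/dN = (da/dN)_fatigue + (da/dN)_creep for one cycle.

    delta_k       : stress intensity factor range for the cycle
    cstar_history : sampled C* values over the dwell period
    dt            : time step between C* samples
    """
    da_fatigue = A_PARIS * delta_k ** M_EXP
    da_creep = sum(B_CREEP * c ** K_EXP * dt for c in cstar_history)
    return da_fatigue + da_creep

# One hot-start cycle with a 1-time-unit dwell sampled at 100 points
da = growth_per_cycle(30.0, [0.02] * 100, 0.01)
```

Repeating this per monitored cycle, and updating ΔK and C* as the crack length grows, gives the kind of propagation tracking a monitoring system can run between inspections.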

with

    da/dt = B·(C*)^k

where B and k are materials properties. The total crack growth per cycle is obtained from

    da/dN = (da/dN)f + (da/dN)c

In cases where a leak-before-break situation is predicted, monitoring for steam leaks using acoustic emission (AE) provides a practical, safe alternative to the above algorithm-based approach. Leaks may be detected by AE significantly sooner than the effects can be observed by normal plant operating systems. In this case the acoustic sensors are interfaced with PLUS, allowing the system to raise alarms in the event of a leak.

5 Conclusion

Temperature surveillance and life monitors have been demonstrated to be a very effective means of monitoring high temperature components subject to thermal cycling. Wide experience of potentially catastrophic ligament cracking in headers has shown the damage to be attributable to normally undetected thermal transients. ERA has developed a PLUS system which provides real-time temperature monitoring and processing, and periodic damage and life assessment. The system also enables off-line inspection results to be used to refine the analysis. The customisation process and life monitoring functions of PLUS have been illustrated by a case study application. The PLUS system addressed in this paper is currently being delivered to the clients.

6 Acknowledgements

This paper is published with the permission of ERA Technology Ltd.

7 References

V A Annis, C M Jeffery and J M Brear, "On-line Creep-Fatigue Monitoring of Steam Headers," IMechE Seminar 'Load Cycling, Plant Transients and Off-design Operation', London, April 1991

C M Jeffery and G T Jones, "Software Requirements for On-Line Condition Assessment in Power Stations," Proc. 11th Conf. 'Electrical Power Stations', Liège, Sept. 1993

R Viswanathan, "Life Assessment of High Temperature Components - Current Concerns and Research in the US," Proc. Conf. 'Life Assessment of Industrial Components and Structures', Cambridge, Sept. 1993; ERA Report 93-0690

EPRI Report, 'An Integrated Approach to Life Assessment of Boiler Pressure Parts', EPRI Project RP 2253-10

Table 1: Summary of Inspections for Ligament Cracking in a European Utility

Header Type              Number     Number    % Cracked   Maximum Depth,
                         Inspected  Cracked               % wall thickness
All Superheater Outlet   80         33        41          100
  Primary                9          3         33          100
  Interstage             28         8         29          100
  Secondary and Final    43         22        51          69
All Superheater Inlet    32         2         6           20
All Reheater             18         0         0           N/A
All Headers              130        35        27          100

Table 2: Summary of Ligament Cracking Experience in the US

Header Type                   Number     Number    % Cracked
                              Inspected  Cracked
Secondary Superheater Outlet  157        44        28
  1 1/4 Cr                    73         26        36
  2 1/4 Cr                    76         17        22
  Op. T ≥ 1050C               14         6         43
Reheater Outlet               118        2         2
All Others                    101        4         4


Fig. 2: 'Classic' ligament cracking in a boiler header at an advanced stage of development

Fig. 3: 'Starburst' ligament cracking at an isolated tube stub penetration inside a boiler header

Fig. 4: Susceptibility to ligament cracking as a function of the number of operating hours (all superheater outlet headers; operating hours in thousands; series: secondary and final, primary and interstage)

Fig. 5: Susceptibility to ligament cracking as a function of the number of starts (all superheater outlet headers; starts in thousands; series: secondary and final, primary and interstage)



[Flow chart: plant surveillance data → filtering and plotting → thermal boundary conditions and data for FE; generation of FE models → thermal transient analyses; comparison of plant and FE results → if agreement ('yes') → stress analysis of thermal transients → generation and verification of stress functions]

Fig. 6: Finite element analysis route for the generation of stress functions during customisation of PLUS

Fig. 7: Comparison of FE thermal analysis results and thermocouple data - reheater outlet manifold body
(Temperature plotted against time, 4000-15000 s; curves: FE manifold, manifold thermocouple, estimated steam)

Fig. 8: Example of FE thermal stress contours at header crotch - reheater outlet manifold inlet stub model (ANSYS 5.0 contour plot)


Platen superheater outlet header (crotch corner position) - Units B1, B3 and B4, Element 9, Tubes 3-34

T_STEAM = (T3 + T4 + T66 + T67)/4
σ = C1·T10 − C2·T_STEAM + C3 − C4,  or  σ = C1·T9 − C2·T_STEAM + C3 − C4
σ = C1·T9 − C2·T_STEAM + C3 − C4,   or  σ = C1·T8 − C2·T_STEAM + C3 − C4

Fig. 9: Example stress functions generated for a critical header ligament
(The numbers have been changed for confidentiality purposes; thermocouple layout and header/tube dimensions omitted)


PLUS System Stress Display

Fig. 10: PLUS System real-time display of monitored temperatures and ligament stresses derived from stress functions










Fig. 11: Creep-fatigue damage assessment route



TECHNICAL AND ECONOMICAL FEASIBILITY STUDY OF THE ELECTRON BEAM PROCESS FOR SO2 AND NOx REMOVAL FROM COMBUSTION FLUE GASES IN BRAZIL

POLI, D.C.R.*, RIVELLI, V.*, VIEIRA, J.*, ZIMEK, Z.A.**

*INSTITUTO DE PESQUISAS ENERGÉTICAS E NUCLEARES - IPEN-CNEN/SP, Travessa "R", 400 - Cidade Universitária, 05508-900 - São Paulo - SP - Brasil
**INSTITUTE OF NUCLEAR CHEMISTRY AND TECHNOLOGY - POLAND
***COMPANHIA DE TECNOLOGIA DE SANEAMENTO AMBIENTAL - CETESB/SP

ABSTRACT

The release of toxic gases into the atmosphere, mainly SO2 and NOx, has been the object of many discussions all over the world, mainly because of acid rain, resulting in international programs of research for the development of efficient flue gas removal techniques. Among the flue gas treatment methods, the process of electron beam irradiation has shown to be promising. Under irradiation, those gases are simultaneously removed from the combustion gases. In the presence of ammonia, the byproduct of the process is ammonium sulfate and ammonium nitrate, which after filtration can be used as a fertilizer. The process has been investigated in Japan, Poland, China, Germany and the USA. In past years the use of fossil fuels with high sulfur content in Brazilian industrial installations has grown, and estimates indicate such growth will be continuous. Data concerning the present state of the process, along with the design and implementation of a laboratory pilot plant for the electron beam flue gas removal process located at IPEN-CNEN/SP, are presented.

1. INTRODUCTION

Sulfur oxides are created and exhausted into the air when fossil fuels that contain sulfur (coal, oil and natural gas) are burned. Nitrogen oxides are formed when nitrogen and oxygen are burned with fossil fuels at high temperature. In the atmosphere these oxides form acids, which fall to earth as acid rain or snow. The acid rain affects buildings and monuments, as can be seen in many European cities. Trees, crops and plants may be hurt. Some acid can be transported far away from industrialized zones and cross international borders to ruin the environment in non-urban areas. As a result, lakes and forests are being damaged in certain parts of Central Europe, the Northeastern United States and Eastern Canada.

The air pollution in Europe is particularly severe. Poland, which produces energy mainly from pit and brown coal, is a big producer of these pollutants. These are the reasons why stricter control of SO2 and NOx emissions has become internationally recognized as a global problem, and many countries have set limits for the discharge of pollutants, setting more and more limits of emission; SO2 and NOx are listed among them (5). Due to the environmental regulations enacted, the development of a technique able to remove toxic gases has become essential. There exists consequently a strong need for air pollution control technology in order to improve this situation.

Numbers regarding the NOx emission should be multiplied by a factor of 2.9, since nitrogen dioxide forms a much stronger acid, becoming harmful to the environment (2). The stricter control of NOx and SO2 pollutants, which is being enforced in many countries, provokes an impact in the development of low cost NOx/SOx control technology as alternatives to the existing ones: SCR (Selective Catalytic Reduction) for NOx and FGD (Flue Gas Desulphurization) for SO2 control.

2. CONVENTIONAL METHODS FOR SO2 AND NOx REMOVAL

Several FGD methods have been developed up to now. The methods can be divided into several categories: dry, wet, and with sulfur recovery system.

Dry scrubbers: LSD - Lime Spray Dryer; DSI - Duct Sorbent Injection; DSD - Duct Spray Drying; FSI - Furnace Sorbent Injection; EI - Economizer Injection; ADV - Moist Dust Injection; CFB - Lurgi Circulating Fluid Bed.

Wet scrubbers: LSFO - Limestone with Forced Oxidation; LSINH - Limestone with Inhibited Oxidation; LSDBA - Limestone with Dibasic Acids; LSWB - Limestone with Wallboard Gypsum; MGL - Magnesium Enhanced Lime; LDA - Lime Dual Alkali; LSDA - Limestone Dual Alkali; PURE - Pure Air/Mitsubishi; MgOx - Magnesium Oxide.

Sulfur recovery systems: WLWN - Wellman Lord; ISPRA - Bromines.

Dry and wet methods can be applied for the reduction of NOx pollutants. SCR (selective catalytic reduction), precipitation on solids, catalytic decomposition on solid electrolyte and reduction to N2 by NH3 are examples of dry methods. Absorption in liquid with reduction to NH4+, and adsorption in liquid with oxidation to NO2-, NO3-, are used in the wet methods.

The evaluation of nearly 70 processes has been done under the EPRI project to select the most promising technology (9). The recommended methods were selected under screening technology conditions based on:

1. Technical feasibility (probability that a commercially viable process can be developed).
2. Development status (empirical experience, on-going development, commercial use).
3. Retrofitability (land requirements for process and waste disposal, use of existing equipment and required point of access to the flue gas stream).
4. Environmental risk (high volume waste, low volume waste, secondary gaseous emissions, potential risk due to process upset).

5. Process reliability (chemical and mechanical complexity, sensitivity to process upsets, corrosive environment).
6. Energy and resource requirements (quantity, reagent consumption rate, catalyst/sorbent consumption).

In addition to EB technology, three other processes have been selected:

- NOxSO (solid phase adsorbent with fluidized bed reactor)
- SNRB (SOx-NOx-ROx-BOx)
- WSA-SNOx (wet scrubbing iron-chelate process)

a) THE NOxSO PROCESS

The NOxSO process is based on the use of solid phase adsorbents to remove SOx and NOx in a fluidized bed reactor. The advantage of the NOxSO method is the low temperature (120 °C) process, which corresponds to the ESP outlet; this also favours retrofit applications because of the downstream-of-the-ESP location. The use of a fluidized-bed reactor to improve efficiency and cost, however, causes a high pressure drop of the flue gas. The adsorbent is removed from the reactor and regenerated in several processing steps. In the first stage NOx is removed under controlled temperature treatment; the concentrated NOx stream from this stage is directed to the boiler, inhibiting the formation of additional NOx under thermodynamic equilibrium. In a second stage a reducing gas is applied (methane, carbon monoxide) to produce a gas consisting of SOx, H2S and elemental sulphur, which can later be processed to produce a marketable sulphur byproduct. The adsorbent is returned to the reactor after a steam-treatment and cooling operation. The demonstration program includes the NOxSO Corporation, the Ohio Edison Company, the DOE, the EPRI and other organizations.

b) THE SNRB PROCESS

In this process a lime or sodium reagent is injected into the flue gas duct, wherein ammonia is also injected. The alkaline reagent reacts with SOx in the duct and on hot filter bags. A SCR catalyst is located on or within the bags to reduce NOx in the presence of ammonia and form elemental nitrogen. The development of filter bags with SCR catalyst suitable for high temperature operation will demonstrate the capability of this heat recovery process. Babcock & Wilcox, the DOE, EPRI and others are engaged in this dry injection technique development program.

c) THE WSA-SNOx PROCESS

The wet scrubbing iron-chelate process can easily be adopted to retrofit plants wherein an FGD system has already been implanted. The iron-chelate additives react with NOx in the wet scrubbing process to form compounds that include sulphur-nitrogen species. In comparison with the FGD process, a longer gas/liquid contact or a higher flue gas pressure drop may be required for appropriate NOx removal. The iron-chelate oxidation, and the stream of waste which should be additionally treated before disposal, may create a technical problem and significantly increase the cost of the process.

3. PRINCIPLE OF THE EB PROCESS

The research on flue gas treatment by radiation was initiated by the Ebara Corp. in 1970. Fundamental work and pilot scale experiments have been performed in Japan, the USA, Germany, Poland and other countries since then. It was found, in a basic and experimental way, that EB technology for flue gas treatment has the following advantages (5):

- Simultaneous removal of SO2 and NOx
- Dry process without wastewater
- No need of a catalyst
- Byproduct can be used as fertilizer
- Low capital and operating costs compared with conventional methods

The main components of the facility are the spray cooler, the process vessel, the accelerator and the byproduct collector, which can be fully automated, making the process easier to operate. The process can be used for the treatment of gases from coal and oil fired power stations, industrial boilers, furnaces and municipal solid waste incinerators. Retrofitting of existing facilities to reduce SO2 and NOx concentrations is also possible, owing to the low space requirement and the location between the ESP and the stack, where more space is available. In practical installations 95% SO2 and 85% NOx removal efficiency can be obtained.

3.1 PROCESS MECHANISM

When high energy electrons are applied for flue gas irradiation, ions, radicals and free atoms are generated; the interaction of these electrons with the flue gas molecules results in ionization and dissociation. The fraction of energy absorbed by each gas component is proportional to its partial pressure. The process is based on three stages.

In the first stage the flue gas is irradiated, leading to radical formation such as OH, O and HO2. Ammonia in near stoichiometric quantity is injected into the flue gas prior to its entrance into the process vessel. The principal reactions in the primary processes can be schematically represented by (4):

N2  → N2+, N2*, N+, N, e−
O2  → O2+, O2*, O+, O, e−
H2O → H2O+, OH, H, O, e−
CO2 → CO2+, CO+, O, e−

Each product species is characterised by its G value, the number of molecules produced per 100 eV of energy absorbed in the system. There is also an ion-molecule reaction mechanism for the decay of the primary species, and low concentration components have to compete with the primary radical decay processes. Above 760 reactions were listed in the AGATE code to describe the processes undergone.

In the second stage SO2 and NOx are oxidized to H2SO4 and HNO3 in the presence of water through a concurrent number of chemical reactions. In the third stage the intermediate products react with the ammonia present to form ammonium sulfate and ammonium nitrate. These dry, powdery ammonium salts are collected by the filtering units (ESP or bag filters) and can be used as agricultural fertilizers (4).
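The G value defined above converts directly into a species yield once the absorbed dose is known, since 1 Gy is 1 J/kg. A short sketch of this conversion follows; the G value and dose used in the example are illustrative, not values from this paper.

```python
# Yield implied by a G value: molecules produced per 100 eV of absorbed
# energy.  The example G value and dose below are illustrative only.

EV_PER_JOULE = 6.241509e18  # electron-volts per joule

def species_yield_per_kg(g_value, dose_gray):
    """Molecules of a species produced per kg of irradiated gas."""
    ev_per_kg = dose_gray * EV_PER_JOULE   # dose in Gy = J/kg
    return g_value * ev_per_kg / 100.0

# e.g. a primary species with an assumed G = 2.0 at a 5 kGy dose
n = species_yield_per_kg(2.0, 5000.0)
```

At kilogray doses this works out to roughly 10^20-10^21 radicals per kg of gas, which is why ppm-level pollutant concentrations can be oxidised essentially completely.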
Some reactions from the second stage, wherein SO2 and NOx are involved, are listed below (4):

SO2 + HO2 → SO3 + OH
SO2 + OH  → HSO3
SO2 + O   → SO3
SO3 + H2O → H2SO4
NO  + OH  → HNO2
NO2 + O3  → NO3 + O2
NO2 + HO2 → HNO2 + O2
NO2 + OH  → HNO3

According to JAERI and KFK tests, more than 20% of the NO is converted into free N2 released in the EB process in the presence of ammonia. The last stage is the product formation: the gas conversion process is completed by the reaction of the sulphuric and nitric acids, formed in the presence of water, with a near stoichiometric amount of ammonia. These acids are converted into ammonium sulphate and ammonium nitrate and are collected by a filtering system (4).

3.2 ELECTRON BEAM FACILITY FOR FLUE GAS TREATMENT

The first experimental facility for the EB process applied to flue gas treatment was built by the Ebara Corp. in Japan; the batch tests were carried out in the 1970-71 period. The experiments proved that SO2 and NOx can be removed from irradiated flue gas as a result of radiation chemical reactions. Subsequent development of the process has been continued by Ebara, JAERI, Research Cottrell, the Department of Energy, the Electric Power Research Institute, the University of Tokyo and NKK in Japan, KFK, the University of Karlsruhe and Badenwerk in Germany, and the Institute of Nuclear Chemistry and Technology and the Warsaw Power Station in Poland (5).

The efficiency of the EB process was determined in many experimental facilities in order to optimize the process conditions. Recent data show that a 95% SO2 removal efficiency can be obtained at a 5 kGy dose, with the water content and the thermal reaction conditions properly optimized. Multistage irradiation can significantly improve the NOx removal: a 7 kGy dose for two stage irradiation, or a 6 kGy dose for three stage irradiation, is required for an 80% efficient removal (5).

In order to demonstrate the capability of the EB process, four pilot plant demonstration facilities are now being used in Poland and Japan. They are based on the Ebara process, where ammonia is injected before the process vessel wherein the flue gas is irradiated (5). Table 1 shows the parameters of the pilot plants for flue gas treatment which have been installed since 1991 and are now being used to demonstrate the capability of the EB technology for commercial use (5). The EB process is now also being used to remove other kinds of gas pollutants, e.g. traffic tunnel ventilation gas and various VOC pollutants in the gas phase (3,7).
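The installed accelerator powers quoted for these plants follow from a simple energy balance, since 1 kGy is 1 kJ per kg of gas. The sketch below illustrates this; the flue gas density and the energy utilisation factor are assumptions, not figures from the paper.

```python
# Back-of-envelope link between dose, gas flow and accelerator beam power
# (1 kGy = 1 kJ/kg).  Density and utilisation factor are assumed values.

FLUE_GAS_DENSITY = 1.3  # kg per Nm3, assumed for cooled flue gas

def beam_power_kw(flow_nm3_h, dose_kgy, utilisation=0.8):
    """Beam power (kW) needed to deliver `dose_kgy` to the whole stream."""
    mass_flow_kg_s = flow_nm3_h * FLUE_GAS_DENSITY / 3600.0
    return dose_kgy * mass_flow_kg_s / utilisation

# e.g. a 20,000 Nm3/h stream irradiated to 5 kGy
p = beam_power_kw(20000.0, 5.0)
```

Under these assumptions the example stream needs on the order of 45 kW of beam power, which is the right order of magnitude for the pilot plant accelerator ratings listed in Table 1.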

Table 1: The major parameters of the pilot/demonstration plants for flue gas treatment which have been installed since 1991

Institution                Year   Flow rate (Nm3/h)   Temp. (°C)   SO2/NOx (ppm)      Accelerator
INCT/Kawenczyn, Poland     1991   20,000              80           200/600            2 x 50 kW, 500-700 keV
Ebara/JAERI, Japan         1992   12,000              65           800-1000/150-300   3 x 36 kW, 400-800 keV
Ebara/Tokyo, Japan         1992   50,000              20           0-5 (NOx)          2 x 12.5 kW, 500 keV
NKK/JAERI, Matsudo, Japan  1992   1,000               100-150      HCl ≈ 1000         15 kW, 100 keV

In 1991, a 3-year, 14.3 million USD project was initiated in Japan by the Ebara Corp., together with the Japan Atomic Energy Research Institute (JAERI, Takasaki) and the Chubu Electric Power Company (Nagoya). The plant was completed in June 1992 and its main parameters are shown in Table 1. The main objectives of the research carried out at this pilot plant are as follows:

- To recognize the quantitative characteristics of the process.
- To evaluate the reliability of the process during long period operation.
- To test multistage irradiation.
- To optimize the collecting (ESP/bag house) and byproduct handling systems.
- To improve necessary areas of the facility.
- To study and evaluate the commercial characteristics of the process.

To confirm the capability of the EB method in low NOx content gas, a Tokyo plant was built by the Ebara Corp. and the Tokyo Metropolitan Government to treat ventilation exhaust gases from a highway at the Tokyo Bay Tunnel. 50,000 Nm3/h of gas from the ventilation exhauster is introduced into the irradiation vessel for EB treatment in the presence of ammonia; as a result, NOx is converted into powdery ammonium nitrate products. Activated carbon is used to remove the ozone formed by the irradiation. An 80% target removal efficiency is being obtained at a 3 ppm level of NOx at the inlet.

To evaluate the EB process applied to the flue gas from municipal waste incinerators, a pilot plant was built by NKK, JAERI and the Matsudo City Government Clean Center. The facility was finished in June 1992 and its main parameters are shown in Table 1. Targets of the removal efficiencies are as follows:

NOx: 100 ppm → < 50 ppm
SO2: 100 ppm → < 10 ppm

HCl: 1000 ppm -> <10 ppm

The irradiation is carried out where a slurry of calcium hydroxide is sprayed at a temperature higher than 150°C. During the process, HCl and SO2 are removed by spraying the Ca(OH)2 slurry, while NOx is effectively removed by EB irradiation (6). The bag filter is used to collect the powdery products (a mixture of calcium nitrate, sulfate and chloride) formed by the irradiation.

1. PRESENT STATUS OF ELECTRON BEAM PROCESS

The EB process applied to flue gas treatment is suitable for full-scale commercial application. This was determined by basic experiments and by the operation of pilot plant facilities. It is a dry process with a usable byproduct which can offset the operating and investment costs. The process can be easily controlled for different removal efficiencies and adjusted for the utilization of different fuels. The EB technology was recognized as flexible and adaptable, with excellent turndown ratios.

The major conclusions regarding the EB process for flue gas treatment are as follows:
- More than 95% of SO2 and 85% of NOx can be simultaneously removed from the flue gas under optimal operating conditions.
- 5 kGy is required for 95% SO2 removal efficiency, and 7 kGy for 80% NOx removal efficiency, in a two-stage irradiation facility under optimal conditions.
- NOx removal occurs almost entirely under EB application and depends strongly on the dose.
- NOx removal efficiency increases as the inlet SO2 concentration increases. This occurs as a result of the formation of nitrosulphuric compounds.
- The quantity of SO2 removed by EB is relatively independent of the inlet SO2 concentration.
- SO2 removal efficiency depends on the temperature, the injection, the filter condition and the EB dose.
- Ammonia should be injected into the process in a near-stoichiometric amount; upstream injection was found to be more efficient. Gas temperature and ammonia stoichiometry are second-order effects.

2. THE POLISH PILOT PLANT

The Polish Pilot Plant, with a 20,000 Nm3/h capacity, has been built at EPS Kawenczyn in Warsaw. It is the first installation in which two-stage irradiation by electron beam was applied, resulting in a significant decrease of energy consumption. The installation was constructed on the bypass of the main flue gas stream (total flow 260,000 Nm3/h) from the WP-120 boiler (nominal heat output 120 Gcal/h, efficiency 84%, coal consumption 26-32 t/h). The black coal used contains 1.2% sulphur and 18% ash and has a calorific value of 4700 kcal/kg.

The other novelties of this construction are connected with the process vessel, where the irradiation zones are located along the flue gas flow, and with a double-window construction with perpendicular streams of air for cooling the output windows of the accelerators and the inlet windows of the process vessel.

The main objectives of the research carried out at the pilot plant are (2):
- testing of all parts of the installation under industrial conditions;
- optimizing the process parameters, leading to a reduction of energy consumption with a high efficiency of SO2 and NOx removal;
- developing the monitoring and control systems of an industrial plant for flue gas cleaning;
- selecting and testing filter devices and the filtration process;
- preparation of the design for an industrial-scale facility.

Further advantages of the process are the following:
- The byproduct collected during the process consists of ammonium sulfate and ammonium nitrate, which can be effectively used as a fertilizer.
- No waste water is produced in the process.
- The relatively low capital investment and operating cost of the EB process facility make this method equivalent or preferable compared with FGD/SCR.
- Low space requirements give a significant advantage in retrofit installations.
- Good reliability in long-term operation was demonstrated in pilot plant facilities.

The electron beam process for flue gas treatment could be used beneficially in the future. To complete the present data on the EB process, intense experiments are being carried out in Japan, Poland and Germany. The experimental studies described above improve the technology and promote it for future applications (2). The most interesting subjects are listed below:
- multistage irradiation (two and three zones);
- duct configuration (rectangular, cylindrical) and gas velocity in the duct and in the process vessel;
- ammonia slip and ammonia injection (location, quantity);
- wet and dry ESP, baghouse and gravel bed filter studies to optimize the byproduct collecting system;
- byproduct handling studies (granulation, storage, fertilizer tests);
- preparation of powder, granule, liquid or wet sorts of byproduct;
- experimental studies to apply this method to other kinds of gases treated by radiation;
- optimization of the spray cooler construction to obtain a dry bottom and a reduction of power consumption;
- optimization of the systems preventing or removing duct clogging by the byproduct;
- experimental study of the quantitative characteristics of the process at the pilot plant level;
- design study and evaluation of the commercial characteristics of the process.

3. GENERAL ARRANGEMENT OF THE TECHNOLOGICAL PROCESS

Flue gas generated by coal-fired boilers enters the EB process after the ESP, where the ash content is reduced in order to improve the quality of the fertilizer byproduct. The small amount of remaining contaminants does not affect the quality of the product.

3.1. EQUIPMENT SPECIFICATION

BOILER - oil- or coal-fired, producing thermal or electrical energy.
ESP - electrostatic precipitator reducing the fly ash content downstream of the boiler.
HEAT EXCHANGER - reduces the inlet or increases the outlet gas temperature by an additional stream of air or water.
SPRAY COOLER - installed vertically, downstream of the boiler and ESP; used to increase the water content of the flue gas and to decrease its temperature by the complete evaporation of the injected water.
AMMONIA INJECTION - keeps a stoichiometric quantity of NH3 in the flue gas stream.
ACCELERATOR - initiates the radiation chemical process of flue gas treatment.
PROCESS VESSEL - horizontally mounted, with multistage irradiation capability.
COLLECTOR - baghouse/ESP/gravel bed filter to collect the byproduct.
INDUCED DRAFT FAN - overcomes the pressure drop in the ducts and in the byproduct collector.
BYPRODUCT HANDLING SYSTEM - prepares the powder, granule or wet sort of byproduct.
ANALYTICAL AND CONTROL SYSTEM - keeps automatic control over the process.

No such filter is foreseen after the oil-fired boiler. Ammonia in stoichiometric quantity is injected before the flue gas enters the process vessel, where the gas is irradiated by the electron beam to promote the reaction between the ammonia and the flue gas. The beam interacts with the nitrogen, oxygen, water and other substances in the flue gas to produce active free radicals such as OH, HO2 and O, which are responsible for the SO2 and NOx removal from the flue gas. As a result, SO2 and NOx are converted into sulphuric and nitric acids, which finally form a byproduct consisting of ammonium sulfate and ammonium nitrate (6). The initial concentration of SO2 depends on the sulphur content of the applied fuel. The NOx concentration depends on the combustion process temperature and differs between burners and boiler constructions.

The heat exchanger is usually used to reduce the gas temperature to the 150-250°C level in the initial cooling stage. The flue gas then enters the spray cooler, where the temperature is reduced to 65-80°C by atomized water injection. The water content of the flue gas should be increased to 8-12% in this stage. Usually a dry-bottom principle is applied in the spray cooler: the water is totally evaporated by heat exchange with the hot flue gas, the dew point of the gas being approximately 50°C. The ammonium sulfate and ammonium nitrate are collected by an electrostatic precipitator or bag filters, and the cleaned flue gas is released through the fan into the stack.

3.2. MAJOR EQUIPMENT

3.2.1. ACCELERATORS

The present estimate of the required dose level for efficient NOx removal (80%) shows that the radiation dose should be in the range of 10 kGy for low-sulphur coals. It should be remembered that 95% SO2 removal can be obtained with a 5 kGy dose. Multistage irradiation can reduce the NOx figure to 7 kGy, and a significant improvement in NOx removal can be achieved when high-sulphur coal is used. If it is assumed that the gas absorbs 85% of the total beam energy, then a 1 MW accelerator facility will be sufficient for a 100 MW generator at the dose range described above.

The required beam power level is significantly higher than in the accelerators utilized for industrial beam processing, but there are technical prospects for building accelerators with a 200-500 kW unit power, which would sharply reduce the number of accelerators in industrial facilities and their cost. According to accelerator producers, the cost of high-power 800 keV machines is presently in the range of 5 US$/W. New developments in progress in the USA (induction linac) give some prospect of reducing this cost level by a factor of 2.

Many factors should be considered when specifying the location of the accelerator/scanner relative to the process vessel. The most important are: dose uniformity, cost, and easy access for maintenance. The best position of the scanner was found to be at the top of the process vessel, with the irradiation zones along the gas stream flow. Multistage irradiation is recommended to increase the process efficiency (10). Locating the process vessel horizontally and at the underground level can reduce the shielding costs and allows easy access to, and exchange of, certain components of the scanner/process vessel system. Table 2 shows the basic electron beam parameters which have been applied in laboratory and pilot plant facilities for flue gas treatment.
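The 1 MW-per-100 MW figure follows directly from the dose definition (1 kGy = 1 kJ/kg) and can be cross-checked in a few lines. A minimal sketch; the flue gas density of 1.3 kg/Nm3 and the 300,000 Nm3/h flow for a 100 MW plant are assumed illustrative values (the flow figure appears later in the Ebara estimate), not parameters stated in this paragraph:

```python
def beam_power_kw(flow_nm3_h, dose_kgy, density_kg_nm3=1.3, absorbed_fraction=0.85):
    """Electron beam power needed to deliver `dose_kgy` (1 kGy = 1 kJ/kg)
    to a flue gas stream, given the fraction of the beam energy absorbed."""
    mass_flow_kg_s = flow_nm3_h * density_kg_nm3 / 3600.0
    return mass_flow_kg_s * dose_kgy / absorbed_fraction  # kJ/s = kW

# 100 MW plant, ~300,000 Nm3/h, at the 7 kGy two-stage NOx dose
p = beam_power_kw(300_000, 7.0)  # just under 900 kW, i.e. roughly 1 MW
```

The result is consistent with the rule of thumb in the text: about 1 MW of beam power per 100 MW of generating capacity.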

TABLE 2. The basic parameters of the electron accelerators applied in laboratory and pilot plant facilities for flue gas treatment.

| Type of facility | Institution                | Energy (MeV) | Beam power (kW) | Type of accelerator | Remarks              |
| Laboratory       | JAERI, Japan               | 3.0          | 15              | Cockcroft-Walton    |                      |
| Laboratory       | Univ. of Tokyo, Japan      | 2.0          | -               | Dynamitron          |                      |
| Laboratory       | Ebara, Japan               | 2.0          | -               | linear              |                      |
| Laboratory       | KFK, Germany               | 1.5          | 15              | Cockcroft-Walton    |                      |
| Laboratory       | INCT, Poland               | 0.7          | 5               | resonance           |                      |
| Laboratory       | Karlsruhe, Germany         | 0.22         | 22              | transformer         | <1000 Nm3/h          |
| Pilot            | Ebara, Japan               | 0.75         | 30              | Cockcroft-Walton    |                      |
| Pilot            | Ebara, Japan               | 0.75         | 2 x 45          | Cockcroft-Walton    |                      |
| Pilot            | Research Cottrell/GE, USA  | 0.8          | 2 x 40          | -                   |                      |
| Pilot            | Badenwerk, Germany         | 0.3          | 2 x 90          | Electrocurtain      |                      |
| Pilot            | INCT, Poland               | 0.7          | 2 x 50          | transformer         | 1000-20,000 Nm3/h    |
| Demonstration    | Ebara, USA                 | 0.8          | 2 x 80          | Cockcroft-Walton    |                      |
| Demonstration    | Ebara, Japan               | 0.8          | 3 x 36          | Cockcroft-Walton    |                      |
| Industrial plant | -                          | 0.8          | 8 x 150         | transformer         |                      |
| Industrial plant | -                          | 1.0          | 4 x 400         | induction linac     | 300,000 Nm3/h        |

3.2.2. FILTERS AND BYPRODUCT HANDLING

The process of particle formation and filtration has been intensively investigated in recent years. The mass median aerodynamic diameter of the product aerosol is around 1 μm, depending on the dose and the flue gas parameters (4). A baghouse was initially selected as the byproduct collector. A pre-coating system was used to protect the bag surface from direct contact with the hygroscopic byproduct; to avoid degrading the byproduct properties, a neutral pre-coating material such as diatomaceous earth can be used. To remove byproduct deposits from the bag filter and reduce the baghouse pressure drop, several methods can be applied:
- pulse jet cleaning;
- reverse flow cleaning;
- mechanical shaking.

Acrylic and Teflon-covered bags are the best in this application. It was found that other methods can also be used effectively in the collection process: wet and dry ESP and gravel bed filters are being used to optimize the byproduct collecting system. Table 3 shows the producers and accelerators which are suitable for flue gas treatment at a capacity of 10,000-20,000 Nm3/h (10).

TABLE 3. The basic parameters of the electron accelerators offered by different producers for flue gas treatment at a capacity of 10,000-20,000 Nm3/h.

| Type of accelerator | Producer                          | Electron energy (keV) | Beam current (mA) | Output window (mm) |
| Electrocurtain      | Energy Sciences Corp., USA/Japan  | 600                   | 200               | 1830               |
| Dynamitron          | Radiation Dynamics, USA/Japan     | 500/600               | 200               | 1830               |
| UW-075-2-2-W        | -                                 | 300                   | 300               | 1400               |
| ELW-3A              | Inst. of Nucl. Physics, Russia/Japan | 500/700            | 100               | 1500               |
| Transformer         | NIIEFA, Russia                    | 750                   | 2 x 60            | 2000               |
| EPS-500 cascade     | Nissin High Volt., Japan          | 500                   | 80                | 1600               |
| Transformer         | Polymer-Physik, Germany           | 280                   | 220               | 700                |

The ESP and the baghouse can be installed in series to increase the efficiency of byproduct collection, but at a significantly higher installation cost.

The usable byproduct is one of the major features of the EB process for flue gas treatment. Ammonium nitrate is a basic fertilizer for many plants. Ammonium sulfate is required by sulphur-deficient lands, generally located in the more arid regions of the world, and is applied directly to certain sulphur-depleting agricultural crops such as corn and cotton. Existing ammonium sulfate sources do not meet market needs. This lack translates into an excellent opportunity to sell the EB process byproduct at an attractive price; such sales can significantly decrease the operating costs and offset the cost of the ammonia applied in the process. The combination of the two compounds provides a material of suitable quality for direct application, although its quality was estimated at 75% of that of the regular product. Usually ammonium sulfate is a component of the final commercial NPK fertilizer product. The concentration of ammonium sulfate and ammonium nitrate in the byproduct depends on the fuel composition.

An alternative application of the EB process byproduct is under consideration: enriching various organic materials, such as sludge or municipal waste compost, with a byproduct addition may improve their nitrogen content, may adjust the precipitation of the mixture, and may effectively and economically replace chemical fertilizer.

The byproduct is a mixture of ammonium sulfate and ammonium nitrate, with properties and fertilizing utility similar to those of ammonium sulfate. Depending on the coal sulphur content and the level of nitrogen oxides in the flue gas, the nitrogen content of the byproduct mixture will be between 20 and 30%. For facilities using 2.5% sulphur coal, the byproduct production can be estimated at 800 kg/day/MWe.

The flyash is one of the significant components of the byproduct. Presently the flyash is not recognized as a hazardous waste material, but a high flyash content in the byproduct decreases the nitrogen content and increases the distribution and application costs per nitrogen unit (4). Typically no more than 10% of flyash by weight in the byproduct is accepted. Usually the flyash is efficiently removed by the ESP located before the process vessel; this level of pre-removal can easily be obtained with relatively low-efficiency collectors. Some traces of heavy metals are present in the flyash; usually the amounts of trace metals in the byproduct can be controlled at levels equal to or lower than those found in commercial fertilizers.

Table 4 shows the analyses of two different byproduct samples collected at the installation operated by Badenwerk, Karlsruhe, Germany. Product A was a byproduct/flyash mixture collected by filtration, while product B is a pure EB process byproduct having the characteristics of a nitrogenous fertilizer with a nitrogen content of approximately 25%.

TABLE 4. Composition and chemical properties of the tested products "A" and "B". The analysis covers total N, N-NH4, N-NO3, total S, S-SO4, Cl, ash, dry mass, SiO2, R2O3, CaO, MgO, K2O, Na2O, P2O5, pH, and the content of the heavy metals Mn, Pb, Cd, Zn, Cu and Cr.

3.3. COST ESTIMATE

The costs, including the capital investment cost, the operating and maintenance costs and the byproduct credit, should be taken into account to evaluate the EB process from the economic point of view. For a 100 MWe power plant, 1000 kW of electron beam power should be applied to achieve 90% SO2 removal efficiency and 80% NOx removal efficiency at a 7 kGy dose. The present status of accelerator development allows 500 kW units to be built at a cost rate of
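The 800 kg/day/MWe estimate can be cross-checked with a rough sulphur mass balance. A sketch under stated assumptions: the 33% plant efficiency and the 95% SO2 capture are illustrative values not given here, while the 2.5% sulphur fraction and the 4700 kcal/kg calorific value are the coal data quoted in the text; the result lands within a factor of two of the quoted figure, i.e. the same order of magnitude:

```python
def byproduct_kg_per_day_mwe(s_frac=0.025, cal_kcal_kg=4700.0,
                             plant_eff=0.33, so2_removal=0.95):
    """(NH4)2SO4 yield per MWe-day from a sulphur mass balance;
    132 g/mol of ammonium sulfate carry 32 g/mol of sulphur."""
    energy_kj = 24 * 3600 * 1000            # 1 MWe for one day, in kJ
    coal_kg = energy_kj / (cal_kcal_kg * 4.1868 * plant_eff)
    return coal_kg * s_frac * so2_removal * 132.0 / 32.0

yield_kg = byproduct_kg_per_day_mwe()       # roughly 1.3 t/day/MWe
```

The gap between this upper-bound estimate and the quoted 800 kg/day/MWe is plausibly covered by partial load, ammonia stoichiometry and ash handling, none of which the sketch models.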

2-5 US$/W of beam power, depending on the accelerator construction and its producer. Up to 25% of the capital cost is spent on the accelerators, which is slightly less than the typical cost of the construction work (buildings, ducts) (4). Table 5 shows the capital cost estimate depending on the cost of the accelerator.

TABLE 5. Estimate of the capital cost of an EB facility for flue gas treatment, depending on the cost of the accelerator.

| Accelerator cost (USD/W of beam power) | Investment cost (USD/kWe) | Multistage irradiation investment cost (USD/kWe) |
| 2                                      | 225                       | 169                                              |
| 5                                      | 350                       | 262                                              |

According to an Ebara estimate for a 100 MW plant burning 2% sulfur coal, with an SO2 removal rate of 92% and a NOx removal rate of 60%, the performance and economic parameters listed below can be achieved:
- Flue gas flow rate: 300,000 Nm3/h
- Power consumption: 2.6 MW
- Ammonia requirement: 1500 kg/h
- Inert earth: 100 kg/h
- Fertilizer byproduct: 600 kg/h
- SO2 reduction: 1400 -> 112 ppm
- NOx reduction: 400 -> 160 ppm
- Total capital cost: 19,300,000 US$
- Process cost: 193 US$/kW
- Annual operating cost: 580,000 US$
- Annual maintenance cost: 200,000 US$
- Operating personnel: 3 per 24 h

It was recognized that the byproduct has 75% of the value of a commercial fertilizer, which meant 51 US$/t in 1990.

3.4. LABORATORY INSTALLATION

Batch-type laboratory units with a flow system have been built in Japan, in Germany, in Poland and in some other countries to investigate the experimental characteristics of the EB flue gas treatment process (2). A BATCH TYPE facility can easily be adapted to local experimental conditions; this type of laboratory unit was used by Ebara during the first tests performed in 1970-71 to establish the chemical reactions induced by radiation. The gas flow can be arranged by the use of pressure tanks containing NO, SO2, O2 and N2 at a moderate flow rate. A FLOW SYSTEM incorporates a flue gas stream rate lower than 1000 Nm3/h, generated by oil or city gas burners. Flue gas generated by both oil and gas burners needs an additional injection of SO2 and NOx to meet the appropriate experimental conditions, and an additional amount of water should be incorporated to keep an adequate water content. The choice between an OIL BURNER, a CITY GAS BURNER or a GAS MIXING
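The byproduct credit implicit in the Ebara list above can be checked in a few lines. A minimal sketch; continuous full-load operation (8760 h/yr) is a simplifying assumption, while the 600 kg/h yield, the 51 US$/t value and the operating and maintenance costs are the figures quoted in the text:

```python
# Rough operating economics for the 100 MW Ebara estimate above.
# Assumption: continuous full-load operation, 8760 h/yr.
byproduct_t_per_yr = 600 * 8760 / 1000        # 600 kg/h -> t/yr
credit_usd = byproduct_t_per_yr * 51          # at 51 US$/t (1990 value)
net_opex = 580_000 + 200_000 - credit_usd     # operating + maintenance - credit
```

On these assumptions the fertilizer credit recovers roughly a third of the combined annual operating and maintenance cost, which illustrates why the byproduct is treated as a significant economic factor of the process.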

DEVICE depends mainly on financial conditions or on the possibility of adapting existing facilities. The highest flow rate can be obtained in a system equipped with a boiler.

PREFILTER is sometimes used after the burner to stop particles coming from the combustion process.

HEAT EXCHANGER is sometimes used before the process vessel to control the temperature of the EB process.

SPRAY COOLER is used for water injection from an air-assisted manifold of spray nozzles. The quantity of injected water is controlled so as to lower the temperature of the flue gas by the evaporation process and to increase its relative humidity.

AMMONIA INJECTION is supplied from a pressure tank after conversion from the liquid to the gas phase. The injection point is usually located before the process vessel. The amount of injected ammonia should be carefully controlled according to the experimental requirements.

ACCELERATOR is used to provide the stream of electrons applied in the process. Electron beam parameters are not critical in laboratory installations. In the laboratory installations which have been used to investigate the EB process, the electron energy ranges from 0.22 to 12 MeV and the beam power from 1.2 to 30 kW.

PROCESS VESSEL should withstand long-term irradiation at the appropriate temperature, according to the nature of the experimental conditions. Stainless steel and other corrosion-resistant materials are preferable. Thermal insulation and an additional heating system can be used to stabilize the experimental conditions. A process vessel temperature of 60-100°C is used for various experiments.

RETENTION CHAMBER, located downstream of the process vessel, is sometimes used to stimulate the product formation.

COLLECTOR is used to collect the final product. Bag filters and/or an ESP may be applied. A corrosion effect and deposition of the byproduct may occur when filter collector units are not applied.

FAN, located before the stack, is necessary to keep the proper flow rate of the gas through the process vessel and the product collector.

HEATING EQUIPMENT is required to provide the proper temperature conditions in the process vessel and in the gas analysis paths. The gas path temperature recommended by the analytical instrument producers is usually in the 150°C range.

STACK and duct line are used to extract the flue gas out of the building.

ANALYTICAL EQUIPMENT should allow the measurement of a number of process parameters:
- inlet and outlet SO2, NOx, NH3, H2O and O3 concentrations;
- flue gas flow rate;
- NH3, SO2 and NOx injection flow rates;
- temperature at determined points of the facility;
- dose rate;
- aerosol parameters.

A laboratory pilot plant has been built at IPEN-CNEN/SP, using an electron beam accelerator from Radiation Dynamics Inc. with the following parameters (8):
- electron energy 0.5-1.5 MeV;
- beam current up to 25 mA;
- scan length 0.6-1.2 m;
- scan frequency 100 Hz.

The irradiation device allows a four-turn irradiation and has already been used for dosimetric studies (1). The carrier gas will be normal cooking gas, burned in a proper burner. The gas flow rate will be 25 l/min, and a synthetic mixture of SO2 and NOx will be used in the preliminary studies. NH3 will also be injected, and the fertilizer will be collected in a bag filter. Several points will allow the measurement and control of the gas flow rate, temperature and humidity, and also the analysis of the gases in order to calculate the efficiency of their removal.

4. REFERENCES

(1) CAMPOS, SOMESSARI, S., VIEIRA, J.M., et al., Desenvolvimento de um sistema calorimétrico para dosimetria de gases. Anais do V Congresso Geral de Energia Nuclear, Rio de Janeiro, Brazil, 1994.

(2) CHMIELEWSKI, A.G., ZIMEK, Z., LICKI, J., Laboratory and industrial research installations for electron beam flue gas treatment. IAEA-SM-325/124, Proceedings of an International Symposium on Applications of Isotopes and Radiation in Conservation of the Environment, Karlsruhe, 9-13 March 1992.

(3) DOI, OSADA, Y., et al., Pilot plant for NOx, SO2 and HCl removal from flue gas of municipal waste incinerator by electron beam irradiation. Proceedings of the Third International Symposium on Advanced Nuclear Energy Research, JAERI, Japan, 1991 (INIS-JP-005).

(4) EBARA ENVIRONMENTAL CORPORATION, Final report for testing conducted on the Ebara flue gas treatment system process demonstration unit at Indianapolis, Indiana. Greensburg, PA, 1988.

(5) INTERNATIONAL ATOMIC ENERGY AGENCY, Electron beam processing of combustion flue gases. IAEA-TECDOC-428, Vienna, 1987.

(6) NAMBA, H., TOKUNAGA, O., HIROTA, K., Electron beam treatment of coal-fired flue gas. Radiat. Phys. Chem., 42 (1993) 679-682.

(7) PAUR, H.-R., MAETZING, H., Electron beam induced purification of dilute off-gases from industrial processes and automobile tunnels. Radiat. Phys. Chem., 42 (1993) 719-722.

(8) POLI et al., Estudo sobre o tratamento de gases tóxicos SO2 e NOx provenientes de combustão de óleo ou carvão, em fluxo contínuo, irradiados com feixe de elétrons

por aceleradores de elétrons. Anais do VI Congresso Brasileiro de Energia, Rio de Janeiro, Brazil, 1993.

(9) PRIEST, J.E., JARVIS, J.B., DENE, C.E., CICHANOWICZ, J.E., Engineering evaluation of combined NOx/SOx removal processes: second interim report. Control Symposium, New Orleans, Louisiana, USA, May 8-11, 1990.

(10) ZIMEK, Z.A., SALIMOV, R.A., Windowless output for high power low energy electron accelerators. Radiat. Phys. Chem., 40 (1992) 317-320.

IMPLEMENTATION OF AN INTEGRATED ENVIRONMENT FOR ADAPTIVE NEURAL CONTROL

Mauro Cezar Klinguelfus, LAC - COPEL/UFPR, mauro@lac.copel
Marcelo Stemmer, UFSC
Daniel Pagano, UFSC

ABSTRACT

This paper describes a neural-net-based environment designed to perform process control. The environment includes all the necessary resources for neural net training and for real-time control of the plant with different sampling rates. The environment also supports the on-line training of a second neural controller in parallel with the control task itself, enabling an adaptive behavior.

KEY WORDS

Neural Nets - Adaptive Control - Voltage Regulators - Speed Control

1 - INTRODUCTION

The usage of neural nets in control systems appears as a very interesting alternative, mainly when the plant is difficult to model or has parameters that change with time. Under these conditions, the design of classical PID-based controllers has as its main difficulty the mathematical modeling of the problem, which is not always trivial. Among the aspects that motivate the use of neural nets in control, we can mention the following:

Non-algorithmic methodology: it is a quite intuitive method, in which the pragmatic knowledge of an expert about the problem is more important than previous knowledge of mathematical models.

Ability to learn: a neural net learns through examples and is able to represent a certain behavior. In these examples, the behavior that should be learned is presented to the net in the form of input/output relations.

Simplicity: the structure of a neural net is relatively simple to understand from the user's standpoint and is adequate for any process complexity.

Seen from this standpoint, neural nets facilitate the controller design task, since the approach is not based upon the classical ways to develop controllers but, instead, on a training process, which is started before the realization of the controller itself. For this purpose we only need to know the input/output relations of the process to be controlled. In order to achieve this, we need previous knowledge of some values that define a set of inputs and their respective outputs (this is necessary for the so-called supervised learning). Each pair of inputs/outputs corresponds to a training pair, or test vector, and a set of test vectors corresponds to a test pattern. The amount of test vectors needed in each pattern is proportional to the complexity of the process to be controlled; practical experience has shown that this amount should be around 20 to 60 vectors, and an adequate choice of training examples covering the range of interest is required.

Further characteristics of neural controllers are:

Generalization: a neural net is able to answer to patterns to which it has not been trained. Intrinsic characteristics of the process are automatically considered.

Universal approximator: with an ANN it is possible to represent any mathematical function, because ANNs are universal approximators of general functions. ANNs can be seen as estimators without a model.

Natural noise elimination: this capability is due to the constructive characteristics of the net itself.

Fast response: once trained, the response of an ANN is very fast, so it can be used even in real-time applications.

Minimal knowledge of the process: neural controllers require minimal knowledge about the mathematical model of the process.

2 - FUNDAMENTAL CONCEPTS OF NEURAL NETS

The inspiration for the creation of the so-called Artificial Neural Networks (ANNs) comes from biological models and goes back to the 1940s. In spite of that, only in the last decades has the interest in these connectionist models grown on a solid basis, due to a relatively better understanding of real neural systems and to the improvements in computer technology. The development of new neuron models and new training algorithms, besides the availability of faster processors, has contributed to popularize the use of ANNs in the most diverse applications: control, pattern recognition (voice, image and text recognition), signal processing, fault detection and diagnosis, and event prediction, among many others. ANNs are composed of elements that perform some of the elementary functions of the biological neuron. Beside their superficial similarity to the brain's structure, these nets exhibit some characteristics of the human brain, such as the capability of learning by experience, which permit the mapping of input vectors into output vectors without the need of a mathematical model.

that represent the problem to be solved in a satisfactory way. ANNs can modify their behavior in response to their environment: a net is trained so that the application of a set of inputs produces a desired (or at least consistent) set of outputs. It is then said that the neural net can learn. This ability of learning by training, more than any other, is responsible for the interest these models are receiving. The training is done through the sequential application of input vectors while the weights of the net are adjusted according to a pre-defined procedure, and the result will be as close to the expected one as the training process was good. This depends fundamentally on a good choice of test patterns and on the choice of an efficient learning algorithm. The ability of learning by training also brings to the net a certain degree of unpredictability: as a consequence, there is always a certain error and a certain probability of correctness associated with a net output.

The training can be supervised or unsupervised. Supervised training requires an input vector associated with a desired output vector (the training pair); the net weights are adjusted in order to produce a consistent output vector, and the application of any input vector sufficiently similar to one of the training vectors will produce the same output. In unsupervised learning, the training set consists only of input vectors. For each particular application, an appropriate model and learning algorithm should be chosen, which requires a variety of training algorithms, each one with its strengths and weaknesses.

In the applications related to control, the multilayered neural networks are awaking the greatest interest among researchers. In this kind of topology, the nets are composed of an input layer, one or more hidden layers and an output layer, and the neurons are totally interconnected with the neurons of the adjacent layers. The neurons of the hidden layers perform the modeling of nonlinear functions and serve also as noise and drift suppressors. The training of the net consists of adjusting the weights of the various layers.

Adaptive control techniques have been developed basically for processes that work under unexpected or hardly predictable conditions. The classical adaptive control techniques fail whenever we do not have a complete knowledge of the mathematical model of the process, or when we do not take into consideration some uncertainties or complexities of the system (which is the case in most practical applications), since these are difficult to include in the models. The use of neural controllers is interesting precisely in these cases. Neural nets can be used to control complex and nonlinear systems; they have high noise immunity and can be used to implement adaptive controllers.

2.1 - THE NEURAL MODEL

McCulloch and Pitts [4] proposed a simplified model for the biological neuron. Their model is based on the fact that, at a certain moment of time, the neuron is either firing or inactive, which gives it a discrete and binary behavior.


There are excitatory and inhibitory connections in these neurons, represented through a
weight with a sign, which reinforces or hampers the generation of an output impulse. A
neuron n_i produces an impulse, that is, an output o_i = 1, if and only if the weighted sum of its
inputs is bigger than or equal to a certain threshold. Equation 1 defines the output function of
the McCulloch & Pitts neuron:

    o_i(x) = 1, if sum_j (w_ij * i_j) >= l_i
    o_i(x) = 0, otherwise                                        (1)

where w_ij is the weight of the connection associated to the input i_j, and l_i is the activation
threshold of the neuron n_i.

Starting from the model proposed by McCulloch & Pitts, many other models have been
derived that permit the production of any output, not necessarily 0 or 1. Many different
definitions of the activation function have also appeared. Figure 1 shows four of these activation
functions, namely: the linear function, the ramp function, the step function and the sigmoidal function.



Figure (1): the linear (a), ramp (b), step (c) and sigmoidal (d) activation functions

The sigmoidal function, also known as the S-shape function, illustrated in figure 1.d, is a
semilinear, limited and monotonic function. It is possible to define many sigmoidal functions. One
of the most important sigmoidal functions is the logistic function, defined in equation 2:

    g(x) = 1 / (1 + e^(-x/T))                                    (2)

where the parameter T defines the form of the curve.


An elementary representation of the McCulloch & Pitts neuron is shown in figure 2.



Figure (2)

Basically, the neuron corresponds to a weighted sum of the inputs, over which the
activation function is applied. In this work, the sigmoidal activation function presented in equation
2 has been used.
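As a small illustration (our own sketch, not code from the environment), the neuron of figure 2 — a weighted sum of the inputs passed through the logistic activation of equation 2 — can be written as:

```python
# Sketch of the neuron of Figure 2: a weighted sum of the inputs,
# shifted by the activation threshold and passed through the
# sigmoidal (logistic) activation of equation 2.
import math

def logistic(x, T=1.0):
    """Logistic function, equation 2; T shapes the curve."""
    return 1.0 / (1.0 + math.exp(-x / T))

def neuron(inputs, weights, threshold=0.0, T=1.0):
    """Weighted sum of the inputs minus the activation threshold,
    passed through the sigmoidal activation function."""
    s = sum(w * i for w, i in zip(weights, inputs)) - threshold
    return logistic(s, T)
```

The output is continuous in (0, 1), in contrast to the binary output of the original McCulloch & Pitts model.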


The environment we describe here works with multilayered neural nets to perform adaptive
control of complex processes, which can contain nonlinearities.


In this work, two distinct training algorithms have been used: genetic algorithms and backpropagation.
3.1.1 - Genetic Algorithms

Genetic algorithms can be seen as generic algorithms for general-purpose optimal solution
search. Their working mechanisms are similar to those that rule the evolution of populations
of living beings.

In this algorithm we initially generate "n" sets of weights, each set being called a
chromosome. The set of test vectors used for training is then applied to the net using each
chromosome. For each chromosome, the resulting average quadratic error is stored. The first
operation over the chromosomes is the so-called elitization, in which the 25% worst chromosomes
are eliminated and the 25% best chromosomes are duplicated. The total number of chromosomes
remains the same, because the remaining 50% are kept unchanged.


In this work, the phase called crossover has been eliminated, for it does not enlarge the
search space in a significant way and has a high associated computational cost. This phase would
correspond to exchanging weight values between positions in the same chromosome. The
positions are chosen at random, and there is no rule to define the number of exchanges to be made.

The next phase is called mutation and consists in substituting, also at random, some
of the values inside the matrix composed of all chromosomes. The mutation rate is an arbitrary
parameter. This process inserts new information into the population, which is desirable, because there
is no guarantee that the solution lies inside the universe of weights being considered. Because it is
random, the mutation can also destroy a good chromosome before it can be duplicated. The
practical work has shown that a too high mutation rate causes oscillations in the error values.

Once the mutation phase is completed, the elitization / mutation process is started over
again and again, until the expected error value or the specified number of iterations is reached.
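The elitization / mutation cycle described above can be sketched as follows (an illustrative outline with assumed names such as `predict`; it is not the authors' implementation):

```python
# A rough sketch of the elitization / mutation loop described above:
# crossover is omitted, the worst 25% of chromosomes are replaced by
# copies of the best 25%, and some weight values are then mutated.
# `predict` is only a placeholder for running the net with a weight set.
import random

def predict(weights, inputs):
    # Placeholder net: a plain weighted sum (a real net would go here).
    return sum(w * x for w, x in zip(weights, inputs))

def evaluate(chromosome, training_set):
    """Average quadratic error over (inputs, desired_output) pairs."""
    errors = [(predict(chromosome, x) - d) ** 2 for x, d in training_set]
    return sum(errors) / len(errors)

def ga_step(population, training_set, mutation_rate=0.05):
    """One elitization / mutation cycle over the population."""
    # Sort by error: best chromosomes first.
    population.sort(key=lambda c: evaluate(c, training_set))
    quarter = len(population) // 4
    best = population[:quarter]
    middle = population[quarter:len(population) - quarter]
    # Elitization: duplicate the best 25%, eliminate the worst 25%;
    # the total number of chromosomes remains the same.
    population = [list(c) for c in best] + best + middle
    # Mutation: randomly substitute some of the weight values.
    for chromosome in population:
        for j in range(len(chromosome)):
            if random.random() < mutation_rate:
                chromosome[j] = random.uniform(-1.0, 1.0)
    return population
```

With the mutation rate set to zero the best chromosome is never lost, which mirrors the "elitized" variant discussed below.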

In the environment we describe here, the alteration of the weights can be done in a
"controlled" or in an "elitized" way. This permits us to effectively by-pass the biggest
difficulty associated with this algorithm, which is the divergence of the error when it reaches
relatively small values, mainly when the mutation rate is high. In the "controlled" form, an
alteration of the weights is only accepted if the new values present a smaller average quadratic error
than the former ones. In the "elitized" form, the alteration of weights happens only over the "bad"
chromosomes. In both situations the work of the user is facilitated, since a way to control the non-
convergence problem is given. The main advantage of using the convergence control process is the
automatic search for a solution without the need of continuous supervision by the user.

3.1.2 - Backpropagation Algorithm

This method follows an iterative model whose goal is the reduction of the average quadratic
error between the desired and obtained output values for each training pair (supervised learning).
The error found is then back-propagated from the output to the input, and the weights of each
network layer are readjusted according to a well-defined rule.

During the training process, a factor called learning rate is adjusted, which determines the
speed and stability of the convergence. The environment described here executes this adjustment
automatically.
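A generic, textbook-style backpropagation step for a single hidden layer with logistic activation might look as follows (our sketch; the environment's actual update rule and its automatic learning-rate adjustment are not shown):

```python
# Illustrative backpropagation step: one hidden layer, logistic
# activation, weights moved along the negative gradient of the
# quadratic error. `lr` plays the role of the learning rate.
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, W1, W2):
    """W1: hidden-layer weight rows; W2: output-layer weights."""
    h = [logistic(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = logistic(sum(w * hi for w, hi in zip(W2, h)))
    return h, y

def backprop_step(x, d, W1, W2, lr=0.5):
    """One supervised update for the training pair (x, d)."""
    h, y = forward(x, W1, W2)
    # Output delta: error derivative times the logistic slope.
    delta_y = (y - d) * y * (1.0 - y)
    # Hidden deltas: error propagated back through W2.
    delta_h = [delta_y * w * hi * (1.0 - hi) for w, hi in zip(W2, h)]
    # Weight updates, output layer then hidden layer.
    for j in range(len(W2)):
        W2[j] -= lr * delta_y * h[j]
    for j, row in enumerate(W1):
        for k in range(len(row)):
            row[k] -= lr * delta_h[j] * x[k]
    return y
```

Repeating the step over all training pairs constitutes one training epoch.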

3.1.3 - Coexistence of the algorithms

The backpropagation training algorithm, although widely used, presents certain
drawbacks: the solution may not converge to the desired minimum if the solution space is too
convoluted. This comes from the fact that this algorithm is highly affected by local minima, which
can delay or even stop the process of finding a globally optimal solution. This problem can be by-
passed by using the genetic algorithm, especially in the initial phase of the training. The genetic
algorithm is not affected by local minima because it does not perform gradient minimization.
However, when the error becomes relatively small, the convergence of the genetic algorithm
becomes critical and slow. There is no error evolution granularity defined in the genetic algorithm.


That is, the search for an optimal solution in this algorithm proceeds in a discontinuous
fashion: there can be a period in which the error value practically does not improve
and then, in the next iteration, the error can "sink" abruptly to the desired value, ending the
training phase. For all these reasons, it is interesting to compose the two algorithms: the
genetic algorithm is used at the beginning in order to initialize the weights of the net, and then
backpropagation is used to reach the desired, more general, solution.

One way to partially by-pass the problem of local minima in the backpropagation algorithm
is to induce a controlled randomization of the weight values every time they meet a local minimum
and the convergence stops. This solution is, in most cases, enough to solve non-complex
problems and has the advantage of being simpler to implement than genetic algorithms. On the
other hand, it offers no guarantee that we are not going to fall into another local minimum, in which
case the whole randomization process has to be started again. In the environment we describe
here, a maximal number of automatic randomizations has been defined, and the training process can
be interrupted after some steps if an optimal solution has not been reached.
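The controlled randomization could be organized as below (a sketch under assumed names; the stall-detection criterion and parameter values are our assumptions, not the environment's):

```python
# Sketch of backpropagation with controlled re-randomization: when the
# error stops improving (a presumed local minimum), fresh random
# weights are tried, up to a maximal number of attempts.
import random

def train_with_restarts(train_epoch, init_weights, target_error,
                        max_restarts=5, max_epochs=1000, patience=20):
    """`train_epoch(weights)` runs one training pass and returns the
    current error; `init_weights()` returns fresh random weights."""
    best_w, best_err = None, float("inf")
    for _ in range(max_restarts + 1):
        w = init_weights()
        stall, prev = 0, float("inf")
        for _ in range(max_epochs):
            err = train_epoch(w)
            if err <= target_error:
                return w, err            # optimal solution reached
            if err >= prev - 1e-9:
                stall += 1               # convergence has stopped
                if stall >= patience:
                    break                # assume a local minimum; restart
            else:
                stall = 0
            prev = err
        if prev < best_err:
            best_w, best_err = w, prev
    return best_w, best_err              # best attempt if target not met
```

The training is interrupted after the allowed number of randomizations even if the target error was not reached, as described above.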

The number of iterations each algorithm needs to reach its best solution is considerably
smaller for the genetic algorithm. However, the computational effort required by this algorithm is
larger. Thus, a compromise between the two algorithms must be established. The
environment permits the user to define the number of iterations after which a migration from one
algorithm to the other occurs.


A practical problem one meets in the implementation of a neural controller for a real
process is how to initialize it before connecting it to the process. In order to avoid
unexpected behavior of the system (controller + process), the neural controller must first learn the
real dynamics of the process (observe that the net starts with randomized weights, so its input
/ output relation is unknown and random).

The solution we adopted was to generate an initial set of test vectors, obtained
through an assay of the process in open loop, considering that the controller is in a rest
or steady-state condition. Knowing the process input values and their relation to
the output values, we can create a set of vectors, which we call "true-vectors", and use them in a
"pre-training" phase of the neural controller. Through these vectors we can automatically make
inferences about the system's response time to an external disturbance, obtaining in this way a
parameter that is similar to the derivative action of a classical PID controller.

The "true-vectors" are also used in the adaptive training phase, working as a kind of
"anchor" that hinders undesirable behavior caused by the introduction of bad test vectors, which
can occur due to an inadequate choice of the dynamic acquisition method for new test vectors.

Due to the form in which the "true-vectors" are acquired, all the imperfections of the
process are automatically considered, including, for instance, transducer inaccuracies,
nonlinearities in the actuators or in the process itself.


If it is not possible to execute the assay in open loop, the true-vectors will need to be edited
directly by an expert. Once the preliminary controller is implemented, its performance can be
improved subsequently through an on-line acquisition of new test vectors.


The integrated neural control environment has all the necessary tools for the automatic on-
line test vector acquisition. This enables the automatic consideration of all the dynamic
characteristics of the process and the implementation of adaptive controllers.

A quite significant point of this environment is the possibility of performing off-line
training together with on-line test vector acquisition. This ability gives the
controller an adaptive capability. For instance, if a transducer coupled to the system starts to
present an error in its output after a certain time of good operation and after the implementation
of the neural controller, it would be desirable that the controller recognizes this change and adjusts
itself in order to compensate for the error. To accomplish this, the environment should read
new vectors at regular time intervals and use them to re-train the net. However, we cannot
increase the number of test vectors indiscriminately: one reason is the limitation of the
computer's memory, and another is the consequent increase of the training time. Neither can we
detect which vector should be altered, for it belonged to a former set of vectors, defined before the
transducer changed its behavior. The solution of replacing all the vectors should also not be used
indiscriminately, for it would be as if the net had lost its memory.

The environment offers a set of options that, when correctly combined, make it possible to
by-pass all these problems.

In the environment, it is possible to set and vary the immediate reference value in order to
facilitate the on-line vector acquisition. The reference value of the neural controller can be varied
manually, with pre- and post-triggering, or automatically, with pre-defined boundaries and a
controlled number of repetitions. The reference can also be programmed to follow a set of pre-
defined values from a vector with a controlled number of repetitions. These repetitions make it
possible to establish a flat stretch of values, which is very significant due to the inherent delays
associated with the process dynamics.

The alteration of the test vectors can be executed in blocks or as a whole, making it possible
to establish a smoothness criterion for the adaptation.

It is not useful to have equal or very similar test vectors. During the process of
acquisition of new test vectors, the environment can be programmed to filter out test vectors that
are similar to the ones we already have. The similarity criterion can be adjusted by the user.
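One possible similarity filter is sketched below (our illustration; the environment's actual, user-adjustable criterion is not specified in the text, and the Euclidean distance with a 0.05 tolerance is an assumption):

```python
# Sketch of a similarity filter for newly acquired test vectors:
# a candidate is discarded when it lies within a user-set distance
# of a vector already stored.
import math

def too_similar(candidate, stored, tolerance=0.05):
    """True if `candidate` is within `tolerance` (Euclidean distance)
    of any vector already in `stored`."""
    for v in stored:
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(candidate, v)))
        if dist < tolerance:
            return True
    return False

def acquire(candidate, stored, tolerance=0.05):
    """Add the candidate test vector only if it brings new information."""
    if not too_similar(candidate, stored, tolerance):
        stored.append(candidate)
    return stored
```

Raising the tolerance makes the filter more aggressive, keeping the training set small at the cost of coverage.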

If the controller stays a long time at the same operating point or region, it could
"overlearn" this point or region and "forget" the expected behavior in the others, due to
the continuous acquisition of new test vectors in this place. In order to solve this problem, an
option has been implemented in the environment that only enables the substitution of a test vector
if the present reference value is in the neighborhood of the reference value of some test vector.


Another option implemented in the environment concerns the condition for starting
a new training cycle based on the newly acquired test vectors. This process can be activated by the
user or through an external trigger. The external trigger is tied to a determined variation of
a previously defined input. If the change in the value of this input exceeds a certain
percentage, the acquisition of new test vectors and the corresponding training process are
automatically started.

When the process under study is very complex or presents many nonlinearities, the neural
net must be proportionally more complex in order to execute the control effectively. In the
integrated neural control environment, it is possible to decompose the main problem into many sub-
problems, with a different neural net responsible for each one. That means we can define a
different neural controller dedicated to each operating region of the process. The advantage of this
strategy is that the dedicated nets are significantly smaller and simpler (and consequently faster)
than the net we would need for the whole operating range of the process. The switching time
between nets is very short in comparison to the response time of the net itself and is completely
transparent to the rest of the system. In order to accomplish a smooth transition when switching,
it is possible to execute a superposition of the nets.
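The dispatch to region-dedicated nets, with superposition near a region boundary, might be organized like this (a sketch with assumed names and a hypothetical blend width; the environment's actual switching mechanism is not described in this detail):

```python
# Sketch of switching between small nets dedicated to operating
# regions, blending two nets near a boundary so the transition
# is smooth ("superposition of the nets").
def region_controller(nets, boundaries, blend=0.05):
    """`nets` maps region index -> callable net; `boundaries` are the
    upper limits of each region along the reference axis."""
    def control(reference, inputs):
        for i, limit in enumerate(boundaries):
            if reference <= limit:
                # Near the boundary, superpose this net and the next.
                if i + 1 < len(boundaries) and reference > limit - blend:
                    w = (reference - (limit - blend)) / blend
                    return (1 - w) * nets[i](inputs) + w * nets[i + 1](inputs)
                return nets[i](inputs)
        return nets[len(boundaries) - 1](inputs)
    return control
```

Each `nets[i]` would be a small trained net for its region; here any callable serves for illustration.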

The environment also enables the visualization of all the acquired data on the screen, in
order to supervise the training and control tasks.


In the specialized literature we can find alternative forms of implementing neural
controllers. Some of them make use of the mathematical model of the process, and the neural nets
are employed only for their adaptive capabilities.

In the present work we adopted the strategy of implementing a pure neural controller,
for which previous knowledge of the mathematical model of the plant is not required. We assumed
it is enough to have an approximate idea of the order of the mathematical model involved, and even
this is not imperative: knowledge of the model's order helps only to reduce the training time
until a satisfactory solution is found.

Two neural control models have been adopted: the so called direct control and the indirect
control, described hereinafter.

3.4.1 - Direct Control

In this model, the neural controller alone controls the process directly.
The main difficulty in the implementation of this method appears in the training phase of the
neural controller: the problem is knowing what output value the controller should produce for each
variation of its input. The error in the process output, relative to the reference value, should be
compensated by the neural controller in a similar way as if we were using a classical PID
controller. The system's inner dynamics (delays) must be respected.


The desired controller output value for a given input depends not only on the value of the
input itself but also on the preceding state of the system. This means that the input / output
mapping is not a trivial task.

In order to avoid having to know the mathematical model of the plant beforehand, and
still be able to extrapolate the controller's instantaneous output value for a given input condition, we
opted for a factor that changes the present controller output value in the direction to which
the system's error (between reference and process output) points. This factor, which we call
gain, can be linear or exponential and is basically added to the present output value of the net,
obtaining in this way the target value for the training.

A linear gain adds to the present controller output value a factor given by
the output value itself multiplied by the system's instantaneous error and by an adjustment speed factor:

target = actual_output + actual_output * instant_error * linear_gain_factor (3)

An exponential gain multiplies or divides the present controller output value by a
pre-defined factor raised to the system's instantaneous error:

target = actual_output * exponential_gain_factor ^ instant_error (4)

The exponential gain has the characteristic of speeding up the adjustment when the
system's error is large.
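Equations 3 and 4 can be written directly in code (the names follow the equations in the text; this is our transcription, not the environment's implementation):

```python
# The two target rules for direct-control training, equations 3 and 4.
def linear_target(actual_output, instant_error, linear_gain_factor):
    """Equation 3: linear gain added to the present output value."""
    return actual_output + actual_output * instant_error * linear_gain_factor

def exponential_target(actual_output, instant_error, exponential_gain_factor):
    """Equation 4: the gain factor raised to the instantaneous error
    multiplies (error > 0) or divides (error < 0) the present output,
    which speeds up the adjustment when the error is large."""
    return actual_output * exponential_gain_factor ** instant_error
```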

3.4.2 - Indirect Control

Another model supported by the environment is the so called indirect control. In this
model, two neural nets are used: one is the identifier net, whose task is to represent the behavior
of the system and is used on the training of the second net, and the other is the controller net,
which plays the role of the process controller itself. The environment provides all the necessary
conditions to implement both nets.

Initially the identifier net is trained, using the environment tools, in order to obtain the
training vectors needed to map the input / output behavior of the process under study. After the
training is finished, the behavior of the identifier net can be compared in real time to the behavior
of the process, checking in this way the results of the training.

The next step is the training of the controller net.

At this point we can train the controller net off-line, that is, without a connection to the
process, or we can accomplish an on-line training, in connection with the process. In the second
case, we have to start the training using the already mentioned "true-vectors".

The first option (off-line) enables a preliminary training of the controller net without any
influence over the process itself, and is recommendable when we are not sure about the resulting
behavior of the system in closed loop. In this option, a reduced amount of true-vectors is needed
in order to complete the matrix of values used during the training phase. The other vectors are
obtained from these true-vectors through a list of reference values. This can be seen as an off-line
test vector acquisition.

In the majority of cases, we can directly start with the second option (on-line), mainly
due to the safety provided by the true-vectors. In this case, it is suggested to use an amount of
true-vectors sufficient to cover the dynamic band of the reference. Practical experience has shown
that, in most cases, 20 true-vectors are enough. In this way it is assured that, at the
beginning of the dynamic vector acquisition process, the controller responds to the commands
in a way that cannot damage the process under control, while variations in the
reference are still accepted by the controller. The idea is to start the training using only the true-
vectors and afterwards improve it by the dynamic acquisition of new vectors.

One important point here concerns the adaptive behavior. During normal
operation, the weight values of the identifier net are not changed, only the weights of the
controller net: it is usually not necessary to have two adaptive structures in series. The reason for
the existence of the identifier net is to back-propagate the system's global error (reference -
process output) in order to make it available at the output of the controller net during the training.


In order to test the functionality of the integrated neural control environment, it has been
used to control a pilot plant described hereinafter.

The tests were done using a 127 V AC, 1800 VA generator driven by a single-phase motor.

The generator field was controlled through a hexaphase bridge made of SCRs. It is
important to point out that, besides the nonlinear behavior of the process itself, the hexaphase
bridge also has a nonlinear behavior, for it responds to the sine of the applied signal.

The tests were conducted by comparing the performance of the neural controller to a
classical PID controller previously dimensioned for this process (generator + motor).

It has been observed that, after an initial training time, it is possible to achieve a more
continuous control at the extremities of the controlled band with the neural controller. This is due
to the strong nonlinearity of this process, mainly for low voltage values.

Another observation is that, during the training phase, the indirect model converges more
promptly than the direct control model. On the other hand, the direct control model requires a
smaller computational effort.

The success or failure in the practical implementation of neural controllers is intimately
connected to the quality of the test vectors used in the net training. In the preliminary training we
obtained a satisfactory result after 20,000 iterations using only the backpropagation algorithm,
and this number was strongly reduced when genetic algorithms were combined with it.



The environment was implemented on an IBM-PC 386DX with a 40 MHz clock frequency.
For a neural controller with 5 neurons in the hidden layer, a training time of less
than 10 minutes was required. In the subsequent trainings, since the net was already pre-trained,
this time was reduced to a few seconds, depending on the number of vectors used.

At any moment of time we have one neural net controlling the process in real time and
another being trained in parallel. Once the training of the parallel net is finished (and assuming it
was successful), the weights of the controller net are quickly updated, without any interference in
the process under control. After the updating of the weights, the training process can be finished or
another training cycle can be automatically started.


Surely the use of neural controllers is not restricted to generator voltage and speed
control. It is important to point out that for each application there will be a more adequate
structure, which will be identified through tests. We suggest starting with a very
simple net, with a hidden layer of at least 3 neurons. Starting from the results obtained
with it, we can increase the complexity of the net's inner structure. It is recommendable to use
different nets specialized for each operating point or region of the process, in order to work only
with small nets and thus assure fast training and adequate real-time behavior.

One of the main motivations for the use of neural control resides in the fact that we can
implement very complex controllers without deep knowledge of specific control techniques.

Above all, neural nets are universal approximators. We have to consider
that, when a classical controller is implemented in the field, it is usually done in an environment
where the received information (from transducers) already presents a certain amount of embedded
error. In the case of neural controllers, as the information used to "design" them is obtained
directly from the process, all errors are automatically considered, including those implicit in the
process itself.

In view of what has been said so far, we can assert that neural control represents a
good option for most control problems. In each situation it is important to analyze whether
using this technique is convenient or not. The main difficulty usually resides in the fact that,
during the training phase, the user must have an adequate methodology to obtain the test
vectors. It is precisely here that the environment described in this paper can reduce the user's
effort, providing integrated support for data acquisition and subsequent training.


[1] SILVA, L. E. Borges da; TORRES, G. Lambert; SATURNO, E. C.; SILVA, A. P. Alves da;
OLIVER, G. "Neural Net Adaptive Schemes for DC Motor Drives". IEEE Industry Applications
Society Conference, Toronto, October 1993.

[2] PAO, Yoh-Han. "Adaptive Pattern Recognition and Neural Networks". Addison-Wesley
Publishing Company, United States of America, 1989.

[3] MASTERS, Timothy. "Practical Neural Network Recipes in C++". Academic Press, Inc., San
Diego, CA, 1993.

[4] McCULLOCH, W. S.; PITTS, W. H. "A logical calculus of the ideas immanent in nervous
activity". Bulletin of Mathematical Biophysics, 5, 115 - 133. 1943.

[5] KANATA, Yakichi; MAEDA, Yutaka. "Learning rule of neural networks for control". SICE,
777 - 789. 1994.

[6] BOSE, Bimal K. "Expert system, fuzzy logic, and neural network applications in power
electronics and motion control". Proceedings of the IEEE, vol. 82, no. 8, 1303 - 1323. 1994.

[7] TAKAHASHI, Hiroki; AGUI, Takeshi; NAGAHASHI, Hiroshi. "Designing adaptive neural
network architectures and their learning". Science of Artificial Neural Networks II, SPIE vol.
1966, 208 - 215. 1993.

[8] SHEBLÉ, Gerald B.; MAIFELD, Timothy T. "Unit commitment by genetic algorithm and
expert system". Electric Power Systems Research, 30, 115 - 121. 1994.

[9] WU, Q. H.; HOGG, B. W.; IRWIN, G. W. "A neural network regulator for turbogenerators".
IEEE Transactions on Neural Networks, vol. 3, no. 1, 95 - 100. Jan. 1992.

[10] TORRES, Germano L. "Notas do curso de introdução às redes neuronais". EFEI. 1992.

[11] SEPEDA FILHO, Idmilson H.; STEMMER, Marcelo R. "Redes Neurais". Notas Internas
LCMI/UFSC. Sep. 1993.

[12] RUMELHART, David E.; LEHR, Michael A.; WIDROW, Bernard. "Neural networks:
applications in industry, business and science". Communications of the ACM, vol. 37, no. 3, 93 -
105. March 1994.

[13] DJUKANOVIC, M.; SOBAJIC, D. J.; PAO, Y. H. "Neural-net based determination of
generator-shedding requirements in electric power systems". IEE Proceedings-C, vol. 139, no. 5,
427 - 436. Sep. 1992.

[14] CAMPAGNA, David P.; KRAFT, L. Gordon. "A comparison between CMAC neural
network control and two traditional adaptive control systems". IEEE Control Systems
Magazine, 36 - 43. April 1990.

[15] ROY, Serge. "Near-optimal dynamic learning rate for training back-propagation neural
networks". Science of Artificial Neural Networks II, SPIE vol. 1966, 277 - 283. 1993.

[16] JANAKIRAMAN, J.; HONAVAR, V. "Adaptive learning rate for increasing learning speed
in backpropagation networks". Science of Artificial Neural Networks II, SPIE vol. 1966, 225 -
235. 1993.

[17] CHANG, C. S.; SRINIVASAN, D.; LIEW, A. C. "A hybrid model for transient stability
evaluation of interconnected longitudinal power systems using a neural network/pattern
recognition approach". IEEE Transactions on Power Systems, vol. 9, no. 1, 85 - 92. Feb. 1994.

[18] MISTRY, Sanjay I.; NAIR, Satish S. "Identification and control experiments using neural
designs". IEEE Control Systems, 48 - 56. June 1994.

[19] VILLALOBOS, Leda; MERAT, Francis L. "Optimal learning capability assessment of
multicategory neural nets". Science of Artificial Neural Networks II, SPIE vol. 1966, 384 - 395.
1993.

[20] ZHANG, Y.; CHEN, G. P.; MALIK, O. P.; HOPE, G. S. "An artificial neural network based
adaptive power system stabilizer". IEEE Transactions on Energy Conversion, vol. 8, no. 1, 71 -
77. March 1993.

[21] WEERASOORIYA, S.; EL-SHARKAWI, M. A. "Laboratory implementation of a neural
network trajectory controller for a dc motor". IEEE Transactions on Energy Conversion, vol. 8,
no. 1, 107 - 113. March 1993.

[22] DJUKANOVIC, M.; SOBAJIC, D. J.; PAO, Y. H. "Preliminary results on neural net based
simulation of synchronous machine dynamic response". Electric Power Systems Research, 25, 159 -
168. 1992.

[23] YANG, H. T.; HUANG, K. Y.; HUANG, C. L. "An artificial neural network based
identification and control approach for the field-oriented induction motor". Electric Power Systems
Research, 30, 35 - 45. 1994.



Advanced Analysis of Material Properties using DataEngine

M. Poloni - MPA Stuttgart
Pfaffenwaldring 32, 70569 Stuttgart - Germany
Fax: +49 711 685 3053 e-mail:
R. Weber - MIT GmbH
Promenade 9, 52076 Aachen - Germany
Fax: +49 2408 94582 e-mail:

Abstract: Many industrial problems require adequate interpretation of the data present
in the respective situations. Process monitoring, diagnosis, quality control, and the
determination of material properties for use in life and/or damage prediction are examples of
such tasks. All the related problems have in common that a large amount of data describing the
respective area exists, but in most cases the information contained in the data is not used
sufficiently. Since the above problems have different characteristics, different
methods are needed to analyse the existing data. In this paper an overview of advanced
methods for data analysis is given. In addition, a software tool which supports the application
of these methods is presented, together with some applications, to emphasize the benefits of
advanced data analysis.
Keywords: cluster analysis; pattern recognition; data analysis; neural networks; material
properties; hardness; low cycle fatigue.

1. Introduction
This paper presents approaches to data analysis with intelligent technologies such as
fuzzy technology and neural networks. After the wave of successful industrial
applications of fuzzy control, data analysis has become a very fast growing and important
area where fuzzy and neural methods are applied. In particular, their combination offers high
potential for future use.
Section 2 gives a brief introduction to data analysis and the related terminology; section
3 proposes possibilities to support a potential user with methods and tools for data analysis.
While methods used in fuzzy control are based primarily on the formulation of fuzzy If-Then
rules, data analysis requires several different methods, as shown briefly in section 3.1.
In section 3.2 a software tool which contains the respective approaches is presented. Section
4 describes some examples in which methods for data analysis are used, and further possible
applications are pointed out. The conclusions show some directions for future developments
of data analysis.

2. Basics of Data Analysis
In general, data analysis can be considered as a process in which, starting from some given
data sets, information about the respective application is generated. In this sense data analysis
can be defined as the search for structure in data [4]. In order to clarify the terminology about
data analysis used throughout this paper, a brief description of its general process is given
below.
In data analysis, objects are considered which are described by some attributes. Objects can be, for example, persons, things (machines, products, ...), process states, sensor signals, time series, and so on. The specific values of the attributes are the data to be analysed. The overall goal is to find structure (information) in these data. Information is gained from the data in the sense that relationships between objects are detected by assigning objects to classes. This can be achieved by classifying the huge amount of data into relatively few classes of similar objects. This leads to a complexity reduction in the considered application, which allows for improved decisions based on the gained information.

Based on the derived insights, improved decisions can be made. Here one could think of decision support for diagnosis problems (medical or technical), forecasts (sales, stock prices), evaluation tasks (e.g. creditworthiness [15]), maintenance management, and quality control, as well as direct process optimization (alarm management, connection to process control systems, and development of improved sensor systems). Of course, this list of applications is by no means exhaustive; for more approaches see e.g. [4].

Figure 1 shows the process of data analysis described so far, which can be separated into feature analysis, classifier design, and classification.

Figure 1: Contents of Data Analysis (feature determination from numerical object data (sensors) and pair-relation data (humans); feature analysis: pre-processing, extraction, 2-D display; classifier design: identification, estimation; classification: prediction, assessment, control)

Here three steps of complexity reduction can be found:
• An object is characterized in the first step by all its attributes.
• From these attributes, the ones which are most relevant for the specific data analysis task are extracted and called features (feature extraction).
• According to these features, the given objects are assigned to classes (classifier design).

3. Support for Advanced Data Analysis

As stated in section 2, the applications of data analysis have a wide range and occur in diverse areas where different problem formulations exist. The process of data analysis described so far is not necessarily connected with fuzzy concepts. Both objects, which are described by several features, and classes can be represented in crisp or fuzzy terms. An object is said to be fuzzy if at least one of its features is fuzzy. This leads to the following four cases [13]:
• crisp objects and crisp classes
• crisp objects and fuzzy classes
• fuzzy objects and crisp classes
• fuzzy objects and fuzzy classes

If either features or classes are fuzzy, the use of fuzzy approaches is desirable. Knowledge-based fuzzy methods alone, as used in most fuzzy controllers, are, however, no longer sufficient to solve the complex tasks of data analysis. In chapter 3, methods and a tool are described which can be used to solve data analysis problems falling into the latter three cases. Chapter 4 contains two industrial applications where crisp objects and fuzzy classes are considered. Subsequently, an overview of some of the methods to solve the related problems is given.

3.1 Methods

Two groups of methods for data analysis can be distinguished:
• methods for data pre-processing
• methods for classifier design and classification

3.1.1 Data Pre-processing

Data pre-processing includes signal processing and also conventional statistical methods. If, for example, in quality control some acoustic signals have to be investigated, it becomes necessary to filter these data in order to overcome the problems of noisy input. In addition to these filter methods, transformations of the measured data, such as the Fast Fourier Transformation (FFT), could improve the respective results. Both the filter methods and the FFT belong to the class of signal processing techniques; for more details see e.g. [12].

Statistical approaches could be used to detect relationships within a data set describing a specific kind of application. Here correlation analysis, regression analysis, and discriminant analysis can be applied adequately; see also [9]. If, for example, two features from the set of available features are highly correlated, it could be sufficient for a classification to consider just one of these two.

3.1.2 Classifier design and classification

In order to find classes in some data sets, methods for classifier design and classification can be used. In Figure 1, objects, features, and classes are considered. Based on the specific data analysis formulation, these tasks can be performed with algorithmic techniques such as clustering methods, with knowledge-based systems, or with neural networks; see e.g. [1], [4]. Which of these methods is most appropriate depends on the specific problem structure.
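The pre-processing ideas of section 3.1.1 (noise filtering via an FFT-based transformation, and dropping one of two highly correlated features) can be sketched as follows. The cut-off fraction and the correlation threshold are illustrative values, not taken from the paper.

```python
import numpy as np

def fft_lowpass(signal, keep_fraction=0.1):
    """Suppress noise by zeroing the high-frequency part of the spectrum."""
    spectrum = np.fft.rfft(signal)
    cutoff = max(1, int(len(spectrum) * keep_fraction))
    spectrum[cutoff:] = 0.0                       # discard high-frequency content
    return np.fft.irfft(spectrum, n=len(signal))

def drop_correlated(features, threshold=0.95):
    """Keep only one of each pair of highly correlated feature columns."""
    corr = np.corrcoef(features, rowvar=False)
    keep = []
    for j in range(features.shape[1]):
        if all(abs(corr[j, k]) < threshold for k in keep):
            keep.append(j)
    return features[:, keep], keep
```

For a noisy sinusoidal test signal the filtered version lies closer to the clean one, and a feature column that is an exact multiple of another is removed before classification.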

see e. The automatic construction of such systems can be supported by fuzzy techniques from the area of machine learning. Interactive and automatic operation supported by an efficient and comfortable . Especially for data analysis the combination of these methods could give promising results. If an expert has some knowledge about the analysis of data (as for example in the area of diagnosis). It is expected that in the near future the areas of fuzzy technology. which are described by several features. this knowledge should be used for the evaluation. to fuzzy classes. Basic Module with File ::·: statistical methods^ Serial port INPUT Signal processing module OUTPUT • File Data ::·: ::%:: Modul e with fuzzv • Serial port acquisition elusieriegreethods . One of the most frequently used cluster algorithms which has been applied very extensively so far is the Fuzzy c­means (FCM) [2]. 3. 262 In the literature. and intelligent systems for classifier design and classification leads to a powerful software tool which can be used in a very broad range of applications. inference. see e. [10].g. * * ■ • Printer boards • ■ Module with fuzzy rule­ • 2D Data editor based öiethods Graphics • Module with neural Nets Module with nuero­fuzzy ! : ·: methods · Figure 2: Structure of DataEngine DataEngine is built using object oriented techniques in C++ and runs on all usual hardware platforms.g.3 New developments of methods for data analysis Recently a lot of research efforts are directed towards the combination of different intelligent techniques. [11]. a lot of different algorithmic methods for data analysis have been suggested [5]. [6]. One of these methods is a fuzzy version of Kohonen's network [3]. Objects belong to these classes with different degrees of membership. This class is similar to the approach taken in fuzzy control systems where fuzzy If­Then rules are formulated and a process of fuzzyfication.g. a neural network can be trained with these training examples.1. 
Especially the combination of signal processing. Here the elaboration of neuro­fuzzy systems is one cornerstone for the future development of intelligent machines. statistical analysis. see e. 3. Then knowledge­based methods for fuzzy data analysis are suitable [14]. and defuzzyfication leads to the final decision [15]. and genetic algorithms will be combined to a higher degree.A Software-Tool for Data Analysis DataEngine is a software tool that contains methods for data analysis which are described above. Here no explicitly formulated expert knowledge is required for the task of data analysis. This algorithm assigns objects. [8]. If an expert can not describe his knowledge explicitly but is able to deliver some examples for "correct decisions" which contain the expert knowledge implicitly.2 DataEngine . neural networks.
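The rule-based route mentioned above (fuzzification, inference over If-Then rules, defuzzification) can be illustrated with a minimal Mamdani-style sketch. The universe, the two rules, and all membership functions below are invented for illustration and are not taken from [14] or [15].

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def heater_power(temp):
    """IF temp is low THEN power is high; IF temp is high THEN power is low."""
    power = np.linspace(0.0, 100.0, 1001)             # output universe
    mu_low = tri(temp, 0.0, 10.0, 25.0)               # fuzzification of the input
    mu_high = tri(temp, 15.0, 30.0, 40.0)
    # inference: clip each consequent at its rule's firing strength, then aggregate
    agg = np.maximum(np.minimum(mu_low, tri(power, 50.0, 80.0, 100.0)),
                     np.minimum(mu_high, tri(power, 0.0, 20.0, 50.0)))
    return float(np.sum(power * agg) / np.sum(agg))   # centre-of-gravity defuzzification
```

A low temperature fires the first rule and yields a high defuzzified power, and vice versa, mirroring the fuzzification-inference-defuzzification chain described in the text.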

An efficient and comfortable graphical user interface facilitates the application of the data analysis methods. In general, applications of this kind are performed in the following three steps.

3.2.1 Modelling of a specific application with DataEngine

Each sub-task in an overall data analysis application is represented by a so-called function block in DataEngine (see Figure 3). Such function blocks represent software modules which are specified by their input interfaces, their output interfaces, and their function. Examples are a certain filter method or a specific cluster algorithm. Function blocks could also be hardware modules, like neural network accelerator boards. In such cases, direct process integration is possible by configuration of function blocks for hardware interfaces. This leads to a very high performance in time-critical applications.

3.2.2 Classifier design (off-line data analysis)

After having modelled the application in DataEngine, off-line analysis has to be performed with given data sets to design the classifier. This task is done without process integration.

3.2.3 Classification

Once the classifier design is finished, the classification of new objects can be executed. Depending on the specific requirements, this step can be performed in an on-line or off-line mode. If data analysis is used for decision support (e.g. in diagnosis or evaluation tasks), objects are classified off-line. Data analysis could also be applied to process monitoring and other problems where on-line classification is crucial.

[Figure 3: Screenshot of DataEngine]

4. Application to material properties determination

In the following, two applications of fuzzy clustering in the modelling of steel behaviour at high temperature are reported: the low cycle fatigue behaviour of a 1CrMoV rotor steel, and hardness-based temperature determination for lifetime prediction.

4.1 Material Low Cycle Fatigue behaviour modelling

Data from a material properties database about a 1CrMoV rotor steel have been extracted for the analysis [16]. The data are for the same alloy and temperature, but came from different sources; they are shown in Figure 4, where all the points are reported without considering their membership values.

[Figure 4: Data set - strain range (%) versus endurance (cycles) for the 1CrMoV rotor steel]

The first step performed was to see if it was possible to reconstruct the usual LCF curves using only numerical methods. These curves are characterised by two different regions: the first with higher values of strain range, the second with lower values of strain range. Variations of the strain range in the first region are less likely to affect the number of life cycles in a strong way; this effect is more evident in the second one. A possible approach is to adopt a clustering method to find out the regions and to determine eventual spurious measurements, i.e. points that do not clearly belong to any of the clusters. This type of evaluation can indicate the presence of noisy points and whether their number and characteristics justify an additional investigation.

The best result has been obtained under the assumption of the presence of two clusters; the results are reported in Figure 7. It should be noted that the models reported, in this case the local regression models based on the fuzzy clusters, were calculated with a purely data-based procedure. In terms of the fitted curves alone, the example shows that exploiting the possibilities of the advanced clustering methods available in DataEngine does not lead to a result much more comprehensive than the one which can be obtained by conventional regression analysis.
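For reference, the core update equations of the fuzzy c-means algorithm [2] used for this clustering can be sketched as below. This is a generic sketch with illustrative parameter values, not the DataEngine implementation used in the study.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Return the membership matrix U (n x c) and the c cluster prototypes."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)           # memberships of each object sum to 1
    for _ in range(iters):
        W = U ** m
        centres = W.T @ X / W.sum(axis=0)[:, None]             # weighted prototypes
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                          # standard FCM update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centres
```

Each object receives a degree of membership in every cluster, so ambiguous points near the boundary between the two strain-range regions are flagged by intermediate membership values instead of being forced into one class.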

[Figure 5: Fuzzy clustering environment in DataEngine]

[Figure 6: Resulting clusters - strain range (%) versus endurance (cycles), with the points assigned to Cluster 1 and Cluster 2]

It has to be taken into consideration, however, that the regression based on the fuzzy model is only a "by-product" of the analysis. This is an example of a "no-knowledge approach": a model to classify LCF data has been automatically obtained from the data set. For classical methods, the regression is the main (and almost the only!) result; the model derived by means of fuzzy cluster analysis, on the contrary, offers much more than a regression model only. For instance, an alpha-cut over the membership values of the data points in the clusters can be made.
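The alpha-cut over the cluster memberships, and the classification of a new data pair against existing prototypes, can be sketched as follows. The FCM membership formula is assumed, and the threshold value 0.8 is illustrative.

```python
import numpy as np

def memberships(x, centres, m=2.0):
    """Membership degrees of a new point with respect to existing prototypes."""
    d = np.linalg.norm(centres - x, axis=1) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

def alpha_cut(X, U, cluster, alpha=0.8):
    """Data points belonging to the given cluster with membership above alpha."""
    return X[U[:, cluster] > alpha]
```

Local regression for one cluster is then fitted only on `alpha_cut(X, U, cluster)`, i.e. only on the points with a high similarity to that prototype.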

[Figure 7: Lorenzian fit of data - strain range (%) versus endurance (cycles); first and second cluster, each with a Lorenzian fit and its 95% confidence band]

Considering only the data points belonging to the cluster prototypes with a membership higher than a fixed threshold (the alpha value), the local regression analysis is performed only over the data points with a high similarity to the model. Another possibility would be to use the model for the classification of new data, e.g. providing the membership values to the clusters of new data pairs, that is, determining the similarity of a new point to the general behaviour of the system as described by the model.

4.1.1 Integration of the results in a KBS

An intelligent software module, Expert Miner, is currently under development at MPA Stuttgart [17] in the framework of the European BRITE project BE5245 C-FAT [18]. It will be incorporated in a KBS about metallic material properties, implementing advanced methods using the DataEngine ADL (Application Development Library) and containing classifiers like the one described. The system will support the user through a KBS that analyses the user requests in terms of input-output data, resulting model requirements and available data, returning advice on which method or technique is more suitable to solve the current problem. Moreover, it will give the KBS user the possibility to apply the different kinds of advanced analysis in parallel to the classic methods. Expert Miner will enable the user, that is, a technician trained in his/her own field (in the case of C-FAT a metallurgist), to perform data mining tasks using advanced techniques, realising an important methodology transfer in the field of applied material science.

4.2 Hardness-based temperature estimation

A set of experimental data has been extracted from [19], regarding the determination of hardness properties for two different steels, namely 2¼Cr1Mo and 1Cr½Mo. In the following, hardness will be indicated with H, while the Sherby-Dorn parameter will be indicated with P. The expression of the Sherby-Dorn parameter is P = log t − C/T, where t is the

time in hours and T is the temperature in Kelvin. Two derived expressions will be used in the paper:

T = C / (log t − P)    (1)

t = 10^(P + C/T)    (2)

Hardness measures are used to estimate the temperature via (1), and from the temperature the remaining lifetime via (2).

4.2.1 2¼Cr1Mo steel analysis

[Composition table: wt% values of the elements C, Si, S, P, Mn, Ni, Cr, Mo, V and W for the material under consideration]

The data, coming from hardness measures after different time slots at fixed temperatures (varying from 550 to 750 degrees), are reported in Figure 8. The set of data has been processed using the fuzzy C-means algorithm. Two different tests have been performed: one assuming the presence of three clusters and the second assuming the presence of four clusters. The number of clusters is assumed by taking into consideration the possible material behaviour and through an evaluation of the results obtained from the numerical procedure. The best prediction results have been obtained using the four-cluster subdivision. The detected regions have been approximated using local regression models; these models were then fused together to reach a unified model for comparison purposes. The results related to four clusters are shown in Figure 9. For this 2¼Cr1Mo steel, comparisons are made with an approximation proposed in the European SPRINT 249 project guideline [20]: an exponential function built up starting from mechanistic assumptions and constraints (like the two asymptotes for H values of 180 and 115).

[Figure 8: Set of data - hardness versus Sherby-Dorn parameter P]
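Equations (1) and (2) can be exercised numerically as below. The constant C = 20000 is purely illustrative; as noted in the remarks, C is itself obtained by regression and is a source of uncertainty.

```python
import math

C = 20000.0   # illustrative value of the regression constant (not from the paper)

def sherby_dorn(t_hours, T_kelvin):
    """Sherby-Dorn parameter P = log10(t) - C/T."""
    return math.log10(t_hours) - C / T_kelvin

def temperature(t_hours, P):
    """Equation (1): estimate T from service time and P (P itself from hardness)."""
    return C / (math.log10(t_hours) - P)

def lifetime(P, T_kelvin):
    """Equation (2): estimate the lifetime t from P and T."""
    return 10.0 ** (P + C / T_kelvin)
```

A round trip recovers the starting values: for t = 10^4 h at T = 873 K, P ≈ −18.9, and feeding P back into (1) and (2) returns 873 K and 10^4 h.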

[Figure 9: Global function approximation for Steel E (2¼Cr1Mo) - hardness H versus parameter P, showing the four clusters, the fused regression model, and the SP249 guideline curve]

In this case the material behaviour is more regular, and this is reflected in the success of the different methods of approximation. Nonetheless, the question of the underestimation of temperature starting from hardness values remains open: the methods illustrated for this kind of steel (including the SP249 guideline) do not have a coherently conservative response; they can lead to more conservative or less conservative temperature estimations. This is obviously due to the fact that the equations are not a mechanistic description of the physical process. An underestimation can be responsible for dangerous non-conservative evaluations of the remaining lifetime (equation 2).

4.2.2 Remarks

• The lack of more data does not permit a check of the effectiveness of the curves obtained.
• The approximations obtained by regression analysis are reliable only over the interval in which data are available. An extrapolation of the behaviour outside these regions could lead to inconsistent results.
• The parameter C is a source of uncertainty, because it is derived through a regression analysis and not always in a reliable way. A suitable representation could be a fuzzy number.
• An approach based only on a best fit has the disadvantage of not always being conservative in its result. This problem comes from the best-fit approach used and can affect remaining life assessment methods based on the estimated temperature values. Different approximations of the material behaviour should be adopted to take into account the use that will be made of the models; lower-bound approximations or interval analysis-based regression models could play a role in this respect.

5. Conclusions

Data analysis has large potential for industrial applications. It can lead to the automation of tasks which are too complex or too ill-defined to be solved satisfactorily with conventional

techniques. This can result in the reduction of cost, time, and energy, which also improves environmental criteria. In contrast to fuzzy controllers, where the behaviour of the controlled system can be observed and therefore the performance of the controller can be stated immediately, many applications of methods for data analysis have in common that it will take some time to exactly quantify their influence. The applications reported show how the cited methods can be successfully introduced in the field of material properties analysis. At MPA Stuttgart, a research effort is currently under way to exploit the possibilities of advanced data analysis. The authors believe that the link of the software package with the available material databases can bring new insight into many difficult material analysis problems.

References

[1] H. Bandemer, W. Näther, Fuzzy Data Analysis (Kluwer, Dordrecht, 1992).
[2] J.C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms (Plenum Press, New York, 1981).
[3] J.C. Bezdek, E.C.-K. Tsao, N.R. Pal, Fuzzy Kohonen Clustering Networks, in: IEEE International Conference on Fuzzy Systems (San Diego, 1992) 1035-1043.
[4] J.C. Bezdek, S.K. Pal, Eds., Fuzzy Models for Pattern Recognition (IEEE Press, New York, 1992).
[5] A. Kandel, Fuzzy Techniques in Pattern Recognition (John Wiley & Sons, New York, 1982).
[6] B. Kosko, Neural Networks and Fuzzy Systems (Prentice-Hall, Englewood Cliffs, 1992).
[7] R. Krishnapuram, J. Lee, Fuzzy-Set-Based Hierarchical Networks for Information Fusion in Computer Vision, Neural Networks 5 (1992) 335-350.
[8] Y.-H. Pao, Adaptive Pattern Recognition and Neural Networks (Addison-Wesley, Reading, Mass., 1989).
[9] Fuzzy Sets in Pattern Recognition, in: P.A. Devijver, J. Kittler, Eds., Pattern Recognition Theory and Applications (Springer-Verlag, Berlin, 1987) 383-391.
[10] R. Weber, Fuzzy-ID3: A Class of Methods for Automatic Knowledge Acquisition, in: Proceedings of the 2nd International Conference on Fuzzy Logic & Neural Networks (Iizuka, Japan, July 1992) 265-268.
[11] R.J. Schalkoff, Pattern Recognition: Statistical, Structural and Neural Approaches (John Wiley & Sons, New York, 1992).
[12] S.M. Weiss, C.A. Kulikowski, Computer Systems that Learn (Morgan Kaufmann, San Mateo, CA, 1991).
[13] J. Watada, Methods for Fuzzy Classification, Japanese Journal of Fuzzy Theory and Systems 4 (1992) 149-163.
[14] H.-J. Zimmermann, Fuzzy Sets, Decision Making, and Expert Systems (Kluwer, Boston, 1987).
[15] H.-J. Zimmermann, Fuzzy Set Theory and Its Applications, 2nd Edition (Kluwer Academic Publishers, Dordrecht, 1991).
[16] Holdsworth, S. (1994) BRITE-EURAM C-FAT Project BE 5245: KBS-aided Prediction of Crack Initiation and Early Crack Growth Behaviour Under Complex Creep-

Fatigue Loading Conditions.
[17] M. Poloni, A.S. Jovanovic, H. Schäfer (1995) Extraction of knowledge from data: application in power plants. In: Third European Congress on Intelligent Techniques and Soft Computing, EUFIT '95, Aachen, Germany.
[18] M. Poloni, H. Ellingsen, A.S. Jovanovic (1995) Data mining and dynamic worked examples in the C-FAT KBS. In: Knowledge-Based (Expert) System Applications in Power Plant and Structural Engineering (A.S. Jovanovic, A.C. Lucia, Eds.), EUR 15408 EN, Joint Research Centre of the European Commission, pp. 235-243; C-FAT report CFAT/T6/MPA/220a.
[19] Carruthers, R., Day, R.V. (1968) The Spheroidisation of some Ferritic Superheater Steels. Report SSD/NE/R.138, Scientific Services Department, North Eastern Region, Central Electricity Generating Board.
[20] ERA Technology (1994) SPRINT 249 Guideline GG2.

FUZZY LOGIC - AN APPLICATION FOR GROUP TECHNOLOGY

José Arnaldo Barra Montevechi
Escola Federal de Engenharia de Itajubá
CP 50, 37500-000, Itajubá, MG, Brazil
Tel.: +5535 629 1212, Fax: +5535 629 1148
e-mail: arnaldo%efei.uucp@dcc.ufmg

Paulo Eigi Miyagi
Universidade de São Paulo - Escola Politécnica
CP 61548, 05508-900, São Paulo, SP, Brazil

ABSTRACT

In this article, a procedure for obtaining part families using fuzzy logic is described. This approach is an interesting application of group technology, since it can aggregate design and manufacturing information and permits taking into account the uncertainties and ambiguities usually present in manufacturing. Aspects of the data base, the similarity analysis with the resemblance relation, and the part processing information are presented. Finally, the article describes the use of fuzzy backward reasoning for the classification of new parts into established families. This procedure makes possible the elaboration of a software tool of interest for the manufacture of small lots.

KEYWORDS: Group Technology, part family, fuzzy logic, membership attribution, fuzzy backward reasoning

1 - INTRODUCTION

Group Technology (GT) is a philosophy which tries to analyze and arrange parts and manufacturing processes according to their design and manufacturing similarities [2] [5] [6]. Families are established to make it possible to rationalize the manufacturing processes, for example through the formation of manufacturing cells, or to reduce the number of drawings in the design department. Likewise, GT integrates the information of design and manufacture and makes possible a rationalization of resources. Most papers about part family formation assume that information such as cost, processing time and part demand is accurate; in many cases this does not occur, and few articles have been published dealing with the problem of uncertainty. It is usually supposed that a part belongs to only one family. Moreover, those articles consider such questions in isolation, neglecting the development of methods to be shared by all company users [7]. The analysis of grouping, making use of fuzzy logic, can provide a solution to this problem.

In order to obtain an efficient and flexible classification which considers uncertainties, this article describes a procedure that makes use of fuzzy logic for part family formation. The part similarities consist, in essence, of a close classification in geometry, function, material and/or process. Two questions arise: how to assign membership to the features that will be analysed, and how to translate this information into numerical values. First, the details of the data base are described, since it provides the features that will be classified according to their similarities and is the basic aspect for family formation. The grouping principle employed is also described, which consists of choosing one threshold value for the similarity: once this threshold value is chosen, two elements will be in the same group if the similarity between them is greater than the threshold value. Since the similarity relationship is not necessarily transitive, it is necessary to employ fuzzy matrix theory to form the transitive closure, which permits separating the data into exclusive and separated groups that are, in essence, equivalence classes over a certain threshold value. For process similarity, a procedure is shown to search for similarity information that should guide the formation of manufacturing cells. Finally, the authors believe that the use of backward reasoning makes it possible to classify a new part into an established family in a faster and easier way, because it is necessary to answer a smaller number of questions, without the rigidity that is frequent in the common methods.

It may not be sufficient to describe part features using yes or no labels, which is common in Classification and Codification Systems (CCS), for example when accurate classification is required [7]. Fuzzy membership functions permit taking into account the uncertainties inherent in the description of part features. The membership value, which lies between 0 and 1, expresses to what extent the part possesses the feature: the closer the value is to 1, the more of the feature the part has. The use of this technique makes the part family formation more sensitive, thus producing more realistic results and eliminating the shortcomings of the currently employed methods. This is not possible in CCS.

2 - AN OBJECT-ORIENTED DATA BASE

An extremely important component of the proposed methodology for obtaining the part families is the data base, which provides the features to be classified according to their similarities. In the data base, all important information about the several component features of the company has to be available, including design and manufacturing features. The main purpose of the data base is to provide a single reference for the whole company, as suggested by Figure 2. This is very important because it permits data handling by all company users, which does not happen with most CCS. The part input information is made according to the description of its features. To this purpose, it should be borne in mind that no coding procedure is proposed, because after a coding is obtained the insertion of new features becomes difficult. Developing an object-oriented data base permits adding more data and features easily. Features that may be in the data base are shown in Figure 1.
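The max-min transitive closure and the threshold grouping described above can be sketched as follows; the similarity values in the example are toy numbers.

```python
import numpy as np

def transitive_closure(S):
    """Max-min transitive closure of a reflexive, symmetric fuzzy relation."""
    T = S.copy()
    while True:
        # max-min composition: (T o T)[i, j] = max_k min(T[i, k], T[k, j])
        T2 = np.max(np.minimum(T[:, :, None], T[None, :, :]), axis=1)
        if np.allclose(T2, T):
            return T
        T = T2

def groups_at(T, threshold):
    """Partition induced by the alpha-cut of the transitive closure."""
    n = len(T)
    label, g = [-1] * n, 0
    for i in range(n):
        if label[i] == -1:
            for j in range(n):
                if T[i, j] >= threshold:
                    label[j] = g
            g += 1
    return label
```

Because the closure is transitive, every alpha-cut yields exclusive and separated groups (equivalence classes); lowering the threshold merges groups into larger families.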

Basically, in the data base it is possible to have two types of features, namely quantitative and qualitative:
• Quantitative features: representative of part properties that can be expressed by numbers, such as length, diameter, etc.
• Qualitative features (uncertain/fuzzy): features that describe attributes in uncertain terms, such as easy, hard, complex, high surface roughness, etc.

[Figure 1: Some prismatic part features for the data base - quantitative and qualitative attributes: holes (max./min. diameter, number of holes); pockets (max./min. length, width and depth, number of pockets); slots (max./min. length, width and depth, number of slots); threads (max./min. diameter and pitch, number of holes); basic shape (total length, max. width, max. depth); technology (min. tolerance, complexity); material (strength, hardness, optimum cutting speed); complexity (shape, number of different planes, of rotational elements and of gears); production and annual production (max./min. numbers); process (number of lathes, drilling and milling machines used)]

3 - MEMBERSHIP ATTRIBUTION

The parts to be classified by their similarities are represented in the form of an n × m matrix (parts × features). This matrix is formed with the data base features relevant for the grouping analysis, and is shown in Figure 3.

[Figure 2: A single data base shared by design, manufacture, process planning, production planning, metrology and management]

[Figure 3: The matrix X = [x_ij] of n parts × m features]

Since the features can be quantitative or qualitative, it is important to develop a procedure for the grouping which can deal with both kinds of features in a unified way. In this methodology, the data of the different features are expressed by memberships: each part feature is given a membership between 0 and 1. Otherwise there would be a scale problem, and it would be necessary to convert the data of the different features to the same unit.

Membership values of quantitative features can be expressed directly as a function of the values obtained from the data base. As an example, consider the length feature, whose values for 7 parts are given by the vector [10.00, 8.50, 8.00, 7.50, 6.25, 5.00, 3.75]. The membership values for this feature can be calculated by dividing each length value by the largest value of the vector. This procedure results in another vector, the membership vector, given by [1.000, 0.850, 0.800, 0.750, 0.625, 0.500, 0.375], which puts an end to the scale problem. The other parts belonging to the sample have their membership values calculated in the same way. These values can also be given by any of the expressions suggested by [9], such as those in Figure 4, which automatically give the membership values of the selected features for the matrix of Figure 3. Likewise, it is possible to use graphs with a more suitable membership function. For example, if the analysis of a family of lengthy shafts is desired, the membership function is the one given in Figure 5, and the part in question has 100 mm of length, the membership the part will have for this geometric feature is 0.55.
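The normalisation of a quantitative feature can be reproduced directly (the seven length values are those of the example above):

```python
import numpy as np

# Length values of the 7 parts of the example
lengths = np.array([10.00, 8.50, 8.00, 7.50, 6.25, 5.00, 3.75])

# Divide by the largest value of the vector to obtain the membership vector
membership = lengths / lengths.max()
```

The resulting membership vector is [1.000, 0.850, 0.800, 0.750, 0.625, 0.500, 0.375], so all length data now share the same 0-to-1 scale as every other feature.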

[Figure 4: Expressions for membership attribution, of the form x̄_ij = x_ij / x_j,max and x̄_ij = (x_ij − x_j,min) / (x_j,max − x_j,min)]

[Figure 5: Membership by graphic - a dialog for selecting a graph or an equation for membership attribution; the example graph maps the max. length (40 to 200 mm) onto a membership value]

With qualitative features, the attribution of membership is not so easy. How can qualitative information, such as part complexity or high roughness, be transformed into a number? To solve this problem, it is possible to utilize the AHP (Analytic Hierarchy Process) method [8], which by means of comparative analyses permits the calculation of memberships for qualitative attributes. It is necessary to build a matrix of comparison of the attributes of each feature. These comparative analyses are made in pairs between the attributes of the feature in question. Features are, for example, length, roughness, shape complexity, etc., whereas the attributes are the feature designations, such as small, large, complex, not complex, etc.

These matrices are calculated by evaluating the importance of one attribute over another. In the matrix of comparison A, the entry a_ij indicates the number that estimates the relative membership of attribute A_i when it is compared with attribute A_j. Each entry of the matrix is a judgment over a pair and, obviously, a_ij = 1/a_ji. Saaty proposed that attribute comparisons should use values from the finite set {1/9, 1/8, ..., 1/2, 1, 2, ..., 8, 9}, attributed through the following scale:

1. If A and B are equal in importance, value 1 is attributed.
2. If A is a little more important than B, value 3 is attributed.
3. If A is much more important than B, value 5 is attributed.
4. If A is obviously or very strongly more important than B, value 7 is attributed.
5. If A is absolutely more important than B, value 9 is attributed.
6. If A is a little less important than B, value 1/3 is attributed.
7. If A is much less important than B, value 1/5 is attributed.
8. If A is obviously or very strongly less important than B, value 1/7 is attributed.
9. If A is absolutely less important than B, value 1/9 is attributed.

After the matrix of comparison is defined, its largest eigenvalue (λ_max) and the respective eigenvector are calculated. The eigenvector represents the memberships that can be used for the attributes in question, and the eigenvalue is a measure of the consistency of the result. With the eigenvector, the memberships that can be used for the similarity classification are available, obviously after testing the consistency of the result to conclude that the answer is good [8].

To illustrate this method, one of the features which may be important for obtaining similarity is the complexity of shape evaluated by an analyst (other qualitative features, such as high roughness, can be treated in the same way). The attributes of this feature, originating from the data base, range from very complex to complex, mean complexity, little complex, low complexity and very low complexity; all the parts of the data base have one of these qualitative values for the feature of complexity. By means of the scale of priority shown, a specialist will provide the matrix A of Figure 6.

Figure 6 - Matrix A of comparison by pairs for the feature shape complexity
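The eigenvector computation just described can be sketched as follows. The pairwise comparison matrix below is a hypothetical stand-in for the specialist's judgments of Figure 6 (its entries are not taken from the paper), built with Saaty's scale and the reciprocal property a_ij = 1/a_ji:

```python
import numpy as np

# Hypothetical Saaty-style comparison matrix for the six shape-complexity
# attributes (very complex ... very low complexity); invented values.
A = np.array([
    [1,   2,   3,   5,   7,   9],
    [1/2, 1,   2,   3,   5,   7],
    [1/3, 1/2, 1,   2,   3,   5],
    [1/5, 1/3, 1/2, 1,   2,   3],
    [1/7, 1/5, 1/3, 1/2, 1,   2],
    [1/9, 1/7, 1/5, 1/3, 1/2, 1],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                # principal (largest) eigenvalue
lam_max = eigvals.real[k]
memberships = np.abs(eigvecs[:, k].real)
memberships /= memberships.max()           # normalize: greatest weight = 1

# Saaty's consistency index: CI = (lambda_max - n) / (n - 1);
# CI near 0 means the judgments are nearly consistent.
n = A.shape[0]
CI = (lam_max - n) / (n - 1)
```

The normalized principal eigenvector plays the role of the first normalized eigenvector of Figure 7, and CI is the consistency check recommended in [8].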

After the memberships for the several features (qualitative and quantitative) are calculated, it is necessary to have a procedure to obtain the clusters of similar parts. The memberships, after the calculation, are given by the first normalized eigenvector in Figure 7 (the eigenvector normalized so that the greatest weight is equal to 1); these memberships represent the importance of the complexity of shape for each part.

Figure 7 - Eigenvector for membership attribution (first and second normalized eigenvectors, together with λ_max, CI and CR)

4 - ANALYSIS OF SIMILARITY FOR THE FEATURES

A principle that can be used for the grouping is to choose a threshold value for the similarity. Once this threshold value is chosen, two elements will be in the same grouping if the similarity between them is larger than the value of comparison.

The values of memberships now give a weight for each of the parts from the data base. To estimate the resemblance between pairs of data, it is possible to use the convention of arranging the data in the form of a matrix, called S, which is an n × n matrix; each entry in this matrix represents the proximity between two parts, that is, the similarity between different parts. To obtain this matrix it is possible to utilize several formulae for the calculation of similarity, as exemplified by expressions (1) to (5), where p is the number of features and μ_k(x_ik) is the membership of part i for feature k. The first two are:

S(x_i, x_j) = (1/p) Σ_{k=1..p} min(μ_k(x_ik), μ_k(x_jk))                                     (1)

S(x_i, x_j) = Σ_{k=1..p} min(μ_k(x_ik), μ_k(x_jk)) / Σ_{k=1..p} max(μ_k(x_ik), μ_k(x_jk))    (2)
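The construction of the similarity matrix S with the averaged-minimum measure of expression (1) can be sketched as below; the membership matrix M is invented for illustration:

```python
import numpy as np

def similarity_matrix(M):
    """Pairwise similarity between parts using expression (1):
    S(x_i, x_j) = (1/p) * sum_k min(mu_ik, mu_jk).
    M is an (n_parts x p_features) array of feature memberships."""
    n, p = M.shape
    S = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            S[i, j] = np.minimum(M[i], M[j]).sum() / p
    return S

# Illustrative memberships for 3 parts over 3 features (invented values).
M = np.array([
    [1.0, 0.8, 0.2],
    [0.9, 0.7, 0.1],
    [0.1, 0.2, 0.9],
])
S = similarity_matrix(M)
```

The resulting S is symmetric, as required for the fuzzy grouping analysis; here parts 1 and 2 come out more similar to each other than either is to part 3.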

The remaining measures are:

S(x_i, x_j) = Σ_{k=1..p} μ_k(x_ik)·μ_k(x_jk) / [ (Σ_{k=1..p} μ_k(x_ik)²) · (Σ_{k=1..p} μ_k(x_jk)²) ]^(1/2)    (3)

S(x_i, x_j) = 1 − (1/p) Σ_{k=1..p} |μ_k(x_ik) − μ_k(x_jk)|                                    (4)

S(x_i, x_j) = 2 Σ_{k=1..p} min(μ_k(x_ik), μ_k(x_jk)) / Σ_{k=1..p} (μ_k(x_ik) + μ_k(x_jk))     (5)

These measurements of similarity usually give the same results if the groupings are compact and well separated; nevertheless, if the groupings are near one another, really different results can be calculated [7]. With the similarities it is possible to obtain the matrix of Figure 8:

      | s_11 ... s_1n |
  S = |  ...      ... |
      | s_n1 ... s_nn |

Figure 8 - Matrix of similarities

The matrix of Figure 8 is symmetric and can be used directly in the analysis of fuzzy grouping. The similarity of parts consists of a very close classification in geometry, function, material and/or process. But the matrix does not have the property of being transitive, so the conclusion that if A is similar to B, and B is similar to C, then A is similar to C, is not possible. To deal with this problem, it is possible to use fuzzy theory to transform this matrix [7]: the matrix of similarity is transformed into a transitive matrix, obtained through membership manipulation. The relationship of composition of a fuzzy matrix is defined by (6) and (7):

R' = R ∘ R                                  (6)

r'_ik = max_j min(r_ij, r_jk)               (7)

The transitive matrix is the fuzzy equivalent matrix, which can simply be calculated by (8) [3].
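The max-min composition of (6) and (7) and the transitive closure of (8) can be sketched as follows, repeating R := R ∪ (R ∘ R) until the matrix stops changing:

```python
import numpy as np

def maxmin_compose(R, Q):
    """Fuzzy max-min composition, expression (7):
    (R o Q)_ik = max_j min(r_ij, q_jk)."""
    return np.max(np.minimum(R[:, :, None], Q[None, :, :]), axis=1)

def transitive_closure(S):
    """Fuzzy equivalent matrix of expression (8): the union
    R ∪ R² ∪ ... ∪ Rⁿ, computed iteratively until stable."""
    R = S.copy()
    while True:
        R2 = np.maximum(R, maxmin_compose(R, R))
        if np.array_equal(R2, R):
            return R
        R = R2

# Small illustrative similarity matrix (invented values).
S = np.array([[1.0, 0.8, 0.4],
              [0.8, 1.0, 0.5],
              [0.4, 0.5, 1.0]])
T = transitive_closure(S)
```

In this toy case the closure raises the entry for parts 1 and 3 from 0.4 to 0.5, because they are both similar to part 2 at level 0.5, exactly the transitivity repair the text describes.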

R* = R ∪ R² ∪ ... ∪ Rⁿ                     (8)

Finally, given one α level, the groupings of similar parts are obtained for the level chosen. An example of decomposition for obtaining the families can be better understood through Figures 9 and 10. Figure 9 shows the α-cuts of a similarity relationship for some α values (0.3, 0.8, 0.9 and 1), and Figure 10 shows the groupings formed for each one of the α levels. With different α values, different classifications appear: the greater the α value, the fewer parts are classified in each family, and thus more families are formed. For example, for α = 0.9 there are three groupings, the first one consisting of parts A, D and E, the second one of part B and the third one of part C.

Figure 9 - Decomposition of a similarity relationship (crisp α-cut matrices of the 5 × 5 similarity relationship at α = 0.3, 0.8, 0.9 and 1)

Figure 10 - Tree shaped decomposition (parts 1 to 5 grouped into families at each α-cut)
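The α-cut grouping can be sketched as below; the 5 × 5 matrix mimics the structure of Figures 9 and 10, but its values are invented for the sketch:

```python
import numpy as np

def alpha_cut_families(T, alpha):
    """Group parts whose transitive similarity is >= alpha.
    T must be a fuzzy equivalent (transitive) matrix, so the alpha-cut
    is an equivalence relation and the families are its classes."""
    n = len(T)
    B = T >= alpha                      # crisp alpha-cut matrix
    seen, families = set(), []
    for i in range(n):
        if i not in seen:
            fam = [j for j in range(n) if B[i, j]]   # class of part i
            seen.update(fam)
            families.append(fam)
    return families

# Transitive similarity between 5 parts (indices 0-4 stand for parts 1-5).
T = np.array([
    [1.0, 0.3, 0.3, 0.9, 0.9],
    [0.3, 1.0, 0.3, 0.3, 0.3],
    [0.3, 0.3, 1.0, 0.3, 0.3],
    [0.9, 0.3, 0.3, 1.0, 0.9],
    [0.9, 0.3, 0.3, 0.9, 1.0],
])
```

At α = 0.9 this yields three families, {1, 4, 5}, {2} and {3}, while at α = 0.3 all five parts collapse into a single family, reproducing the behaviour of the tree of Figure 10.
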

5 - ANALYSIS OF PROCESS SIMILARITY

A problem that may happen with the formulation for obtaining the similarity seen in the last item, where the information was about the part families, is that this information may not be enough for cell formation. If n parts and m machines are being considered to obtain manufacturing cells, usually the representation of the machines that process each part is given by a part × machine matrix, as seen in (9):

         X1   X2   X3  ...  Xn
   Y1  [ u11  u12  u13 ... u1n ]
   Y2  [ u21  u22  u23 ... u2n ]
   Y3  [ u31  u32  u33 ... u3n ]     BINARY MATRIX        (9)
   ...
   Ym  [ um1  um2  um3 ... umn ]

In (9): X_j is a part, j = 1, 2, 3, ..., n; Y_i is a machine, i = 1, 2, 3, ..., m; and u_ij represents the relationship between part j and machine i (u_ij = 0 or 1). For example, u_12 = 1 shows that part 2 visits machine 1, but it does not show the possibility of another machine also making part 2, which is not evident in this formulation. Due to the inflexibility of this matrix, another matrix should be developed, which should be called nonbinary, where the vagueness will also be considered. To prevent this loss of information, it is possible to obtain a matrix of memberships between parts and machines, represented by (10):

         X1   X2   X3  ...  Xn
   Y1  [ u11  u12  u13 ... u1n ]
   Y2  [ u21  u22  u23 ... u2n ]
   Y3  [ u31  u32  u33 ... u3n ]     NONBINARY MATRIX     (10)
   ...
   Ym  [ um1  um2  um3 ... umn ]

In (10), X_j and Y_i are as before, and u_ij now represents the membership relationship between part j and machine i. In (10) it is possible to observe the following properties:

0 ≤ u_ij ≤ 1   for i = 1, 2, ..., m and j = 1, 2, ..., n       (11)

Σ_{j=1..n} u_ij > 0   for i = 1, 2, ..., m                     (12)

In this way an algorithm for checking the grouping of machines may be used.

The entries of the nonbinary matrix indicate the intensity with which a machine is designated to process a determined part: a number near 1 means a great potentiality to process the part, while with a number near 0 the machine would definitely not be appropriate. The elements of matrix (10) are calculated from mixed functions between machines and components. Let:

X_j = {x_j1, x_j2, ..., x_jp}, j = 1, 2, ..., n  -  the fuzzy set of features of component j;
M = {Y_1, Y_2, ..., Y_m}  -  the set of available machines.

The following steps are necessary to obtain these values:

1. To define the membership functions for each pair of feature × machine, it is necessary to obtain the membership functions of the main part features. The main criterion is to select the features that can contribute to the differentiation of the parts at the time of the grouping.

2. To compute the degree of membership for each pair of feature × machine, the membership function of the feature being studied is evaluated, for each machine, at the x value (from the data base) of the feature in question.

3. To compute the combined index for each pair of machine × part. This index is called combined because usually a part has more than one feature processed by the same machine; it expresses the capacity of the determined machine to process the part and goes into the nonbinary matrix.

To illustrate a membership function for one pair of feature × machine, it is possible to think about the tolerances a certain machine can obtain, which is an interesting proposition from [12]. This function may be represented by Figure 11.

Figure 11 - Membership function of tolerances for a determined machine (breakpoints t1, t2 and t3)

The membership values from Figure 11 lie in the ranges designated by (13):

          0                    x < t1
μ(x) =    1                    t1 ≤ x ≤ t2                     (13)
          (t3 − x)/(t3 − t2)   t2 < x ≤ t3
          0                    x > t3

After obtaining all the membership functions associating machine × feature, it is possible to begin the next step, the calculation of the combined index for each pair of machine × part.
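The piecewise membership of expression (13) translates directly into code; t1, t2 and t3 are the breakpoints of Figure 11:

```python
def tolerance_membership(x, t1, t2, t3):
    """Piecewise membership of expression (13): full membership for
    tolerances in [t1, t2], decaying linearly to zero at t3, and
    zero outside [t1, t3]."""
    if x < t1 or x > t3:
        return 0.0
    if x <= t2:
        return 1.0
    return (t3 - x) / (t3 - t2)
```

For example, with breakpoints (1, 2, 4) a tolerance of 1.5 has membership 1, a tolerance of 3 has membership 0.5, and tolerances of 0.5 or 5 have membership 0.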

In agreement with the Zadeh theory [10], the membership between machine i and component j may be given by (14):

μ(Y_i, X_j) = min_{1≤k≤p} μ_{Y_i}(x_jk)                        (14)

where μ_{Y_i}(x_jk) is the membership of machine i related to feature k of component j, n is the number of parts, p the number of features and m the number of machines. With this procedure it is easy to establish the memberships for all the pairs of machine × part and to construct the nonbinary matrix.

It is important to observe that the binary matrix used in most methods has a different interpretation from the nonbinary one. In the former, the entries represent the relationship of incidence between a part and a machine: it should be assured that all machines are in one group, where the parts have entry 1 with those machines, and any entry outside the groups will be an exceptional element. In the nonbinary matrix, the entries represent the degree with which a component can be processed in a machine; it is not necessary to assure that all the non-zero entries are in groups, as long as alternative machines are available. If the necessary machines are grouped in the cell for some components, then the outside elements for these parts become exceptional elements. The relationship of correspondence should remain in the resultant matrix.

To illustrate this procedure for a small example, suppose that for a hypothetical machine 1 (of the m available) the pertinence functions are those of Figure 12; it is then possible to build the chart of Figure 13 for each one of the 7 parts that are to be grouped.

Figure 12 - Example of membership functions (finishing tolerance and machine capacity) for the features analyzed relative to machine 1

Part:                 1    2    3    4    5    6    7
finishing tolerance   1    1    1    0.8  1    0.1  0.9
machine capacity      1    0    1    1    0    1    1

Figure 13 - Memberships given to each pair part × feature for machine 1
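Expression (14) applied to the chart of Figure 13 can be sketched as follows, producing the row of memberships for machine 1 that feeds the nonbinary matrix:

```python
def combined_index(feature_memberships):
    """Combined index of expression (14): the membership of a machine
    for a part is the minimum of its memberships over the part's
    features."""
    return min(feature_memberships)

# Memberships of machine 1 for the two features of each of the 7 parts
# (finishing tolerance, machine capacity), as read from Figure 13.
chart = {
    1: (1.0, 1.0), 2: (1.0, 0.0), 3: (1.0, 1.0), 4: (0.8, 1.0),
    5: (1.0, 0.0), 6: (0.1, 1.0), 7: (0.9, 1.0),
}
machine1 = [combined_index(mus) for part, mus in sorted(chart.items())]
```

The resulting list [1, 0, 1, 0.8, 0, 0.1, 0.9] is exactly the machine-1 vector of expression (15).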

In the same way that the values of Figure 13 are given for machine 1, the process should be repeated for all the machines that will be analyzed, so that m matrices analogous to this one are obtained. For the application of formula (14) to the chart of Figure 13, the vector (15) is calculated, which expresses the memberships of machine 1 for the 7 parts:

machine 1:  [ 1   0   1   0.8   0   0.1   0.9 ]               (15)

If the process is repeated for the m machines, m vectors such as (15) are calculated. If for the 7-part example a universe of 7 machines is available, a nonbinary matrix such as (16) may be obtained, with one row per machine (M1 to M7) and one column per part (P1 to P7); row M1 is the vector (15). This is the nonbinary matrix that should be studied for obtaining the similarities of the process.

[Matrix (16) - the 7 × 7 nonbinary machine × part matrix]

Once the nonbinary matrix is obtained, it is necessary to run a proper grouping algorithm to get the possible manufacturing cells. [12] shows the use of the Rank Order Clustering (ROC) algorithm [4] to analyze matrix (16); other grouping algorithms can also be adapted so that the nonbinary matrix is used for the cell formation. After the execution of the procedure, the result for this matrix is: cell 1, composed of machines M1, M2, M3 and M4, associated with family 2 of parts P2, P5 and P6; and cell 2, composed of machines M5, M6 and M7, associated with family 1 of parts P1, P3, P4 and P7.

The importance of utilizing the nonbinary matrix is that, after the grouping of machines is obtained, there is now the possibility of analyzing the machines that are more appropriate to process the part families; furthermore, it makes sense to eliminate the machines that process similar operations. This characteristic is not possible if the binary matrix is used.
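[12] applies King's Rank Order Clustering to this analysis. A minimal sketch of the classic binary ROC pass is shown below; applying it to a nonbinary matrix like (16) would first require binarizing the memberships, for example by thresholding, which is an assumption of this sketch and not a step prescribed by the paper:

```python
import numpy as np

def rank_order_clustering(B):
    """King's Rank Order Clustering on a 0/1 machine x part incidence
    matrix: iteratively sort rows, then columns, by their binary
    weights until neither order changes, gathering 1-entries into
    blocks (assumes the small example converges, as it does here)."""
    rows = np.arange(B.shape[0])
    cols = np.arange(B.shape[1])
    changed = True
    while changed:
        changed = False
        rw = B @ (2.0 ** np.arange(B.shape[1] - 1, -1, -1))
        r = np.argsort(-rw, kind="stable")        # rows by binary weight
        if not np.array_equal(r, np.arange(len(r))):
            B, rows, changed = B[r], rows[r], True
        cw = (2.0 ** np.arange(B.shape[0] - 1, -1, -1)) @ B
        c = np.argsort(-cw, kind="stable")        # columns by binary weight
        if not np.array_equal(c, np.arange(len(c))):
            B, cols, changed = B[:, c], cols[c], True
    return B, rows, cols

# Tiny illustrative incidence matrix: two hidden machine/part blocks.
B0 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)
Bs, row_order, col_order = rank_order_clustering(B0)
```

On this toy input the sorted matrix becomes block diagonal, exposing two candidate cells with their part families.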

6 - FUZZY BACKWARD REASONING FOR THE CLASSIFICATION OF NEW PARTS IN ESTABLISHED FAMILIES

After the families are obtained, new parts can be introduced. To classify these new parts is always a problem, and using fuzzy backward reasoning is a new way to solve it. Making use of the features' fuzzy weights, it is possible to infer which family (or families) is (are) best suited for the new part.

Let the system be described by:

1. F = {F1, F2, ..., Fn} - the space of part families;
2. C = {C1, C2, ..., Cm} - the important technological features of parts.

As an illustration, let the features be: C1 = rotational, C2 = prismatic, C3 = through hole, C4 = blind hole, C5 = slot, C6 = cone, C7 = thread. The existing parts available were divided and classified into families as shown in Figure 14. With families and features it is possible to obtain the fuzzy relation matrix R, shown in Figure 15, by mapping F into C as shown in (17):

R: F → C          (17)

In (17), R = [r_ij], with i = 1, ..., n and j = 1, ..., m. The expression for the purpose of inference is (18):

C = R ∘ F          (18)

Two illustrative examples will be shown.

PARTS FAMILIES (Figure 14 - Families established):

F1 - rotational, through hole;
F2 - rotational, through hole and thread;
F3 - rotational, cone and blind hole;
F4 - prismatic with slot;
F5 - prismatic with through hole.

Figure 15 - Fuzzy relation matrix between the families F1 to F5 and the technological features C1 to C7 (entries r_ij in [0, 1]; for example, r(F1, C1) = 0.9 relates family F1 to the rotational feature C1)
6.1 - CLASSIFICATION OF A ROTATIONAL PART

For the new rotational part in Figure 16 to be classified into the families of Figure 14, the following membership set over the features Ci is given:

x1 = 0.9/C1 + 0/C2 + 0.7/C3 + 0.7/C4 + 0/C5 + 0.7/C6 + 0/C7

Figure 16 - Rotational part to classify

The objective is to evaluate the terms ai in the expression below, which mean the membership between the new part and the families Fi:

y1 = a1/F1 + a2/F2 + a3/F3 + a4/F4 + a5/F5,  where y1 must satisfy x1 = R ∘ y1 (the backward form of (18)).

Using MAX-MIN composition [11], each feature Ck gives one equation,

x1(Ck) = max_{i=1..5} (r_ik ∧ a_i),   k = 1, ..., 7

which results in the seven max-min equations (19) to (25), one for each of C1 to C7, with the coefficients r_ik taken from the fuzzy relation matrix of Figure 15.

Each of the equations (19) to (25) is then analyzed term by term: every term (r_ik ∧ a_i) must not exceed the left-hand side x1(Ck), and at least one term must reach it. A term such as (0.9 ∧ a2) in an equation whose left-hand side is 0 forces a2 = 0, while a term such as (0.9 ∧ a1) in an equation whose left-hand side is 0.9 allows a1 = 0.9; zero coefficients constrain nothing.

The result of the seven-equation problem is the solution a1 = 0.9, a2 = 0, a3 = 0.7, a4 = 0 and a5 = 0. The conclusion is that the new part belongs to family 1.

6.2 - CLASSIFICATION OF A PRISMATIC PART

For the new prismatic part in Figure 17 to be classified into the families of Figure 14, the following membership set over the features Ci is given:

x2 = 0/C1 + 0.9/C2 + 0.8/C3 + 0.6/C4 + 0/C5 + 0/C6 + 0/C7
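A standard way to compute the greatest candidate solution of such max-min relational equations is via the Gödel implication; this is one common technique for fuzzy backward reasoning, sketched here on a small hypothetical relation (not the matrix of Figure 15):

```python
import numpy as np

def greatest_solution(R, x):
    """Greatest candidate solution of the max-min relational equation
    x = R o a (solving backward for the family memberships a).
    Uses the Goedel implication: r -> x equals 1 if r <= x, else x."""
    A = np.where(R <= x[:, None], 1.0, x[:, None])  # implication per entry
    a = A.min(axis=0)                    # a_i = min_k (r_ki -> x_k)
    # the equation is solvable iff this candidate reproduces x
    x_check = np.max(np.minimum(R, a[None, :]), axis=1)
    return a, bool(np.allclose(x_check, x))

# Hypothetical 3-feature x 2-family relation and observed feature vector.
R = np.array([[0.9, 0.2],
              [0.7, 0.1],
              [0.0, 0.8]])
x = np.array([0.9, 0.7, 0.0])
a, solvable = greatest_solution(R, x)
```

Here the greatest solution assigns full membership to the first family and zero to the second; when the check fails, no exact solution exists, which matches the paper's remark that in some cases a solution cannot be obtained.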

We want to obtain:

y2 = a1/F1 + a2/F2 + a3/F3 + a4/F4 + a5/F5,  where y2 must satisfy x2 = R ∘ y2.

Using MAX-MIN composition, this again results in seven max-min equations, (26) to (32), one for each of the features C1 to C7, with the same relation matrix R of Figure 15.

Figure 17 - Prismatic part to classify

In the same way as for the rotational part, we may conclude that a1 = 0, a2 = 0, a3 = 0, a4 = 0 and a5 ≥ 0.9. The conclusion is that the new prismatic part belongs to family 5.

After the examples, it is possible to conclude that the use of fuzzy backward reasoning makes it possible to classify a new part into an established family in a faster and easier way, although the solution for the best family is not always simple, and in some cases it is not possible to obtain a solution at all. For this reason more research is needed in this field.

7 - CONCLUSIONS

This paper is a synthesis of what can be done with fuzzy logic to deal with the problem of obtaining similarities, considering the uncertainty present in the manufacturing environment. The procedure groups techniques that can each cope with the problem of similarities in isolation, making use of the same data base and thus simplifying the interaction between modeling and classification procedures. With the procedure described here it is possible to provide a new contribution to Group Technology, since it is possible to retrieve information on process and geometry similarities together, which is not possible in most current tools. This constitutes a different proposition, an alternative to the current methods available.

The development of this model can also supply a solution to the problem of setup time reduction, an important aspect for part family formation: by profiting from the identified similarities, the preparation time of machines will be shorter, and in this way work is being done to save company resources. With this rationalization, the methodology may be transformed into software, which will be an interesting tool for the manufacture of small lots.

For the classification, backward reasoning is used. The paper shows the treatment of simple cases for the classification of rotational and prismatic parts into established families. The authors believe that the use of backward reasoning makes it possible to classify a new part into an established family in a faster and easier way, because a smaller number of questions needs to be answered, without the rigidity that is frequent in Classification and Coding Systems (CCS).

8 - REFERENCES

[1] Arieh, D.; Triantaphyllou, E. Quantifying data for group technology with weighted fuzzy features. Int. J. Prod. Res., 30, 1992, 1285-1299.
[2] Hyer, N.; Wemmerlov, U. Capabilities of Group Technology (Group Technology and Productivity). The Computer and Automated Systems Association of SME, Michigan, 1987, 3-12.

[3] Kaufmann, A. Introduction to the Theory of Fuzzy Subsets, Volume 1. Academic Press, Orlando, USA, 1975.
[4] King, J. R. Machine-component grouping in production flow analysis: an approach using a rank order clustering algorithm. International Journal of Production Research, 18, 1980, 213-232.
[5] Min, H.; Shin, D. Simultaneous Formation of Machine and Human Cells in Group Technology: a Multiple Objective Approach. Int. J. of Prod. Res., 31, 1993, 2307-2318.
[6] Montevechi, J. A. B. Tecnologia de Grupo Aplicada ao Projeto de Células de Fabricação. MSc dissertation, UFSC, Florianópolis, 1989.
[7] Montevechi, J. A. B. Formação de famílias de peças prismáticas utilizando lógica Fuzzy. Doctorate Qualifying Examination, EP-USP, São Paulo, 1994.
[8] Saaty, T. A Scaling Method for Priorities in Hierarchical Structures. Journal of Mathematical Psychology, 15, 1977, 234-281.
[9] Xu, H.; Wang, H. P. Part family formation for GT applications based on fuzzy mathematics. Int. J. Prod. Res., 27, 1989, 1637-1651.
[10] Zadeh, L. A. Fuzzy Sets. Information and Control, 8, 1965, 338-353.
[11] Zadeh, L. A. The Concept of a Linguistic Variable and its Application to Approximate Reasoning-III. Information Sciences, 9, 1975, 43-80.
[12] Zhang, C.; Wang, H. P. Concurrent Formation of Part Families and Machine Cells Based on the Fuzzy Set Theory. Journal of Manufacturing Systems, 11, 1992, 61-67.



Artificial Neural Networks Applied to Protection of Power Plant

Denis V. Coury*, B.Sc., M.Sc., Ph.D., MIEEE
David C. Jorge*, B.Sc., M.Sc., MIEEE

*Departamento de Engenharia Elétrica
Escola de Engenharia de São Carlos
Universidade de São Paulo
São Carlos - SP - Brazil

Abstract

The design of power plant protection is nowadays limited to expected situations; if unforeseen or incomplete input data occur, the protection system may not act properly [1]. This paper presents the theory of Artificial Neural Networks (ANNs) as an alternative computational concept to the conventional approach based on a programmed instruction sequence. The ANN can provide solutions to problems with unknown determining factors and can work with unexpected or incomplete data. This work shows the application of an ANN as a pattern classifier for distance relay operation in transmission lines. The scheme utilizes the digitized form of the three-phase voltage and current inputs. An increase in performance over ordinary distance relays is expected.

1 - Introduction

Distance relaying techniques have attracted considerable attention for the protection of transmission lines [2], [3], [4]. This principle measures the impedance at the fundamental frequency between the relay location and the fault point, and thus determines whether a fault is internal or external to a protection zone. Voltage and current data are used for these purposes, and they generally contain the fundamental frequency component added to harmonics and a DC component (noise). With digital technology being ever more widely adopted in power substations, distance relays have found some improvements, mainly related to efficient filtering methods (such as Fourier, Kalman, etc.); as a consequence, the trip/no trip decision has been improved, shorter decision times have been achieved compared to electromechanical/solid state relays, and the degree of accuracy in locating faults in the different zones has also improved.

Nowadays the studies of ANNs are growing rapidly. Their potential has brought power system researchers to look at them as a possibility to solve problems related to different subjects such as load forecasting, economic dispatch, fault detection and location and, more particularly, protection. To work with unforeseen or unknown data is a challenging task, and the implementation of a pattern recognizer for power plant protection diagnosis may provide great advances in the protection field. This paper presents the use of Artificial Neural Networks as a pattern classifier for distance relay operation.
ANNs have attracted this attention for many reasons [5]: a conventional protection design, limited to foreseen situations, may not act properly if unforeseen or incomplete input data occur [1], while an ANN-based pattern classifier can improve the performance of ordinary relays that use the digital principle.

2 - The Artificial Neural Network

The Artificial Neural Network (ANN) is inspired by biological nervous systems and was first introduced as early as 1960.

Its main characteristics are:

• ANN works with pattern recognition at large;
• ANN has a high degree of robustness and ability to learn;
• ANN is prepared to work with incomplete and unforeseen input data.

In other words, the ANN must have a mechanism for learning. Learning alters the weights associated with the various interconnections and thus leads to a modification in the strength of the interconnections. The neuron is the nervous cell and is represented in the ANN universe as a perceptron. Figure 1(a) shows a simple model of a neuron characterized by a number of inputs P1, P2, ..., Pn, the weights W1, W2, ..., Wn, the bias adjust b and an output a. The neuron uses the inputs, together with information on its current activation state, to determine the output a, given as in equation (1):

a = Σ_{k=1..n} W_k P_k + b                                    (1)

The interconnection of perceptrons can form a network composed of a single layer or of several layers, as seen in Figure 1(b).

Figure 1 - ANN diagrams: (A) perceptron representation; (B) ANN multi-layer scheme

The ANN models may be "trained" to work properly: a special algorithm adjusts the weights so that the output response to the input patterns is as close as possible to the respective desired responses. The desired response is a special input signal used to train the neuron.
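Equation (1) can be sketched directly; the numbers below are invented, just to exercise the formula:

```python
import numpy as np

def perceptron_output(P, W, b):
    """Single-neuron summation of equation (1): a = sum_k W_k*P_k + b."""
    return float(np.dot(W, P) + b)

# Hypothetical inputs, weights and bias.
P = np.array([1.0, 0.5, -1.0])
W = np.array([0.2, 0.4, 0.1])
a = perceptron_output(P, W, b=0.05)   # 0.2 + 0.2 - 0.1 + 0.05, i.e. about 0.35
```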

3 - Backpropagation Method

The backpropagation algorithm is central to much current work on learning in neural networks. It was invented independently several times: by Bryson and Ho (1969), Werbos (1974), Parker (1985) and Rumelhart, Hinton and Williams (1986); a closely related approach was proposed by Le Cun (1985). The algorithm gives a prescription for changing the weights in any feed-forward network to learn a training set of input-output pairs {P_r, a_r} [6]. The backpropagation method works well adjusting the weights (W_jn) which connect successive layers of multi-layer perceptrons, and trained backpropagation networks tend to give reasonable answers when presented with inputs that they have never seen. An elementary backpropagation neuron with R inputs is shown in Figure 2.

Figure 2 - Neuron with logsigmoid characteristic (inputs P[1] to P[R], weights W, bias adjust b, summation output n, transfer function F, output a = F(W·P + b))

Backpropagation networks often use the logistic sigmoid as the activation transfer function:

a = logsig(n, b) = 1 / (1 + e^−(n+b))                         (2)

The use of the bias adjust in the ANN is optional, but the results may be enhanced by it. The logistic sigmoid transfer function maps the neuron input from the interval (−∞, +∞) into the interval (0, +1).

In order to use the ANN properly, it is necessary to know that empirical methods are the only way to find satisfactory results. The network scheme has a direct influence on the ANN performance, and problems may also arise from the training: the sequence of the input training data, the initial weights used and the number of cases in the training data may all affect the results. Depending on these factors, the ANN may not converge, and it can be necessary to change the training parameters.

The use of ANNs in distance relays may result in a considerable advance for the correct diagnosis of operation. An ANN can be trained with data provided by a simulation of a faulted transmission line and "learn" the aspects related to that situation, and it can deal with unforeseen situations related to faults in the power plant. The ANN may solve the overreach and underreach problems which are very common in power plant protection design, and its use makes it possible to protect over 80% of the extension of the power system line.
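A minimal single-neuron sketch of the logsig transfer function of equation (2) and of one backpropagation-style weight update under squared error is given below; the multi-layer case chains the same delta rule backwards through the layers, and the toy pattern is invented:

```python
import numpy as np

def logsig(n):
    """Logistic sigmoid of equation (2): maps (-inf, +inf) into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-n))

def train_step(W, b, P, target, lr=0.5):
    """One gradient-descent step for a single logsig neuron with squared
    error, the elementary building block of backpropagation."""
    a = logsig(np.dot(W, P) + b)
    delta = (a - target) * a * (1.0 - a)   # error times sigmoid derivative
    return W - lr * delta * P, b - lr * delta

# Toy training: drive the neuron output toward the desired response 1.
W, b = np.zeros(2), 0.0
P, target = np.array([1.0, -1.0]), 1.0
for _ in range(200):
    W, b = train_step(W, b, P, target)
```

After the loop the output for the trained pattern is close to the desired response, illustrating how repeated weight adjustments bring the response toward the target.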
ANN can deal with unforeseen situations related to faults in the power plant.( " ■ * * ) (2) 1+e Backpropagation networks often use the logistic sigmoid as the activation transfer function. The Backpropagation method works very well adjusting the weights (Wjn) which are connected in successive layers of multi­layer perceptrons. ? y n F a P[Rf 1* bias adjust Input data Figure 2 ­ Neuron with logsigmoid characteristic where: n ­ n ­ summation output R ­ number ofinputs W ­ weights b ­ bias adjust F ­ transfer function a = F[w.P. Depending on some factors.+oo) into the .

The primary situation used for training. The training values used in the ANN scheme considered the changes of the fault location along the .+l). a simulation of the transmission line in a faulted condition is needed. among the algorithms for digital distance protection. etc. to use voltage and current waveforms taken from a busbar in order to solve the fault location problem in a power plant. Only one side ofinformation was used in the referred method (from busbar A). OUTPUT [trip/no trip] Figure 3 . The lOOKm transmission line used to train and test the proposed ANN is shown in Figure 4. 296 interval (0.ANN configuration used as a distance relay. The transfer function used for the perceptrons was the logistic sigmoid described in the earlier section. 5-The Power Plant Diagram Used In order to test the apphcability of the scheme proposed earlier.) and phase A to ground faults only. This scheme also uses the three phase values of current and voltage data. The logistic sigmoid equation (2) is applied to each element of the proposed ANN [7]· 4-Application of the Backpropagation Method for the Fault Location Problem It is common. The digitized output of voltage and current at the three phases are then used in real time to feed the ANN algorithm. Figure 3 shows the ANN diagram chosen to solve the fault location problem using the backpropagation method. This paper makes use of a digital simulation of faulted EHV transmission lines developed by Johns and Aggarwall [8]. The Discrete Fourier Transform was used to filter this input data and extract the fundamental components. considered the fault resistance as constant (as well as the other parameters such as source capacity.

Q> r>ry~\ current line switch * signal voltage I signal τ I Off-line routine ANN D/A Analog training Converter VJ input routine signals On-line weights hardware Surge trained process Filter off-line S/H Fourier logsig Digital Circuit A/D -*| Transform transfer Output (clock) Converter Filter function 0/1 Figure 5 . Figure 5 shows the schematic diagram for the hardware needed in an ANN implementation. including the microprocessor based neural relay.Transmission line used for the ANN studies. a protection of 80% of the line was . flexibility for untrained or unforeseen data is expected for this kind of scheme. 297 transmission line as the main variation of the input data. The initial weights as well as the initial bias used random values between 0-1. 6-The Training Procedure and Test Results of the Proposed ANN The "Neural Network Toolbox" from the software 'Matlab™" [7] was used to create the ANN diagram. 100 Km r- 0 Β fault point 0f 20GVA 5 G VA Rf=10n V=ll0°pu V=1|0°pu fault inception angle 90" Figure 4 . which are worked in an off-line mode are then stored in the microprocessor for on-line application. However. In the referred scheme. train it and obtain the weights as output. The scheme works in a sample frequency of 4kHz.Block diagram of the distance relay. The converged set of weights. 59 different faulted cases were used in different locations of the transmission line in order to train and test the proposed ANN.

Table 1 shows the results of the ANN model used as a distance relay for the configuration presented in Figure 4; the ANN answer is shown, compared to the expected one, for faults along the transmission line. Points next to the region where the trip/no-trip condition changes (80 km for the line used) had special treatment: a smaller spacing between the locations used for training was adopted there. It should be mentioned that the cases used for the tests are different from the ones used for the training.

The results presented in Table 1 show the efficiency of the proposed scheme. For all the cases, the ANN scheme correctly classifies the fault as being internal or external to the first zone of the relay: faults inside the first zone produce outputs essentially equal to 1 (e.g. 0.9998 and 0.9941), while faults beyond the 80 km reach produce outputs many orders of magnitude below 1 (down to about 10^-21).

Table 1 - Results for the ANN scheme (distance of the fault from point A in km, ANN answer and correct answer).

7 - Tests of the ANN for Unforeseen Data

In order to test the performance of the ANN scheme subjected to unknown data inputs, some changes were made to the power system parameters: the fault resistance, the power generation capability and the fault inception angle suffered small variations. Table 2 shows the results of the ANN scheme subjected to such variations.

Table 2 lists the changes of the trained parameters (fault inception angle set to 88° or 92°; fault resistance set to 0, 5, 8, 12 or 15 Ω; source at A set to 18 GVA; source at B set to 4 or 4.5 GVA) for fault distances between 70 km and 95 km from A, together with the ANN output and the correct output for each case.

Table 2 - Results of the ANN for unforeseen data.

It can be noted that for most cases the ANN scheme still gives correct results, confirming its capability as a pattern classifier: for faults inside the first zone (70 to 75 km) the output remained equal or close to 1 (the lowest being 0.9192), and for most faults beyond the zone boundary (85 to 95 km) the output was negligibly small (of the order of 10^-9 to 10^-21). The wrong diagnoses appeared in the cases of small fault resistance at fault distances just beyond the first-zone boundary where, as a consequence of the low resistance, the current of the faulted phase increased; the wrong diagnosis was given because these cases are similar to the situation of a fault occurring in the first zone of the relay trained earlier. It should be mentioned that such cases could be used in the training set in order to avoid this problem.

The results obtained in this scheme are very encouraging: the ANN scheme can operate correctly in the location of the fault point, and this tool opens a new dimension in relay philosophy, which should be widely investigated in order to solve some of the various problems related to the distance protection of transmission lines.
It is also necessary to point out some problems related to the ANN application. The initial network configuration is totally empirical and may not result in the best performance for the scheme. The choice of the training points can also be a significant problem. These are some of the points that can influence the speed of convergence of the weights and, consequently, the performance of the scheme.

8 - Conclusion

In this paper the use of an ANN as a pattern classifier working as a distance relay was investigated. The scheme can be extended by including some more variations of parameters in the training set, in order to avoid the misoperation seen in the paper for the case of low fault resistance.

References

[1] S. Weerasooriya, M. A. El-Sharkawi and R. J. Marks II, "Neural Networks and Their Application to Power Engineering", Control and Dynamic Systems, Vol. 41, pp. 359-451, 1991.
[2] H. Kanoh, M. Kaneta and K. Kanemaru, "Fault location for transmission lines using inference model neural network", Electrical Engineering in Japan, Vol. 111, 1991.
[3] K. S. Swarup and H. S. Chandrasekharaiah, "Fault Detection and Diagnosis of Power Systems Using Artificial Neural Networks", IEEE, 1991.
[4] M. Kale and S. A. Khaparde, "Application of Artificial Neural Network in Protective Relaying of Transmission Lines", IEEE, 1992.
[5] R. Aggarwal, Artificial Neural Networks for Power Systems, short-course notes, 1991.
[6] J. Hertz, A. Krogh and R. G. Palmer, Introduction to the Theory of Neural Computation, Addison-Wesley Publishing Co., 1991.
[7] H. Demuth and M. Beale, "Neural Network Toolbox - For Use with Matlab™", 1992.
[8] A. T. Johns and R. K. Aggarwal, "Digital Simulation of Faulted EHV Transmission Lines with Particular Reference to Very High Speed Protection", IEE Proceedings, Vol. 123, pp. 353-359, April 1976.

CONSEQUENCES OF CURRENT FAILURES FOR QUALITY ASSURANCE

Hans R. Kautz
Grosskraftwerk Mannheim AG, Germany

ABSTRACT

In August 1994, in the afternoon, in the course of an operational inspection tour the staff of the Bayer-Uerdingen power plant noticed a temperature rise in the condensing turbine #2 of the N230 power plant and moisture in the soundproof hood. Approximately half an hour later an accurate check by the plant management detected an increasing flow noise due to escaping steam. The immediate decision was to unload the turbine and to shut it down. During shutdown a steam explosion occurred, with a sudden massive steam release into the turbine hall of the Uerdingen plant. The cause was the rupture of a hot reheat line (Figure 1) directly upstream of the right turbine side.

Damage cause: there are indications on the failed tube that an inadequate heat treatment of the tube contributed to the failure.

Recommendations: the systematic detailed analysis of the hot reheat line allows recommendations for pipe line systems operated for long periods of time. If a certain degree of damage is detected, the measures range from monitoring of continued operation, through load reduction by decreasing the temperature and/or pressure with repeated inspection, to replacement of the component during the next overhaul. Beyond about 150,000 operating hours, only component metallography may give early, reliable indications as to creep damage initiation; when replacing individual components, care shall be taken that even straight pipes may be highly loaded and therefore damaged.

Inspection and scope: safety requirements and the necessity of a high availability, especially in the case of industrial plants, may require an extended scope of inspection. As a consequence, the scope of inspection was expanded with the objective of a thorough assessment of the plant condition. Within the inspections an extraordinary amount of creep damage was detected, and the knowledge base was substantially improved.

DESCRIPTION OF EVENTS

In August 1994, in the afternoon, in the course of an operational inspection tour the staff noticed a temperature rise in the condensing turbine 2 of the N230 power plant and moisture in the soundproof hood. Approximately half an hour later an accurate check by the plant management detected an increasing flow noise due to escaping steam. The immediate decision was to unload the turbine and to shut it down. During shutdown a steam explosion occurred. The cause was the rupture of a hot reheat line directly upstream of the right turbine side.

Failed Component Data

The failed pipe line section is a vertical pipe between girth weld 51 (connection upright pipe bend / failed component) and girth weld 52 (connection failed component / transition cone to trip valve casing) of the right reheat line. The two girth welds 51 and 52 were rehabilitated in 1990.

Dimensions:
length of failed pipe section 835 mm
inner diameter 150 mm
minimum wall thickness 13 mm

Material: 14 MoV 6 3 (a molybdenum-vanadium steel)

Operating conditions:
medium steam
operating temperature approx. 525 °C
operating pressure approx. 104 bar
operating hours approx. 217,000

In order to clarify the failure cause, extensive surface microstructure examinations were conducted and non-destructive testing of the failed component was performed, including other relevant pipe system components. The examination procedure was established by a Working Group three days later. The failed pipe section was cut out and submitted for further examinations. All preliminary examination results so far indicate that the failed pipe is a unique event with respect to damage in all of the pipe system.

Failure Appearance

The pipe ruptured at the 6 o'clock position (towards the plant control room). The crack orientation was in the pipe axis and the pipe body unfolded. In the lower section of the girth weld the crack branched; beyond the branching the crack ran on above and below the girth weld. The macroscopic appearance of the rupture surfaces and the crack mouths of the two cracks is an indication of the ductile nature of the rupture in this area. The crack also ran towards the upper girth weld but ended short of the weld; as opposed to the lower girth weld, this crack did not branch. The macroscopic result of the upper girth weld examination confirms the diagnosis of ductile crack rupture.

A part of the outer surface of the failed component displays grooves oriented in the direction of the component circumference; it is obvious that the component was ground in the girth weld areas in the course of non-destructive examinations. The inner pipe surface has a dark-grey oxide film (Figure 2); in the crack area this film is absent. By appearance it is a magnetite layer which spalled in the section of maximum deformation during unfolding of the pipe.

EXAMINATION PROGRAM

The examination program established by the Working Group for the first phase of the failure examination included an as-is condition report and a non-destructive examination of the failed component plus other relevant components of the pipe system.

Non-Destructive Examinations
Failed component: material determination, measurement of circumference/ovality, UT wall thickness measurement, UT volumetric examination, radiographic examination, surface microstructure examination.
Cone between failed component and trip valve casing: same examinations as for the failed component.
Reheat line right side, straight pipe upstream of trip valve: material determination, measurement of circumference, UT wall thickness measurement, surface microstructure examination.
Reheat line left side, bends #1 and #2 upstream of failed component: material determination, UT wall thickness measurement, surface microstructure examination.

PERFORMANCE OF EXAMINATIONS

Material Determination
With random checks the metal alloy was determined by way of X-ray fluorescence analysis.

Measurement of Circumference
The circumference was measured with a flexible metal gauge.

Surface Microstructural Examination - Description of Damage Classes
Practical experience of field metallography with replicas has indicated that the damage classes as published by VdTÜV and VGB are not appropriate; in particular, damage classes 2 and 3 do not allow the required differentiation. In accordance with long-standing practical experience, the damage classes were newly defined for the present guideline as below (assessment class: structural and damage condition):

0 - as received, without thermal service load
1 - creep exposed, without cavities
2a - advanced creep exposure, isolated cavities
2b - more advanced creep exposure, numerous cavities without preferred orientation
3a - creep damage, numerous oriented cavities
3b - advanced creep damage, chains of cavities and/or grain boundary separations
4 - advanced creep damage, microcracks
5 - large creep damage, macrocracks

The surface microstructure was examined by way of replicas of ground and polished areas.
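For illustration, the newly defined assessment classes can be captured in a small lookup, e.g. for recording and comparing replica findings (a sketch; only the class definitions are taken from the guideline list above):

```python
# Assessment classes (structure and damage condition) as newly defined
# for the guideline, in order of increasing severity.
DAMAGE_CLASSES = {
    "0":  "as received, without thermal service load",
    "1":  "creep exposed, without cavities",
    "2a": "advanced creep exposure, isolated cavities",
    "2b": "more advanced creep exposure, numerous cavities "
          "without preferred orientation",
    "3a": "creep damage, numerous oriented cavities",
    "3b": "advanced creep damage, chains of cavities and/or "
          "grain boundary separations",
    "4":  "advanced creep damage, microcracks",
    "5":  "large creep damage, macrocracks",
}
SEVERITY = list(DAMAGE_CLASSES)  # dict preserves insertion (severity) order

def worst_class(findings):
    """Return the most severe assessment class among several replica findings."""
    return max(findings, key=SEVERITY.index)
```

For the failed component discussed below, the replica findings ranged from class 2b up to class 3b, so `worst_class` would report "3b" as the governing condition.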

The replica examination parameters were:
Polishing technique: electrolytical
Etching agent: 3 % alcoholic HNO3
Replica: transcopy, gold-doped
Evaluation: light microscope

UT Wall Thickness Measurement
The wall thickness was measured with an analogue UT probe.

UT Volumetric Examination
The volumetric examination was performed with an analogue UT probe.

Radiographic Examination
Radiography was performed by way of an X-ray tube inside the pipe.

Ovality Measurement
The ovality was measured with a reference caliper over two pipe cross axes staggered by 90°.

Hardness Measurement
The hardness was measured with an Equotip™ hardness meter.

RESULTS

Examinations of the Failed Component

Measurement of Circumference
The results of the circumference measurements are as follows:
- The rupture is macroscopically undeformed; the pipe is not expanded.
- The pipe circumference increases from the component lower edge to the upper edge.
- By all appearances the pipe was chamfered.

Material Determination
The random check of the material confirmed the steel to be 14 MoV 6 3 (a molybdenum-vanadium steel). A wet chemical analysis of the material was performed within the destructive examinations of the failed component.

Surface Microstructural Examination
A replica of the failed component surface was made. There were 28 replica locations within a given screen: starting at the lower edge of the cut pipe section, with a spacing of 50 mm, up to the component upper edge; 7 levels in the direction of the component axis, including the girth welds; staggered by 90° over the pipe cross section across the upper and lower girth welds. The replicas were evaluated according to VGB data sheet TW 507/TW 507e, 1992 issue.
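As a quick consistency check of the replica screen just described, enumerating 7 axial levels at 50 mm spacing and four circumferential positions staggered by 90° indeed yields the stated 28 locations:

```python
# Enumerate the replica grid: 7 levels along the component axis (50 mm
# spacing from the lower edge) and 4 circumferential positions at 90 deg.
levels_mm = [level * 50 for level in range(7)]
angles_deg = [0, 90, 180, 270]
replica_locations = [(z, a) for z in levels_mm for a in angles_deg]
count = len(replica_locations)  # 7 * 4 = 28
```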

- In all cases the weld metal of the two girth welds displays a microstructure according to assessment class 1.
- The microstructure in the areas between 3 o'clock and 9 o'clock of the pipe circumference is of assessment class 2b.
- The areas between 6 o'clock and 12 o'clock of the pipe circumference have a microstructure of assessment class 3a.
- The strongest microstructural damage was found in the 6 o'clock position of the pipe circumference, 220 mm away from the lower girth weld: assessment class 3b.

UT Wall Thickness Measurement
The wall thickness was measured at the replica locations and ranges between 11.3 mm and 14.6 mm. The following observations are worth noting:
- The wall thickness is reduced within the crack area at 6 o'clock along all of the pipe axis; this is also the area with the largest crack opening.
- The maximum thickness is found in the 3 o'clock and 9 o'clock positions.
The cause of these wall thickness variations may be the facing of an oval pipe. There may be a relation between the degree of microstructural damage and the wall thickness variation.

Hardness Test
The hardness test was performed at the locations of the replicas; in addition, the hardness was measured on metallographic specimens within the destructive examination of the failed component. The following observations are of interest:
- In the crack area in the 6 o'clock position the material hardness was reduced.
- The hardness values range between 133 and 153 HB, with a minimum at 110 HB.
The total hardness level is at the lower limit of the hardness range to be anticipated.

UT Volumetric and Radiographic Examinations
The UT volumetric and the radiographic examinations did not detect any indications in the pipe wall. Discrete indications on the inner pipe surface are caused by the oxide film. After removal of the internal and external deposits within the destructive examination of the failed component, a surface crack test was conducted.

FAILURE CAUSES

Prior to and during the war, 'economy steels' were applied in German power plant construction. At the beginning of the fifties, when it became known that in Great Britain an economy steel alloyed with only vanadium and molybdenum was being successfully applied, extensive creep strength examinations were performed. The material revealed excellent high-temperature strength despite the low alloy content compared to P 11 and P 22. Experiments as to the best heat treatment for pipes and forgings were started, as was the development of welding fillers. However, when around 1960 the first contracts for pipe lines of this material were awarded, operational experience with this material was lacking, and the difficulties of heat treating the pipes

became so enormous that a (public) meeting of the VGB Materials Committee was called in 1963. Despite the knowledge and findings gathered at this meeting, it took almost another ten years until unanimity was reached on the adequate heat treatment of the tubes. The greatest difficulty was to harmonize the theoretical knowledge of the time and operational limitations. It took a long time until it was clear how sensitive the material was with respect to cold forming or manipulations such as heat treatment. Heat treatment data for forgings differed, and pipe manufacturers also established differing parameters over the years. It definitely happened in those days that only stress relief annealing was performed instead of a full heat treatment. There are strong indications on the failed tube that an inadequate heat treatment of the tube contributed to the failure.

RECOMMENDATIONS

The systematic detailed analysis of the hot reheat line allows the following recommendations for pipe line systems operated for long periods of time. If a certain degree of damage is detected, the following measures are recommended, considering the intended future mode of operation:
- monitoring of continued operation and replacement of the component during the next overhaul of the plant with prolonged operating time;
- load reduction by decreasing the temperature and/or pressure, and repeated inspection or replacement during the next overhaul.
For operating times beyond 150,000 h, only component metallography may give early, reliable indications as to creep damage initiation. When replacing individual components, care shall be taken that even straight pipes may be highly loaded and therefore damaged.

INSPECTION AND SCOPE

Safety requirements and the necessity of a high availability, especially in the case of industrial plants, may require an extended scope of inspection. As a consequence, the scope of inspection was expanded with the objective of a thorough assessment of the plant condition. Within the inspections an extraordinary amount of creep damage was detected. The knowledge base was substantially improved.

Figure 1: Ruptured Hot Reheat Pipe

Figure 2: Internal Oxide Film of Ruptured Reheat Pipe


IMPROVEMENTS ON WELDING TECHNOLOGY BY AN EXPERT SYSTEM APPLICATION (*)

Milton P. Ramos, Fabiano H. Budel, Luiz Augusto D. Silveira, Lucio M. Correa
Instituto de Tecnologia do Paraná - TECPAR, PO Box 357, 81310-020 Curitiba, Paraná, Brazil (tecpar@lepus.celepar.

Aluído Iwasse
PETROBRAS/REPAR, PO Box 009, 83700-970 Araucária, Paraná, Brazil

Julio Cezar Nievola
Centro Federal de Educação Tecnológica do Paraná - CEFET-PR (nievola@dainf.

ABSTRACT

The Welding Expert System SES was developed with the main purpose of helping people involved in welding procedure qualification and in the selection of qualified procedures. Few data entries are required to generate a new welding procedure or to select a qualified procedure from its database; these data entries usually are the base metal specification and thickness. The generation or selection of procedures through the SES is made in accordance with the ASME Code, Section IX, and several project codes applied to process plants. Besides the essential standard variables, the SES obtains the procedure considering the environment constraints imposed on the welding joint. Hydrogen damage, stress corrosion cracking (SCC) and weld decay are some process plant environment constraints analyzed by the system. The development of this Welding Expert System is justified by the requests of welding experts and the need to meet the quality and productivity improvement goals required by the industrial sector in Brazil.

Key words: Artificial Intelligence, Expert System, Welding Technology, Qualification, Soldagem.

(*) Project supported by CNPq/RHAE no. 610094/93-9 and FINEP no. 56.94.0274.00.

INTRODUCTION

Technological developments require great efforts and financial resources, and these resources are scarce, mainly in developing countries. Large amounts of these resources are spent in training and in the development of expertise. Expert Systems (ES) technology offers an important alternative to disseminate this expertise (1). Thus, the development of an ES applied to welding technology becomes an important tool in the dissemination of knowledge when human resources are scarce. The use of an ES tends to reduce costs, to standardize procedures and to facilitate information storage and retrieval, factors that appear in any quality management system. This alternative also supports the major emphasis Brazil has placed on increasing quality, especially in industrial production.

In this context, and supported by the strategic importance of ES development, the Welding Expert System (Sistema Especialista em Soldagem - SES) was structured; this structure is certainly one that well represents the knowledge concerning welding technology. The SES is capable of generating Welding Procedure Specifications (WPS) and managing a qualified procedure database. The initial goal of the SES is to deal with welding procedures for carbon steel, alloy steel and stainless steel base metals. Shielded metal arc welding (SMAW) and gas tungsten arc welding (GTAW) are the welding processes covered in this stage. This arrangement fulfills almost all the welding procedure demand for the assembly and maintenance of process plants.

The system generates WPS in accordance with the ASME Code, Section IX, and the PETROBRAS N-133 standard (2,3). The requirements of project codes such as ASME VIII, ASME I, ANSI B31.1 and ANSI B31.3 (4,5,6,7) for the welded joint are also met. Filler metal specifications in procedures generated by the SES are in accordance with AWS/ASME Section II (8).

THE KNOWLEDGE DOMAIN

The generation or selection of a welding procedure requires knowledge from several areas of expertise: welding processes, the metallurgical properties and features of both filler and base metals, and welding metallurgy are some of these areas. Welding technology comprises several areas of knowledge, and mastering it requires a great number of experts in these areas. The procedures should be qualified according to the appropriate standards, since the most important welding parameters are included there. These standards and codes are concerned chiefly with the mechanical performance of the welded joint. However, the metallurgical performance is also important information for preparing procedures fit for usage, considering the aggressive environment to which the welded joint will be exposed. Certainly the welded joint is the most critical

region in equipment working in an aggressive environment.

SYSTEM DESCRIPTION

Artificial intelligence and expert system technologies have been used in a large number of applications related to industrial problems. Both try to simulate or emulate intelligent human behavior in terms of a computational process; specifically, knowledge-based systems and expert systems try to reproduce the performance of a highly skilled professional in a specific problem-solving task. The main advantages of these technologies are that they preserve and distribute the human expert's knowledge. The nature of welding procedure qualification does not present a predetermined solution method, so it fits an expert system application perfectly: the solution of this task needs not only the information contained in codes and standards, but also the experience of a welding expert for the correct manipulation of this information and the search for a better solution.

Beside the qualification codes, the system holds information that makes the SES capable of adjusting the welding procedure parameters according to the environment and its corrosion processes. This knowledge includes corrosion processes such as hydrogen damage and stress corrosion cracking (SCC) in carbon steel welding, weld decay in austenitic and ferritic stainless steel, knife-line corrosion in stabilized austenitic stainless steel, and the usual corrosion processes for other standard base metals (10,11). For example, for a carbon steel subject to hydrogen cracking by the welding process, the system will change parameters like the preheating and interpass temperatures to avoid cold cracks. To evaluate any parameter change, a flexible inference engine and an enlarged knowledge base are necessary.

Although it is possible to obtain a procedure from the SES with few data entries (usually the base metal specification and its thickness), the user may, at his convenience, modify the procedure parameters that are being prepared by the system; thus the procedure may be totally developed by the SES or developed interactively with the user. This configuration makes the system more dynamic, since each procedure parameter can be evaluated and changed by the user. The SES warns the user about the feasibility of the changes and, if the user keeps his new option, points out any further parameter changes required to reach the desired welding properties and qualification. If the user changes a basic filler metal selected by the system to weld a carbon steel subject to hydrogen cracking, the system will display a cold-crack risk warning. The knowledge is completed with quality control techniques at the execution level: filler metal storage and handling, joint preparation, welding and heat treatment techniques are supplied by the system when the welding procedure specification (WPS) is issued. This arrangement and knowledge base also make the SES a tutor, capable of transferring its expert knowledge to the user, more than just preparing a welding procedure.

The SES structure is shown in Figure 1. It is composed of the following modules:

Database: The database module is the manager of the WPS/PQR base. It allows the user to see and to print these documents, and to delete them if the user has this permission. As the stored documents belong to the company that generated them, it is possible to eliminate them from the WPS/PQR database only by using passwords.

Qualification: This module contains the inference engine, which tries to make new WPS/PQR documents. The intelligent search seeks a qualified WPS stored in the database that satisfies the conditions of the new welding job requested by the user. If such a qualified WPS does not exist, the system generates a new WPS, containing all the information concerning how to weld the test coupon, to be qualified in the laboratory. See "Procedure Selection and Generation" in this paper.

Knowledge base: The representative AI method used to organize the domain knowledge was a "production system" with a forward-chaining method of inference: the knowledge base consists of rules, called production rules, and of facts about filler metals and base metals (extracted from welding codes and standards), comprehended in various files. This choice was made because production systems offer good features in terms of modularity and uniformity. Forward chaining is applied because the WPS generation starts with little available information and tries to draw a conclusion appropriate for the new WPS. The rules and facts of the knowledge base are used with this objective, as well as for the intelligent search in the WPS/PQR database and for the explanation facilities; the explanation facilities explain how a given solution was obtained, to assure the reliability of the conclusions shown by the system. The knowledge module of the SES gives an overview of the knowledge base content, and its main objective is to disseminate the knowledge acquired during the system's development. A typical production rule is:

IF standard is ASME IX
AND standard is PETROBRAS N-133
AND project standard is ASME VIII
AND P-number is 1
AND thickness > 20 mm AND thickness < 30 mm
AND carbon-equivalent > 0.45 AND carbon-equivalent < 0.47
THEN pre-heating temperature = 100 °C AND interpass temperature = 100 °C (minimum).
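The production rule quoted above and the forward chaining that fires it can be sketched as follows (a minimal illustration in Python; the SES itself was implemented in Prolog, and the fact names, the rule encoding and the use of the IIW carbon-equivalent formula are assumptions made for this sketch):

```python
def carbon_equivalent_iiw(c, mn, cr=0.0, mo=0.0, v=0.0, ni=0.0, cu=0.0):
    # IIW carbon equivalent (wt-%): CE = C + Mn/6 + (Cr+Mo+V)/5 + (Ni+Cu)/15.
    # Higher CE means higher cold-crack susceptibility, hence preheating.
    return c + mn / 6.0 + (cr + mo + v) / 5.0 + (ni + cu) / 15.0

def preheat_rule(facts):
    """The production rule quoted in the text: ASME IX + PETROBRAS N-133,
    project standard ASME VIII, P-number 1, 20-30 mm, 0.45 < CE < 0.47
    => preheat 100 degC and interpass 100 degC (minimum)."""
    if ("ASME IX" in facts.get("standards", ())
            and "PETROBRAS N-133" in facts.get("standards", ())
            and facts.get("project_standard") == "ASME VIII"
            and facts.get("p_number") == 1
            and 20 < facts.get("thickness_mm", 0) < 30
            and 0.45 < facts.get("carbon_equivalent", 0) < 0.47):
        return {"preheat_degC": 100, "interpass_min_degC": 100}
    return {}

def forward_chain(facts, rules):
    """Fire every rule repeatedly, adding conclusions to the fact base,
    until a full pass produces no new fact (forward chaining)."""
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for key, value in rule(facts).items():
                if facts.get(key) != value:
                    facts[key] = value
                    changed = True
    return facts

# Example: a 25 mm thick P-number 1 carbon steel with 0.22 % C and 1.45 % Mn.
facts = {"standards": ("ASME IX", "PETROBRAS N-133"),
         "project_standard": "ASME VIII", "p_number": 1, "thickness_mm": 25}
facts["carbon_equivalent"] = carbon_equivalent_iiw(c=0.22, mn=1.45)
result = forward_chain(facts, [preheat_rule])
```

With these inputs the rule fires once, adds the preheat and interpass conclusions to the fact base, and a second pass produces no new facts, so chaining stops.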

User interface: This module provides a user-friendly interface, through which it is possible to consult the WPS/PQR database, to obtain information about base and filler metals, and to generate a new WPS/PQR.

Expert interface: A password-protected interface where the expert can change the parameters used by the rules in order to adapt the knowledge stored in the system, providing data and tools for the evolution of the knowledge base.

Figure 1: SES structure (user and expert interfaces connected to the qualification module with the inference engine, which draws on the database and the knowledge base).

The SES was developed in two years by a team of three knowledge engineers and two welding experts. The tool used in the development was a PROLOG interpreter/compiler. It runs on IBM-PC compatible machines under the MS-Windows environment.

PROCEDURE SELECTION AND GENERATION

A qualified Welding Procedure Specification (WPS) is intended to guarantee that the required mechanical properties will be reached when welding a joint. In the same way, the qualification of welders and operators is intended to assure that personnel are able to weld a specific joint properly. Project, construction and assembly standards and codes present mandatory requirements for welding procedure qualification and welder performance qualification. Procedure qualification and

Welding procedure qualification and workmanship skill certification are basic to a Quality Assurance System in any production activity. Welding procedures are qualified according to these standards: a procedure is qualified when a test specimen, welded according to the parameters pre-established in the procedure, provides the required properties. These qualified procedure parameters determine the welding procedure application fields through the essential variable ranges established in the codes. Thus, a procedure qualified for a carbon steel base metal, whose P-Number is 1, may always be used to weld any P-Number 1 base metal. This scheme, established by the welding qualification codes, results in a large application field for a PQR. Even if only a few PQRs are required to weld a variety of joints, a careful and arduous selection from a procedure database is required.

Considering these aspects of the codes, the qualification standards and the welding processes, the SES was designed to minimize the routine work of procedure development by making an intelligent WPS selection from its qualified procedure database, or the Qualification Module becomes active when a new WPS is required. Thus, the SES structure was developed to be able to qualify a WPS, or to make an intelligent WPS selection, with few data entries. The WPS database stores the qualified WPS and PQR, and the search and selection are made according to the PQR variables. The WPS qualification and search modules are detailed in Figure 2.

According to this flow chart, the base metal specification is required to retrieve a WPS; base metal specification and thickness are usually enough. More data entries may be requested by the system, such as the environmental conditions to which the welded joint will be subjected and other physical constraints. These data entries are presented by the system as options. As may be inferred from the flow chart, all the other essential variables established by the welding qualification codes, like F-Number, thickness and pre-heating and treatment temperatures (2), as well as the metallurgical and mechanical constraints, have to be considered. After this search, all the available qualified WPS are pointed out for user acceptance.

Generation of a new WPS starts with the filler metal selection. The system points out the most suitable filler metal, and others with which it is possible to weld, from the AWS filler metals tabulated in the system database. Another filler metal may be chosen by the user; in this case, the system points out the special conditions for applying a filler metal other than the one it indicated, which are given through warnings.

However, if it is necessary to modify any variable of the WPS established by the system, to adapt it to a specific usage, all the procedure variables may be modified by the user in the Updating Variables Module. For each variable updated, each parameter is reviewed by the system, which will provide warnings when the change could result in a poor WPS. When a WPS is generated for its intended application, the tests and the results of the analyses are input into the system through the Results Data Entry Module. Quantitative results are analyzed and approved by the system, while qualitative results must be approved by the welding inspector. At last, all the documents required to qualify the welding are emitted: WPS, PQR, Test Specimen Preparation and Weld Instructions.
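The selection step, matching a joint against the essential-variable ranges of the qualified procedures, can be sketched as follows. This is a minimal illustration only: the real SES was implemented in PROLOG, and the WPS records, P-Numbers and thickness ranges below are invented for the example.

```python
# Illustrative qualified-WPS records: each carries the qualified base metal
# P-Number and the thickness range covered by the qualification (invented data).
QUALIFIED_WPS = [
    {"id": "WPS-001", "p_number": 1, "thickness_mm": (5.0, 38.0)},
    {"id": "WPS-002", "p_number": 1, "thickness_mm": (38.0, 100.0)},
    {"id": "WPS-003", "p_number": 8, "thickness_mm": (3.0, 20.0)},
]

def select_wps(p_number, thickness_mm, database=QUALIFIED_WPS):
    """Return the ids of all qualified WPS whose essential-variable
    ranges cover the joint (base metal P-Number and thickness)."""
    return [
        wps["id"]
        for wps in database
        if wps["p_number"] == p_number
        and wps["thickness_mm"][0] <= thickness_mm <= wps["thickness_mm"][1]
    ]
```

All matches are returned, mirroring the system's behaviour of pointing out every available qualified WPS for user acceptance.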

Figure 2: SES Flow Chart (from BEGIN to END, the qualification steps draw on the qualification database and the knowledge base: base metal rules, metallurgical analysis rules and other rules).
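The flow chart of Figure 2 can be outlined as a simple driver: search the qualified WPS database first, and activate the Qualification Module only when no existing procedure fits. This Python outline is illustrative only (the original implementation was in PROLOG), and every function, field and value in it is hypothetical.

```python
# Illustrative outline of the Figure 2 flow. All records are placeholders.

def search_qualified_wps(joint, database):
    """WPS search: match the base metal P-Number and thickness range."""
    return [w for w in database
            if w["p_number"] == joint["p_number"]
            and w["t_min"] <= joint["thickness_mm"] <= w["t_max"]]

def qualification_module(joint):
    """Qualification Module stand-in: propose a new WPS, to be proven
    later by a PQR test specimen (placeholder qualification range)."""
    return {"id": "WPS-NEW", "p_number": joint["p_number"],
            "t_min": joint["thickness_mm"] * 0.5,
            "t_max": joint["thickness_mm"] * 2.0}

def process_joint(joint, database):
    """Driver mirroring Figure 2: select an existing WPS if one fits,
    otherwise fall back to qualifying a new one."""
    found = search_qualified_wps(joint, database)
    if found:
        return ("selected", found[0]["id"])
    return ("qualified", qualification_module(joint)["id"])
```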

CONCLUSION

Assembly and maintenance welding plays an important role in the reliability and safety of process plants. SES was developed with the main purpose of improving the work of the welding experts and of the personnel in charge of equipment integrity, and of disseminating welding knowledge and technology. The SES knowledge base, the inference engine and the database design allow the system to be improved beyond its initial scope, as well as the use of different base metals, dissimilar welding and process combinations, making SES a modular system capable of being enlarged to attend to specific necessities of the user. Reaching these aims, SES is surely contributing to improve that reliability and safety.

ACKNOWLEDGMENTS

Several people and institutions gave important contributions to the system development. We want to express our gratitude to TECPAR and PETROBRÁS for their support.

REFERENCES

1. Dickinson, D.W., Barborak, D.M., and Madigan, R.B. PC-based expert systems and their applications to welding. Welding Journal 70(1): 29-s to 38-s, 1991.
2. ASME Boiler and Pressure Vessel Code, Section I, Power Boilers. American Society of Mechanical Engineers, New York, NY, 1992 Edition.
3. PETROBRÁS N-133, Soldagem. Petróleo Brasileiro S.A., Rio de Janeiro, Brazil, January 1995.
4. ASME Boiler and Pressure Vessel Code, Section IX, Qualification Standard for Welding and Brazing Procedures, Welders, Brazers, and Welding and Brazing Operators. American Society of Mechanical Engineers, New York, NY, 1992 Edition.
5. ASME Code for Pressure Piping B31.3, Chemical Plant and Petroleum Refinery Piping. American Society of Mechanical Engineers, New York, NY, 1993 Edition.
6. ASME Boiler and Pressure Vessel Code, Section VIII, Divisions 1 and 2, Pressure Vessels. American Society of Mechanical Engineers, New York, NY, 1992 Edition.

7. ASME Code for Pressure Piping B31.1, Power Piping. American Society of Mechanical Engineers, New York, NY, 1993 Edition.
8. ASME Boiler and Pressure Vessel Code, Section II, Materials Specifications for Welding Rods, Electrodes and Filler Metals. American Society of Mechanical Engineers, New York, NY, 1992 Edition.
9. Metals Handbook, 9th Edition, Volume 6, Welding, Brazing and Soldering. American Society for Metals, 1983.
10. Metals Handbook, 9th Edition, Volume 13, Corrosion. American Society for Metals, 1987.
11. Kou, S. Welding Metallurgy: 411. New York: Wiley, 1987.
12. Schalkoff, R. Artificial Intelligence: An Engineering Approach: 646. Singapore: McGraw-Hill, 1990.
13. Crowe, R., and Vassiliadis, S. Artificial intelligence: Starting to realize its practical promise. Chemical Engineering Progress, January 1995, p. 29.
14. Feigenbaum, E., Friedland, P., Johnson, B., Nii, H.P., Schorr, H., Shrobe, H., and Engelmore, R. Knowledge-based systems research and applications in Japan. AI Magazine, vol. 15, no. 2, 1994.
15. Artificial Intelligence. New York, 1992.


Integrated and Intelligent CAE Systems for Nuclear Power Plants

T. Sato, Nuclear Plant Design and Engineering Dept.
H. Hamano, Nuclear Engineering Information System Dept.
N. Futami and T. Narikawa, Research and Development Center
TOSHIBA CORPORATION, Yokohama, Japan

ABSTRACT

This paper presents an overview of integrated and intelligent CAE systems for nuclear power plants. We have integrated existing two-dimensional (2D) CAD systems, a three-dimensional (3D) CAD system and a relational database system which stores engineering information such as design conditions, maintenance histories and inherent properties. We have also developed an automated routing system and a design check system. These systems are the main parts of the plant engineering framework, and are utilized in the practical design. This paper describes the work that TOSHIBA has been promoting in order to improve the user interface in the integrated environment and to enrich intelligent applications specialized for nuclear engineering.

1. INTRODUCTION

A nuclear power plant is composed of a large number of equipments, pipings and so on. It takes 8 or 9 years to complete the design and engineering from the beginning to commercial operation. The design and engineering of a nuclear power plant covers various technical fields, and the information which is created in the many fields is exchanged and utilized in parallel. TOSHIBA has integrated CAE systems to utilize this reliable information and to make decisions efficiently. We have integrated two-dimensional CAD systems, three-dimensional CAD systems and a nuclear power plant database system. As a design automation system, we have developed an automated design check system. As Computer Aided Engineering (CAE) is applied for several plants, huge and diverse information has been accumulated on the Data Base Management System (DBMS). In this term, careful engineering is required to conform to strict

design standards regarding safety, reliability, minimization of radiation exposure, etc., and the design verification plays a very important role. Furthermore, operating plants are to be kept on maintaining and improving for more than 30 years, so it is very important to manage not only the design data but also historical data such as maintenance and replacement records. Moreover, reduction of the engineering term is indispensable to cut down plant costs and the construction period. TOSHIBA is thus promoting the improvement of design quality and engineering efficiency.

2. CONCEPTS OF SYSTEM DEVELOPMENT

TOSHIBA has been developing CAE systems according to the following three concepts, on the basis of its rich experience of plant engineering, maintenance and know-how for computer applications:

(1) Information processing is automated to improve the reliability and efficiency of engineering.
(2) Various local systems are integrated to improve efficient information usage; TOSHIBA has promoted improvement of the reliability and efficiency of common information usage in many fields in parallel.
(3) A comfortable user environment is supplied by a visualized user interface, on the basis of recent progress in computer graphics technologies.

Depending on these concepts, TOSHIBA has constructed and applied distributed processing systems of engineering workstations and network systems around the integrated DBMS for large-scale plant engineering.

3.1 Plant Engineering

3.1.1 Overview

Figure 1 illustrates a typical engineering process in a nuclear power plant. Plant engineering of this type consists of various processes: project management, design, construction, plant operation, maintenance and so on, aiming at reliability and reduction of radiation exposure. The main features of nuclear power plant engineering are as follows:

(1) A plant spans more than eight years from the initial stage to full commercial operation, and a nuclear power plant operates for more than 30 years.
(2) There are many kinds of components, including pipings, mechanical equipments,