Introduction
Background
Difference between ANN & BNN
ANN architecture
Types of ANN
How it works
Principle & theory of ANN
Advantage & Disadvantages
Application of ANN in textile
Conclusion
Introduction: The Indian textile industry is one of the leading textile industries in the world.
Being the second largest employment generator after agriculture, it earns about 27% of the
country's total foreign exchange through textile exports. This is achieved by producing
competitive fabrics while minimizing production cost and time. Competitiveness depends mainly
on the productivity and quality of the fabrics produced by each mill. In order to obtain
good-quality products from high-efficiency production lines, clothing companies have
established advanced laboratories that measure fabric properties using artificial neural
networks, thereby controlling production processes and fabric quality.
Although quality levels have greatly improved with the continuous improvement of
materials and technologies, most weavers still find it necessary to perform 100%
inspection, because customer expectations have also increased and the risk of delivering
inferior-quality fabrics without inspection is not acceptable. In other words, quality
remains the most important parameter even when maintaining it increases cost or time.
Scientifically, process quality control is attained by conducting observations, tests and
inspections and thereby making decisions that improve the process's performance, says Abdel
(2012). Since practically no production or manufacturing process is 100% defect-free, the
success of a weaving mill is significantly determined by its success in reducing fabric
defects and achieving its optimum potential benefits in quality, cost, comfort, accuracy,
precision and speed. To imitate the wide variety of human functions, technology has been the
magic wand that advanced humanity from manual to mechanical, and then from
mechanical to automatic, operation.
ANN: An artificial neural network (ANN) is an efficient computing system whose central theme
is borrowed from the analogy of biological neural networks. ANNs are also known as
"artificial neural systems," "parallel distributed processing systems," or "connectionist
systems." They are a special type of machine learning algorithm modeled after the human
brain: just as the neurons in our nervous system are able to learn from past data, an ANN is
able to learn from data and provide responses in the form of predictions or classifications.
ANNs are nonlinear statistical models that capture complex relationships between inputs and
outputs in order to discover new patterns. A variety of tasks, such as image recognition,
speech recognition, machine translation and medical diagnosis, make use of artificial neural
networks.
Today, ANN is being applied to an increasing number of real-world problems of
considerable complexity.
History of ANN: The first step towards neural networks took place in 1943, when Warren
McCulloch, a neurophysiologist, and Walter Pitts, a young mathematician, wrote a paper
on how neurons might work. They modeled a simple neural network with electrical
circuits.
1949: Donald Hebb reinforced the concept of neurons in his book, The Organization of
Behavior. It pointed out that neural pathways are strengthened each time they are used.
1950: Nathaniel Rochester from the IBM research laboratories led the first effort to
simulate a neural network.
1956: the Dartmouth Summer Research Project on Artificial Intelligence provided a boost
to both artificial intelligence and neural networks. This stimulated research in AI and in
the much lower level neural processing part of the brain.
1957: John von Neumann suggested imitating simple neuron functions by using telegraph
relays or vacuum tubes.
1958: Frank Rosenblatt, a neurobiologist at Cornell, began work on the Perceptron. He
was intrigued with the operation of the eye of a fly. Much of the processing which tells a
fly to flee is done in its eye. The Perceptron, which resulted from this research, was built
in hardware and is the oldest neural network still in use today. A single-layer perceptron
was found to be useful in classifying a continuous-valued set of inputs into one of two
classes. The perceptron computes a weighted sum of the inputs, subtracts a threshold, and
passes one of two possible values out as the result.
1959: Bernard Widrow and Marcian Hoff of Stanford developed models they called
ADALINE and MADALINE. These models were named for their use of Multiple
adaptive linear elements. MADALINE was the first neural network to be applied to a real-
world problem. It is an adaptive filter which eliminates echoes on phone lines. This neural
network is still in commercial use.
1982: John Hopfield presented a paper to the National Academy of Sciences; his approach
was to create useful devices.
1985: American Institute of Physics began what has become an annual meeting — Neural
Networks for Computing.
1987: The Institute of Electrical and Electronics Engineers' (IEEE) first International
Conference on Neural Networks drew more than 1,800 attendees.
1997: A recurrent neural network framework, Long Short-Term Memory (LSTM), was
proposed by Hochreiter & Schmidhuber.
1998: Yann LeCun published Gradient-Based Learning Applied to Document
Recognition.
AN ARTIFICIAL NEURON
The parts of a biological neuron map onto those of an artificial neuron as follows:
Soma → Node
Dendrites → Input
Synapse → Weights or interconnections
Axon → Output
The artificial neuron simulates four basic functions of a biological neuron.
The various inputs to the network are represented by the mathematical symbol x(n).
Each of these inputs is multiplied by a connection weight, and the weights are represented
by w(n). In the simplest case, these products are summed, fed to a transfer function
(activation function) to generate a result, and this result is sent as output. Other
network structures are also possible, which utilize different summing functions as well
as different transfer functions.
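This simplest case can be sketched in a few lines of Python (an illustrative example only; the inputs, weights and threshold are arbitrary values, not drawn from any cited study):

```python
# A minimal artificial neuron: a weighted sum of inputs passed
# through a step transfer function with a threshold.
def neuron(inputs, weights, threshold=0.5):
    total = sum(x * w for x, w in zip(inputs, weights))  # summation function
    return 1 if total > threshold else 0                 # transfer function

# Example with two inputs and arbitrary weights:
print(neuron([1, 0], [0.7, 0.2]))  # weighted sum 0.7 > 0.5 -> prints 1
print(neuron([0, 1], [0.7, 0.2]))  # weighted sum 0.2 < 0.5 -> prints 0
```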
Major Components of an Artificial Neuron: This section describes the seven major
components which make up an artificial neuron.
1. Weighting Factors: A neuron usually receives many simultaneous inputs. Each input
has its own relative weight, which gives the input the impact that it needs on the
processing element's summation function. Some inputs are made more important than
others to have a greater effect on the processing element as they combine to produce a
neural response. Weights are adaptive coefficients that determine the intensity of the
input signal as registered by the artificial neuron. They are a measure of an input's
connection strength. These strengths can be modified in response to various training
sets and according to a network's specific topology or its learning rules.
2. Summation Function: The inputs and corresponding weights are vectors which can
be represented as (i1, i2 . . . in) and (w1, w2 . . . wn). The total input signal is the dot
product of these two vectors. The result, (i1 * w1) + (i2 * w2) + … + (in * wn), is
a single number. The summation function can be more complex than a simple weighted sum
of products. The input and weighting coefficients can be combined in many different
ways before passing on to the transfer function. In addition to summing, the
summation function can select the minimum, maximum, majority, product or several
normalizing algorithms. The specific algorithm for combining neural inputs is
determined by the chosen network architecture and paradigm. Some summation
functions have an additional ‘activation function’ applied to the result before it is
passed on to the transfer function for the purpose of allowing the summation output to
vary with respect to time.
3. Transfer Function: The result of the summation function is transformed to a
working output through an algorithmic process known as the transfer function. In the
transfer function the summation can be compared with some threshold to determine
the neural output. If the sum is greater than the threshold value, the processing
element generates a signal and if it is less than the threshold, no signal (or some
inhibitory signal) is generated. Both types of response are significant. The threshold,
or transfer function, is generally non-linear. Linear functions are limited because the
output is simply proportional to the input.
The step type of transfer function would output zero and one, one and minus one, or
other numeric combinations. Another type, the ‘threshold’ or ramping function, can
mirror the input within a given range and still act as a step function outside that range.
It is a linear function that is clipped to minimum and maximum values, making it
non-linear. Another option is a ‘S’ curve, which approaches a minimum and
maximum value at the asymptotes. It is called a sigmoid when it ranges between 0
and 1, and a hyperbolic tangent when it ranges between -1 and 1. Both the function
and its derivatives are continuous.
4. Scaling and Limiting: After the transfer function, the result can pass through
additional processes, which scale and limit. This scaling simply multiplies a scale
factor times the transfer value and then adds an offset. Limiting is the mechanism
which ensures that the scaled result does not exceed an upper or lower bound. This
limiting is in addition to the hard limits that the original transfer function may have
performed.
5. Output Function (Competition): Each processing element is allowed one output
signal, which it may give to hundreds of other neurons. Normally, the output is
directly equivalent to the transfer function's result. Some network topologies modify
the transfer result to incorporate competition among neighboring processing elements.
Neurons are allowed to compete with each other, inhibiting other processing elements
unless they have great strength. Competition can occur at one or both levels. First,
competition determines which artificial neuron will be active or provides an output.
Second, competitive inputs help determine which processing element will participate
in the learning or adaptation process.
6. Error Function and Back-Propagated Value: In most learning networks the
difference between the current output and the desired output is calculated as an error
which is then transformed by the error function to match a particular network
architecture. Most basic architectures use this error directly, but some square the error
while retaining its sign, some cube the error, and other paradigms modify the error to fit
their specific purposes. The error is propagated backwards to a previous layer. This
back-propagated value can be either the error, the error scaled in some manner (often
by the derivative of the transfer function) or some other desired output depending on
the network type. Normally, this back-propagated value, after being scaled by the
learning function, is multiplied against each of the incoming connection weights to
modify them before the next learning cycle.
7. Learning Function: Its purpose is to modify the weights on the inputs of each
processing element according to some neural based algorithm.
The first step is to multiply each of these inputs by their respective weighting factor
[w(n)]. These modified inputs are then fed into the summing function, which usually
sums these products; however, many different types of operations can be selected.
These operations can produce a number of different values, which are then
propagated forward; values such as the average, the largest, the smallest etc. Other
types of summing functions can also be created and sometimes they may be further
complicated by the addition of an activation function which enables the summing
function to operate in a time sensitive way.
The output of the summing function is then sent into a transfer function, which turns
this number into a real output (a 0 or a 1, -1 or +1 or some other number) via some
algorithm. The transfer function can also scale the output or control its value via
thresholds. This output is then sent to other processing elements or an outside
connection, as dictated by the structure of the network.
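The error and learning functions described above (components 6 and 7) can be sketched with a simple delta-rule update, in which the error, scaled by a learning rate, is multiplied against each incoming input to modify the weights. This is an illustrative single-neuron example with arbitrary data and learning rate:

```python
# Delta-rule learning for a single linear neuron:
# error = desired - actual; each weight moves in proportion
# to the error times its own input, scaled by a learning rate.
def train_step(weights, inputs, desired, lr=0.1):
    actual = sum(x * w for x, w in zip(inputs, weights))   # forward pass
    error = desired - actual                               # error function
    new_weights = [w + lr * error * x                      # learning function
                   for w, x in zip(weights, inputs)]
    return new_weights, error

weights = [0.0, 0.0]
for _ in range(100):                   # repeated learning cycles
    weights, err = train_step(weights, [1.0, 2.0], desired=1.0)
print([round(w, 3) for w in weights])  # -> [0.2, 0.4]
```

After training, the weighted sum of the inputs [1.0, 2.0] matches the desired output of 1.0.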
Layer arrangement in neural network:
Neural networks are simple clusterings of artificial neurons, created by forming layers
and interconnections.
Basically, a neural network is the grouping of neurons into layers, the connections
between these layers, and the summation and transfer functions that together comprise a
functioning neural network. Most applications require networks that contain at least
three layers: input, hidden, and output.
Input Layer
The input layer is the first layer of an ANN; it receives the input information in the
form of texts, numbers, audio files, image pixels, etc. Input data must be numerical,
which means non-numerical data has to be converted into a numerical form. The process of
manipulating data before feeding it into the neural network is called data preprocessing,
and it is often the most time-consuming part of building machine learning models.
Hidden Layers
In the middle of the ANN model are the hidden layers. There can be a single hidden
layer, as in the case of a perceptron, or multiple hidden layers. The hidden layers
contain most of the neurons in the neural network and are the heart of manipulating the
data to get the desired output. Data passes through the hidden layers and is manipulated
by many weights and biases.
Output Layer
The output layer is the final product obtained from the data manipulation performed by
the middle layers, and it can represent different things. Often, the output layer
consists of neurons that each represent an object, with the attached numerical value
being the probability that the input is that specific object.
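The three-layer arrangement can be sketched as two successive layer computations (a minimal illustration; the weights are arbitrary and bias terms are omitted):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Forward pass through one layer: each neuron is a list of weights,
# and each output is the sigmoid of that neuron's weighted input sum.
def layer_forward(inputs, layer):
    return [sigmoid(sum(x * w for x, w in zip(inputs, neuron)))
            for neuron in layer]

hidden = [[0.5, -0.4], [0.3, 0.8]]     # 2 inputs -> 2 hidden neurons
output = [[1.0, -1.0]]                 # 2 hidden -> 1 output neuron

h = layer_forward([1.0, 0.5], hidden)  # hidden-layer activations
y = layer_forward(h, output)           # output-layer activation
print(y)                               # a single value between 0 and 1
```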
Processing of ANN: It mainly depends upon the following three building blocks
a) Network Topology
b) Adjustments of Weights or Learning
c) Activation Functions
Network Topology: A network topology is the arrangement of a network along with its
nodes and connecting lines. According to the topology, ANN can be classified as a
feed-forward network or a feedback network.
Binary sigmoidal function − This activation function performs input editing between
0 and 1. It is positive in nature, always bounded, and strictly increasing. It can be
defined as
F(x) = sigm(x) = 1 / (1 + exp(−x))
Bipolar sigmoidal function − This activation function performs input editing between
-1 and 1. It can be positive or negative in nature. It is always bounded, which means its
output cannot be less than -1 and more than 1. It is also strictly increasing in nature like
sigmoid function. It can be defined as
F(x) = sigm(x) = 2 / (1 + exp(−x)) − 1 = (1 − exp(−x)) / (1 + exp(−x))
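Both activation functions can be checked numerically; the bipolar sigmoid is simply the binary sigmoid rescaled to the range (−1, 1), and it equals tanh(x/2):

```python
import math

def sigmoid(x):            # binary sigmoid, output in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def bipolar_sigmoid(x):    # bipolar sigmoid, output in (-1, 1)
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

x = 0.8
print(sigmoid(x))
print(bipolar_sigmoid(x))
# The bipolar form is a rescaling of the binary form,
# and is identical to the hyperbolic tangent of x/2:
assert abs(bipolar_sigmoid(x) - (2 * sigmoid(x) - 1)) < 1e-12
assert abs(bipolar_sigmoid(x) - math.tanh(x / 2)) < 1e-12
```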
Network selection: Because all artificial neural networks are based on the concept of
neurons, connections, and transfer functions, there is a similarity between the different
structures, or architectures, of neural networks. The majority of the variations stem from
the various learning rules and how those rules modify a network's typical topology.
Basically, most applications of neural networks fall into the following five categories:
a) prediction
b) classification
c) data association
d) data conceptualization
e) data filtering
Application of ANN in textile:
Fiber:
2. Classification of animal fibers is one of the most typical problems; it has been
resolved successfully using ANNs.
She et al., 2002 developed an intelligent fiber classification system to objectively identify
and classify two types of animal fibers, merino and mohair, by two different methods
based on image processing and artificial neural network. There are considerable
variations in the shape and contour of the scale cells and their arrangement within the
cuticle. They used these two systems based on how the scale features of the animal fibers
were extracted. The data were cast images of fibers captured by optical microscopy. They
then applied principal component analysis (PCA) to reduce the dimension of the input
images and extract an optimal linear feature before applying the neural network;
furthermore, neural network classifiers generalize better when they have a small number
of independent inputs. Finally, they used an unsupervised neural network whose outputs
served as inputs to the supervised network (a multilayer perceptron with a back-
propagation algorithm) for classification, while the fiber classes were the outputs of
the output layer. For the unsupervised network, a learning rate (step size) of 0.005 was
set, which linearly decayed to 0.0005 within the first 100 epochs, and three different
numbers of units in the hidden layer (80, 50, and 20) were used. The multilayer
perceptron used for fiber classification had a hyperbolic tangent activation function in the processing
elements of the hidden layer and output layer. They also compared their two systems and
concluded that the neural network system was more robust, since only raw images were
used, and that by developing more powerful learning strategies the classification
accuracy of the model could be improved (She et al., 2002).
4. ANNs have been used to identify production control parameters and to predict the
properties of melt-spun fibers in the case of synthetic fibers.
Kuo et al., (2004) applied neural network theory to consider the extruder screw speed,
gear pump gear speed, and winder winding speed of a melt spinning system as the inputs
and the tensile strength and yarn count of spun fibers as the outputs. The data from the
experiments were used as learning information for the neural network to establish a
reliable prediction model that can be applied to new projects. The neural network model
can predict the tensile strength and yarn count of spun fibers so that it can provide a very
good and reliable reference for spun fiber processing.
5. ANNs have been used in conjunction with NIR spectroscopy for the identification of
textile fibers.
Durand et al., (2007) studied different approaches for variable selection in the context of
near-infrared (NIR) multivariate calibration of the cotton–viscose textiles composition.
First, a model-based regression method was proposed. It consisted of genetic algorithm
optimization combined with partial least squares regression (GA–PLS). The second
approach was a relevance measure of spectral variables based on mutual information
(MI), which can be performed independently of any given regression model. As MI made
no assumption on the relationship between X and Y, non-linear methods such as feed-
forward artificial neural network (ANN) were thus encouraged for modeling in a
prediction context (MI–ANN). GA–PLS and MI–ANN models were developed for NIR
quantitative prediction of cotton content in cotton–viscose textile samples. The results
were compared to a full-spectrum (480 variables) PLS model (FS-PLS). The FS-PLS model
required 11 latent variables and yielded a 3.74% RMS prediction error in the range
0–100%. GA–PLS provided a more robust model based on 120 variables and slightly
enhanced prediction performance (3.44% RMS error). With the MI variable selection
procedure, a great improvement was obtained, as only 12 variables were retained. On the
basis of these variables, an ANN model with 12 inputs was trained, and the corresponding
prediction error was 3.43% RMS.
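The dimensionality-reduction step that the She et al. and Durand et al. studies relied on (reducing many correlated measurements to a handful of neural network inputs) can be sketched with PCA computed via SVD; the data below is a random stand-in for fiber measurements, not the authors' datasets:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in data: 100 samples of 50 correlated variables
# (generated from only 5 underlying factors).
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 50))

# PCA via SVD: center the data, then project it onto the
# leading principal components (rows of Vt).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_reduced = Xc @ Vt[:5].T   # keep 5 components as network inputs

print(X.shape, '->', X_reduced.shape)   # (100, 50) -> (100, 5)
```

Feeding the reduced matrix, rather than the raw variables, into a neural model follows the papers' observation that such classifiers generalize better with a small number of independent inputs.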
Yarn:
1. ANN can help in imparting better control on yarn quality during the carding process.
Beltran et al., (2004) developed an artificial neural network (ANN), trained with
back-propagation, that encompassed all known processing variables existing in different
spinning mills and then generalized this information to accurately predict the yarn
quality of worsted spinning for an individual mill. The ANN was subsequently trained
with commercial mill data to assess the feasibility of the method as a mill-specific
performance prediction tool, and it proved a suitable tool for predicting worsted yarn
quality for a specific mill.
2. ANN is used in auto-levelling on the draw frame for imparting the desired linear
density control.
Farooq and Cherif (2008) have reported a method of predicting the leveling action
point, which was one of the important auto-leveling parameters of the drawing frame
and strongly influences the quality of the manufactured yarn, by using artificial neural
networks (ANN). Various variables affecting the leveling action point were selected as
inputs for training the artificial neural networks, with the aim of optimizing the
auto-leveling by limiting the leveling action point search range. The Levenberg–
Marquardt algorithm was incorporated into the back-propagation to accelerate the
training and Bayesian regularization was applied to improve the generalization of the
networks. The results obtained were quite promising, indicating that the accuracy of the
computation can lead to better sliver CV% and better yarn quality.
3. ANN has been used to optimize the top roller diameter; likewise, the study of the
spinning balloon in the main spinning phase is important for controlling yarn quality.
Ghane et al., 2008 revealed that the regularity of cotton and cotton/polyester yarns
improved by optimizing the front top roller diameter of the ring machine using a self-
organized Kohonen neural network. The diameter of the top roller was reduced in stages,
and at each stage yarns were produced. The unevenness as well as the imperfections of
the produced yarns were measured. The results showed that unevenness decreases with
decreasing top roller diameter up to an optimum diameter, beyond which the unevenness of
the yarns increases rapidly as the top roller diameter decreases further. These optimum
values differ between cotton and cotton/polyester yarns. A Kohonen neural network was
applied to find the optimum top roller diameter for each type of yarn; the optimum
diameter estimated by the neural network was found to be 27.5 mm for most of the cotton
and cotton/polyester (35:75) yarns.
4. Application of ANN can reduce the warp breakage rate during weaving.
Yao et al., (2005) investigated the predictability of the warp breakage rate from a
sizing yarn quality index using a feed-forward back-propagation network in an
artificial neural network system. Eight quality indices (size add-on, abrasion
resistance, abrasion resistance irregularity, hairiness beyond 3 mm, breaking strength,
breaking strength irregularity, breaking elongation, and breaking elongation
irregularity) and warp breakage rates were rated under controlled conditions. A good
correlation between predicted and actual warp breakage rates indicated that warp
breakage rates can be predicted by neural networks. A model with a single sigmoid
hidden layer with four neurons was able to produce better predictions than the other
models of this particular data set in the study.
5. ANNs have been used for the prediction of hairiness of worsted wool yarns.
Khan et al., (2009) studied the performance of multilayer perceptron (MLP) and
multivariate linear regression (MLR) models for predicting the hairiness of worsted-
spun wool yarns objectively by examining 75 sets of yarns consisting of various top
specifications and processing parameters of shrink-resist treated, single-ply, pure
wool worsted yarns. The results indicated that the MLP model predicted yarn hairiness
more accurately than the MLR model and showed that a degree of nonlinearity existed in
the relationship between yarn hairiness and the input factors
considered. Therefore, the artificial neural network (ANN) model had the potential
for wide mill specific applications for high precision prediction of hairiness of a yarn
from limited top, yarn and processing parameters. The use of the ANN model as an
analytical tool may facilitate the improvement of current products by offering
alternative material specification and/or selection and improved processing
parameters governed by the predicted outcomes of the model. Sensitivity analysis of the
MLP model showed that yarn twist, ring size and average fiber length (hauteur) had the
greatest effect on yarn hairiness, with twist having the greatest impact.
6. The spinning of staple fibers into yarns is a multistage procedure involving many
parameters that influence the characteristics of the end product, viz. the spun yarn;
ANN is an excellent method for such prediction. The cost minimization of cotton fiber is
also ensured by using a classical linear programming approach in combination with ANN.
Zeng et al., (2004) tried to predict the tensile properties (yarn tenacity) of air-jet spun
yarns produced from 75/25 polyester on an air-jet spintester by two models, namely
a neural network model and a numerical simulation. Fifty tests were carried out to obtain
average yarn tenacity values for each sample. A neural network model provided
quantitative predictions of yarn tenacity by using the following parameters as inputs:
first and second nozzle pressures, spinning speed, distance between front roller nip
and first nozzle inlet, and the position of the jet orifice in the first nozzle so that the
effects of parameters on yarn tenacity can be determined. Meanwhile, a numerical
simulation provided a useful insight into the flow characteristics and wrapping
formation process of edge fibers in the nozzle of an air-jet spinning machine; hence,
the effects of nozzle parameters on yarn tensile properties can be predicted. The result
showed that excellent agreement was obtained between the two methods. Moreover, the
predicted and experimental values agreed well, indicating that the neural network was an
excellent prediction method.
7. ANNs have been used for the prediction of the hairiness of worsted wool yarns and of
cotton yarns. In the same way, ANNs have been used for the prediction of the evenness of
ring-spun worsted yarns and cotton yarns, and of the evenness of blended rotor yarns.
8. ANN has also been used to splice two yarn ends more perfectly.
Ünal et al., (2010) investigated the retained spliced diameter with regard to splicing
parameters and fiber and yarn properties. The yarns were produced from eight
different cotton types in three yarn counts (29.5, 19.7 and 14.8 tex) and three different
twist coefficients (αTex 3653, αTex 4038, αTex 4423). To investigate the effects of
splicing parameters on the retained spliced diameter, opening air pressure, splicing air
pressure and splicing air time were set according to an orthogonal experimental
design. The retained spliced diameter was calculated and predicted by using an
artificial neural network (ANN) and response surface methods. Analyses showed that
ANN models were more powerful compared with response surface models in
predicting the retained spliced diameter of ring spun cotton yarns.
9. Image processing technology has been interfaced with neural networks to extract
defects in yarn packages and thereby classify the quality grades of the yarn packages.
10. ANN is useful in defining the relationship between process variables and molecular
structure for synthetic yarns.
11. ANNs have also been used for the appearance analysis of false-twist textured yarn
packages, for the prediction of yarn shrinkage, and for the modelling of the relaxation
behaviour of yarns.
Lin (2007) studied the shrinkages of the warp and weft yarns of 26 woven fabrics
manufactured on an air-jet loom, using a neural net model to determine the relationships
between the shrinkage of the yarns and the cover factors of the yarns and fabrics. The
shrinkages were affected by various factors such as loom settings, fabric type, and the
properties of the warp and weft yarns. The neural net was trained with 13
experimental data points. A test on 13 data points showed that the mean errors
between the known output values and the output values calculated using the neural
net were only 0.0090 and 0.0059 for the shrinkage ratio of warp (S1) and weft (S2)
yarn, respectively. There was a close match between the actual and predicted
shrinkage of the warp (weft) yarn. The test results gave R2 values of 0.85 and 0.87
for the shrinkage of the warp (i.e., S1) and weft (i.e., S2), respectively. This showed
that the neural net produced good results for predicting the shrinkage of yarns in
woven fabrics. Different woven fabrics manufactured on different looms (e.g., rapier,
gripper), raw-material yarn combinations (e.g., T/C × T/R, T/R × T/R, T/C × C), and
fabric structural classes (e.g., twill, satin) were examined to measure the shrinkage
ratios of the warp and weft yarns. The developed neural net model was then trained on
the obtained data, and the results showed that the prediction of yarn shrinkage in
off-loomed fabrics can be accomplished with a prediction model constructed with the
neural net.
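Several of the studies above follow the same pattern: train a small multilayer perceptron to map a few process parameters to a yarn property. A minimal sketch of that pattern, using synthetic stand-in data and a hypothetical nonlinear relationship rather than any published dataset:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in data: 3 process parameters -> 1 yarn property,
# related by an assumed (hypothetical) nonlinear function.
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]

# One-hidden-layer MLP trained with plain gradient descent.
W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                 # hidden layer
    pred = (H @ W2 + b2).ravel()             # linear output layer
    err = pred - y                           # error function
    # Back-propagate the error and update the weights.
    gW2 = H.T @ err[:, None] / len(y)
    gH = err[:, None] @ W2.T * (1 - H**2)    # tanh derivative
    gW1 = X.T @ gH / len(y)
    W2 -= lr * gW2; b2 -= lr * err.mean()
    W1 -= lr * gW1; b1 -= lr * gH.mean(axis=0)
print(float(np.mean(err**2)))                # mean squared training error
```

The training error falls below the variance of the target, i.e., the network predicts better than simply guessing the mean; the studies cited here use the same idea with measured mill data and more carefully tuned training schemes.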
Fabric:
20. Colour measurement, evaluation, comparison and prediction are major activities in
the dyeing and finishing stages of textile processing, and ANNs have been applied to
these tasks.
Apparel:
Non-Woven:
Conclusion: In summary, artificial neural networks have the ability to perform tasks outside the
scope of traditional processors. They can recognize patterns within vast data sets and then
generalize those patterns into recommended courses of action. Neural networking remains
something of an "art." This art involves an understanding of the various network topologies,
current hardware, current software tools, the application to be solved, and a strategy to
acquire the necessary data to train the network. It further involves the selection of learning
rules, transfer functions, summation functions, and how to connect the neurons within the
network. The art of neural networking then requires a lot of hard work as data is fed into the
system, performance is monitored, processes are tweaked, connections are added, rules are
modified, and so on until the network achieves the desired results. These desired results are
statistical in nature; the network is not always right. It is for that reason that neural
networks find themselves in applications where humans are also unable to be always right.
In this modern era, ANN is being used in many areas to solve various problems with intelligence
similar to that of human beings. The application of ANN was not widely accepted in
labor-intensive clothing production. However, the global competitive environment and the target
of achieving a low cost of production are the main reasons for AI's wider application in the
apparel industry, from material selection and sourcing, through manufacturing, to retailing.
ANN can be used in various processes of textile production such as fiber grading, prediction of
yarn properties, detection of fabric faults, and dye recipe prediction. Similarly, ANN can be
applied in all the stages of garment production: preproduction, production, and postproduction
operations. Developed countries have already started using ANN to improve garment quality,
enhance customer service, and hence increase sales. AI is progressing rapidly, and in the near
future it will become an important tool for garment manufacturers for enhancing quality,
increasing production, lowering operating costs, and exercising in-house control over
production, leading to quick response and the just-in-time concept.
References: