
FOUNDATION DESIGN USING ARTIFICIAL NEURAL NETWORK

CHAPTER 1

INTRODUCTION

1.1 General

In the design and analysis of foundations, the many variables involved - type of foundation, soil
support (safe bearing capacity, SBC), earthquake parameters and different loading conditions -
make the process complicated and lengthy, and can result in uneconomical or unsafe designs.
An Artificial Neural Network (ANN) is modelled on the neuron system of the human brain; it
predicts results from the data and biases supplied to the model and can reach accuracies above
95%. While feeding data to the neural network model, the variables have to be specified, and the
neural system then runs the model accordingly. Once the model is generated, the design and
analysis for a new project is predicted by the neuron system in accordance with the data entered
in the model.
Because foundation design is time- and energy-consuming, and carries the risk of producing
uneconomical or unsafe foundations, an Artificial Neural Network offers a solution: sample data
covering various foundation types, soil supports and different loads is entered into the model.
Using the NNTOOL of the MATLAB software the model can be generated, and once the model is
prepared, the foundation design can be carried out.

1.2 Foundation

The foundation is that portion of a structure that transmits the loads from the structure to the
underlying foundation material. There are two major requirements to be satisfied in the
design of foundations:
1) Provision of an adequate factor of safety against failure of the foundation material. Failure
of the foundation material may lead to failure of the foundation and may also lead to failure
of the entire structure.
2) Adequate provision against damage to the structure which may be caused by total or
differential settlements of the foundations.
In order to satisfy these requirements, it is necessary to carry out a thorough exploration of
the foundation materials together with an investigation of the properties of these materials by
means of laboratory or field testing. Using these physical properties of the foundation
materials the foundations may be designed to carry the loads from the structure with an
adequate margin of safety. In doing this, much use may be made of soil mechanics but to a


large extent foundation engineering still remains an art. This chapter will be largely
concerned with the contributions that may be made by soil mechanics to foundation
engineering. There are four major types of foundations which are used to transmit the loads
from the structure to the underlying material. The most common type of foundation is the
footing which consists of an enlargement of the base of a column or wall so that the pressure
transmitted to the foundation material will not cause failure or excessive settlement. In order
to reduce the bearing pressure transmitted to the foundation material the area of the footing
may be increased; as the size of the footing increases, the pressure on the supporting material
decreases.
If the foundation material cannot withstand the pressure transmitted by the footings, the
pressure may be reduced by combining all of the footings into a single slab or raft covering
the entire plan area of the structure. Raft foundations are also used to bridge localized weak
or compressible areas in the foundation material. They are also used where it is desirable to
reduce the differential settlements that may occur between adjacent columns.
1.3 Soft Computing Technique
The definition of soft computing is not precise. Lotfi A. Zadeh, the inventor of the term soft
computing, describes it as follows (Zadeh, 1994):‘‘Soft computing is a collection of
methodologies that aim to exploit the tolerance for imprecision and uncertainty to achieve
tractability, robustness, and low solution cost. Its principal constituents are fuzzy logic,
neurocomputing, and probabilistic reasoning. Soft computing is likely to play an increasingly
important role in many application areas, including software engineering. The role model for
soft computing is the human mind.’’
Soft Computing is an emerging field that consists of complementary elements of fuzzy logic,
neural computing, evolutionary computation, machine learning and probabilistic reasoning.
Due to their strong learning, cognitive ability and good tolerance of uncertainty and
imprecision, soft computing techniques have found wide applications. Generally speaking,
soft computing techniques resemble human reasoning more closely than traditional
techniques which are largely based on conventional logical systems, such as sentential logic
and predicate logic, or rely heavily on the mathematical capabilities of a computer.
The guiding principle of soft computing is: Exploit the tolerance for imprecision, uncertainty
and partial truth to achieve tractability, robustness and low solution cost. Soft Computing
Techniques (Artificial Neural Networks, Genetic Algorithms, Fuzzy Logic Models) have
been recognized as attractive alternatives to the standard, well established “hard computing”
paradigms. Traditional hard computing methods are often too cumbersome for today’s


problems. They always require a precisely stated analytical model and often a great deal of
computational time. Soft computing techniques, which trade unnecessary precision for gains in
understanding system behavior, have proved to be important practical
tools for many contemporary problems. Neural Networks (NN) and FLMs are universal
approximators of any multivariate function and can therefore be used for modeling highly
nonlinear, unknown, or partially known complex systems, plants, or processes.
1.4 Artificial Neural Networks (ANN)
Artificial neural networks (ANNs) are a non-linear, data-driven, self-adaptive approach as
opposed to the traditional model-based methods. They are powerful tools for modeling,
especially when the underlying data relationship is unknown. ANNs can identify and learn
correlated patterns between input data sets and corresponding target values. After training,
ANNs can be used to predict the outcome of new independent input data. ANNs imitate the
learning process of the human brain and can process problems involving non-linear and
complex data even if the data are imprecise and noisy. Thus they are well suited to the
modeling of data which are known to be complex and often non-linear.

A very important feature of these networks is their adaptive nature, where "learning by
example" replaces "programming" in solving problems. This feature makes such
computational models very appealing in application domains where one has little or
incomplete understanding of the problem to be solved but where training data is readily
available. These networks are "neural" in the sense that they may have been inspired by
neuroscience but not necessarily because they are faithful models of biological neural or
cognitive phenomena. In fact, the majority of such networks are more closely related to traditional
mathematical and/or statistical models such as non-parametric pattern classifiers, clustering
algorithms, nonlinear filters, and statistical regression models than they are to neurobiology
models.
Artificial neural networks (ANNs) or connectionist systems are a computational model used
in machine learning, computer science and other research disciplines, which is based on a
large collection of connected simple units called artificial neurons, loosely analogous
to axons in a biological brain. Connections between neurons carry an activation signal of
varying strength. If the combined incoming signals are strong enough, the neuron becomes
activated and the signal travels to other neurons connected to it. Such systems can be trained
from examples, rather than explicitly programmed, and excel in areas where the solution
or feature detection is difficult to express in a traditional computer program. Like other


machine learning methods, neural networks have been used to solve a wide variety of tasks,
like computer vision and speech recognition, that are difficult to solve using
ordinary rule-based programming.

Typically, neurons are connected in layers, and signals travel from the first (input), to the last
(output) layer. Modern neural network projects typically have a few thousand to a few
million neural units and millions of connections; their computing power is similar to a worm
brain, several orders of magnitude simpler than a human brain. The signals and state of
artificial neurons are real numbers, typically between 0 and 1. There may be a threshold
function or limiting function on each connection and on the unit itself, such that the signal
must surpass the limit before propagating. Back propagation is the use of forward stimulation
to modify connection weights, and is sometimes done to train the network using known
correct outputs. However, the success is unpredictable: after training, some systems are good
at solving problems while others are not. Training typically requires several thousand cycles
of interaction.

Fig 01. Typical Neuron System

In the above figure, circles represent neurons and arrows represent connections from the output
of one neuron to the input of another.
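
As a concrete illustration of the weighted-sum-and-squashing behaviour described above, a
minimal MATLAB sketch of a single artificial neuron is given below; all numerical values are
assumed for illustration only.

  % Minimal sketch of one artificial neuron: incoming signals are weighted,
  % summed with a bias, and passed through a limiting (sigmoid) function so
  % that the output stays between 0 and 1.
  x = [0.2; 0.7; 0.1];      % incoming signals from three other neurons (assumed)
  w = [0.5; -1.2; 0.8];     % connection weights (assumed)
  b = 0.1;                  % bias of the unit (assumed)
  v = w.'*x + b;            % combined incoming signal
  y = 1/(1 + exp(-v));      % sigmoid squashes the output into (0, 1)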
The goal of the neural network is to solve problems in the same way that a human would,
although several neural network categories are more abstract. New brain research often


stimulates new patterns in neural networks. One new approach is the use of connections which
span further to connect processing layers rather than only adjacent neurons. Other research
explores the different types of signal that axons propagate over time, as in deep
learning, which interpolates greater complexity than a set of Boolean variables being simply on or
off. Newer types of network are more free flowing in terms of stimulation and inhibition,
with connections interacting in more chaotic and complex ways. Dynamic neural networks
are the most advanced, in that they dynamically can, based on rules, form new connections
and even new neural units while disabling others.

There are problem categories that cannot be formulated as an algorithm - problems that
depend on many subtle factors, for example the purchase price of a real estate, which our
brain can (approximately) estimate but which a computer, without an algorithm, cannot.
Therefore, the question to be asked is: how do we learn to explore such problems? We learn - a
capability a computer obviously does not have. Humans have a brain that can learn; computers
have some processing units and memory. These allow the computer to perform the most complex
numerical calculations in a very short time, but they are not adaptive. If we compare the computer
and the brain, we note that, theoretically, the computer should be more powerful than our brain:
it comprises about 10^9 transistors with a switching time of 10^-9 seconds, while the brain
contains about 10^11 neurons, which only have a switching time of about 10^-3 seconds. However,
the largest part of the brain is working continuously, while the largest part of the computer is
only passive data storage. Thus the brain is parallel and performs close to its theoretical
maximum. (Of course, this comparison is - for obvious reasons - controversially discussed by
biologists and computer scientists, since response time and quantity do not tell anything about
the quality and performance of the processing units, and neurons and transistors cannot be
compared directly. Nevertheless, the comparison serves its purpose and indicates the advantage
of parallelism in terms of processing time.)

1.5 Objectives
Foundation design is a complex process due to the variants in type of foundation, soil support
(SBC), material constants, earthquake parameters and different loading. Manual design of
such foundations consumes time and working man-hours. To simplify the work some
spreadsheets are available, but they also need manual data feeding, which may cause
manual errors while entering data and also consumes time in clerical workmanship.


An Artificial Neural Network predicts results from the data and biases provided to the model,
reaching accuracies above 95%, and may result in safer and more economical foundation designs.
The NN model should achieve:

1) A regression value near unity, indicating accuracy.


2) The ability to predict results based on new inputs.


CHAPTER 2

LITERATURE REVIEW

2.1 General

Artificial neural networks are a family of numerical learning techniques. They consist of
many nonlinear computational elements that form the network nodes, linked by weighted
interconnections. Research into artificial neural networks (ANNs) is trying to model the
human brain neurons and their processes. The human nervous system is composed of a vast
number of single, interconnected cellular units, the neurons.
Artificial neural networks are algorithms for cognitive tasks, such as learning and
optimization. They have the ability to learn and generalize from examples without knowledge
of rules. Research into artificial neural networks and their application to structural
engineering problems is gaining interest and is growing rapidly. The use of artificial neural
networks in structural engineering has evolved as a new computing paradigm, even though
still very limited.
Neural networks are typically organized in layers. Layers are made up of a number of
interconnected 'nodes' which contain an 'activation function'. Patterns are presented to the
network via the 'input layer', which communicates to one or more 'hidden layers' where the
actual processing is done via a system of weighted 'connections'. The hidden layers then link
to an 'output layer' where the answer is output.

2.2 Literature Review

2.2.1 Maria J. Sulewska (2011): “Applying Artificial Neural Networks for analysis of
geotechnical problems”
This study discusses some applications of Artificial Neural Networks (ANNs) in geo-engineering
through the analysis of the following six geotechnical problems, related mainly to
prediction and classification purposes: 1) prediction of the Over-consolidation Ratio (OCR), 2)
determination of potential soil liquefaction, 3) prediction of foundation settlement, 4)
evaluation of pile bearing capacity, 5) prediction of compaction parameters for cohesive
soils, 6) compaction control of embankments built of cohesionless soils. The problems


presented are based on the applications of the Multi-Layered Perceptron (MLP) neural
networks.

2.2.2 Abbes Berrais (1999): “Artificial Neural Networks in Structural Engineering:


Concept and Applications”
In this paper the concept and theoretical background of ANNs have been introduced. ANNs
form a family of numerical learning techniques. They can be used to model any nonlinear
mapping between variables and can be considered as complementary to the existing
conventional programming tools. The novelty of ANNs lies in the way in which the novel
computing techniques are able to contribute new and effective solutions to problems that
have challenged the power of more conventional methods. The investigation described in this
paper has provided a valuable insight and identified a number of advantages and
shortcomings of using ANNs in structural engineering. Some of the advantages of ANNs are:
1) The ability to extract information from incomplete and noisy data. 2) Acquire experience
and knowledge through self-training and organization of the knowledge. 3) The potential for
very fast optimization. 4) Their suitability for problems in which algorithmic solutions are
difficult to develop or do not exist. 5) Suitability for rapid application development.

2.2.3 Mr. Kumar Gaurav, Dr. Sankha Bhaduri, Dr. Arbind Kumar (2016): “Prediction
of Deflection of Cantilever Beam of Any Arbitrary Length Using Soft Computation
Technique.”
In this study the prediction of the large deflection of a cantilever beam is performed
for variable lengths of the beam. An Artificial Neural Network technique is applied to
predict the deflection for any arbitrary length of the beam. The results from the analysis
clearly prove the efficiency of the network. This soft computation application to the
large deflection problem of a beam helps to avoid the mathematical complexity of the
nonlinear equation of geometrical non-linearity. The proposed artificial neural network is
capable of predicting the deflection of the free end of a cantilever beam of any length under
the action of a static load at the tip of the beam. Therefore, this approach is very effective in
the field of nonlinear deflection of structures.

2.2.4 Sangeeta Yadav, K. K. Pathak, Rajesh Shrivastava (2010): “Shape Optimization of


Cantilever Beams Using Neural Network”


In this study, an application of neural networks to shape optimization problems is demonstrated.
Considering different geometrical parameters, finite element analyses of cantilever
beams are carried out. Using these results, a back propagation neural network is trained.
Successfully trained networks are further used for shape optimisation of newer problems.
The optimised beams are then validated with finite element analysis results and found to
be in close agreement. The approach overcomes some of the drawbacks of conventional methods
of shape optimisation, such as high computational time and large memory requirement. It is
observed that the proposed approach works efficiently for optimizing shape accounting for
displacement and stress criteria. It offers a handy tool for design engineers who are not
familiar with the theoretical and computational aspects of shape optimization.

2.2.5 Ehsan Momeni, Ramli Nazir, Danial Jahed Armaghani, Harnedi Maizir (2015):
“Application of Artificial Neural Network for Predicting Shaft and Tip Resistances of
Concrete Piles”
In this study, an ANN-based predictive model for estimating the axial bearing capacity (ABC) of piles and its distribution
is proposed. For network construction purpose, 36 PDA tests were performed on various
concrete piles in different project sites. The PDA results, pile geometrical characteristics as
well as soil investigation data were used for training the ANN models. Findings indicate the
feasibility of ANN in predicting ultimate, shaft and tip bearing resistances of piles. The
coefficients of determination, R², equal to 0.941, 0.936, and 0.951 for testing data reveal that
the shaft, tip and ultimate bearing capacities of piles predicted by ANN-based model are in
close agreement with those of HSDPT. By using sensitivity analysis, it was found that the
length and area of the piles are dominant factors in the proposed predictive model.

2.2.6 H SUDARSANA RAO and B RAMESH BABU (2007): “Hybrid neural network
model for the design of beam subjected to bending and shear”
This paper demonstrates the applicability of Artificial Neural Networks (ANN) and Genetic
Algorithms (GA) for the design of beams subjected to moment and shear. A hybrid neural
network model which combines the features of feed forward neural networks and genetic
algorithms has been developed for the design of beam subjected to moment and shear. The
network has been trained with design data obtained from design experts in the field. The
hybrid neural network model learned the design of beam in just 1000 training cycles. After
successful learning, the model predicted the depth of the beam, area of steel, spacing of


stirrups required for new problems with accuracy satisfying all design constraints. The
various stages involved in the development of a genetic algorithm based neural network
model are addressed at length in this paper.

2.2.7 Mohamed A. Shahin, Mark B. Jaksa and Holger R. Maier (2001) : “Artificial
Neural Network Applications In Geotechnical Engineering”
A review of the literature reveals that ANNs have been used successfully in pile capacity
prediction, modelling soil behaviour, site characterisation, earth retaining structures,
settlement of structures, slope stability, design of tunnels and underground openings,
liquefaction, soil permeability and hydraulic conductivity, soil compaction, soil swelling and
classification of soils. The objective of this paper is to provide a general view of some
ANN applications for solving some types of geotechnical engineering problems. It is not
intended to describe the ANN modelling issues in geotechnical engineering. The paper also
does not intend to cover every single application or scientific paper that can be found in the
literature. For brevity, some works are selected to be described in some detail, while others
are acknowledged for reference purposes. The paper then discusses the strengths and
limitations of ANNs compared with the other modelling approaches.

2.2.8 Lirong Sha and Yang Yue (2013): “Structural Reliability Optimization Design
based on Artificial Neural Network”
An ANN-based optimization design considering the fatigue reliability requirements of a
structure was proposed in this paper. The ANN-based response surface method was used to
analyse the fatigue reliability of the structure. The fatigue reliability requirements were taken as
constraints and the structural weight as the objective function; the ANN model was
used to simulate the relationship between the fatigue reliability and the geometric
dimensions of the structure, and the optimization result of the structure with a minimum weight
was obtained, thus providing economic benefit while ensuring the safety of the structure.

2.2.9 H. Elarabi, S. A. Abdelgalil (2014): “Application of artificial neural network for


prediction of Sudan soil profile”
The aim of this paper is to predict the natural geotechnical profile of Sudan, which is
very important and vital for all civil engineering work. Availability of a pre-predicted profile
before performing drilling and boring minimizes time and cost. To achieve this goal an Artificial
Neural Networks (ANN) program is used. This program has the capability to study and
process problems that have complex variables, such as the Sudan topographical profile. Five
main ANN models were constructed based on the soil data of 1909 boreholes from 417 sites.
These models use the three-dimensional coordinates as input data to predict the soil profile and
soil parameters at different locations. Artificial Neural Networks are found to have an
acceptable ability to predict the soil classification and soil parameters in Sudan. The lack of
accuracy in some predicted data, when compared with the soil profile obtained from actual
boreholes, is due to inconsistency of coordinates and depth.

2.2.10 Wardani S.P.R., Surjandari N.S., Jajaputra A.A. (2013): “Analysis of Ultimate


Bearing Capacity of Single Pile Using the Artificial Neural Networks Approach: A Case
Study”
This study aims to apply a NN model, named the NN_Qult model, for prediction of the ultimate
bearing capacity of single pile foundations. The results of the analysis model were then compared
with the Meyerhof (1976) and Briaud (1985) formulas. At the modeling stage, data from
full-scale pile load tests and SPT were used. The selected input variables are: d (pile
diameter), L (embedded length of the pile), the N60 (shaft) value, and the N60 (tip) value.
The study generates design charts that are expected to predict the ultimate bearing capacity
of a single pile foundation. The results showed that neural networks can be used for
prediction of the ultimate bearing capacity of single pile foundations, particularly because
the sensitivity analysis results indicated the suitability of the artificial neural network model
with existing theories.
2.2.11 Harnedi Maizir and Khairul Anuar Kassim (2013): “Neural Network Application
in Prediction of Axial Bearing Capacity of Driven Piles”
This paper presents the application of the Artificial Neural Network (ANN) for prediction of the
axial capacity of a driven pile by adopting data collected from several projects in Indonesia
and Malaysia. As many as 300 data sets were selected for this study. In this study, the ANN was
set up and trained to predict the axial bearing capacity from high strain dynamic testing, i.e. Pile
Driving Analyzer (PDA) data. A computerized intelligent system was developed
for predicting the total pile capacity for various pile characteristics and hammer energies.
The results show that the neural network models give a good prediction of the axial bearing
capacity of piles if both the stress wave data and the properties of the driven pile and the driving
system are considered in the input data. Verification of the model indicates that the number
of data is not always related to the quality of the prediction.

2.2.12 Marijana Lazarevska, Milos Knezevic, Meri Cvetkovska, Ana


Trombeva-Gavriloska (2014): “APPLICATION OF ARTIFICIAL NEURAL
NETWORKS IN CIVIL ENGINEERING”

In this paper, the application of artificial neural networks for solving complex civil
engineering problems, which is of huge importance for the construction design process, is
discussed. They can be successfully used for prognostic modelling in different engineering fields,
especially in those cases where some prior (numerical or experimental) analyses have already
been made. This paper presents some of the positive aspects of a neural network model that was
used for determination of the fire resistance of construction elements.

2.2.13 D.-S. Jeng, D. H. Cha and M. Blumenstein: “Application of Neural Network in


Civil Engineering Problems”

In this paper, an artificial neural network (ANN) is applied to several civil engineering
problems which are difficult to solve or interpret through conventional approaches of
engineering mechanics. These include tide forecasting, earthquake-induced liquefaction and
wave-induced seabed instability. As shown in the examples, the ANN model can provide
reasonable accuracy for civil engineering problems and is an effective tool for engineering
applications.


CHAPTER 3

SOFT COMPUTING TECHNIQUES

3.1 Artificial Neural Network

Work on artificial neural networks, commonly referred to as neural networks, has been
motivated right from its inception by the recognition that the brain computes in an entirely
different way from the conventional digital computer. Fig. 02 and Fig. 03 show the
biological neuron.

Fig.02. Biological Neuron


A neural network is a massively parallel distributed processor that has a natural propensity
for storing experiential knowledge and making it available for use. It resembles the brain in
two respects:
1. Knowledge is acquired by the network through a learning process.
2. Interneuron connection strengths known as synaptic weights are used to store the
knowledge.
Neural networks have seen an explosion of interest over the last few years and are being
successfully applied across an extraordinary range of problem domains, in areas as diverse as
finance, medicine, engineering, geology, physics and biology. The excitement stems from the
fact that these networks are attempts to model the capabilities of the human brain. From a
statistical perspective neural networks are interesting because of their potential use in
prediction and classification problems.


The most important functional unit in the human brain is a class of cells called the neuron.

Fig. 03. Schematic Diagram of a Neuron (dendrites, cell body, axon, synapse)
• Dendrites – Receive information
• Cell Body – Process information
• Axon – Carries processed information to other neurons
• Synapse – Junction between Axon end and Dendrites of other Neurons
Artificial neural networks (ANNs) are a non-linear, data-driven, self-adaptive approach as
opposed to the traditional model-based methods. They are powerful tools for modeling,
especially when the underlying data relationship is unknown. ANNs can identify and learn
correlated patterns between input data sets and corresponding target values. After training,
ANNs can be used to predict the outcome of new independent input data. ANNs imitate the
learning process of the human brain and can process problems involving non-linear and
complex data even if the data are imprecise and noisy. Thus they are well suited to the
modeling of data which are known to be complex and often non-linear.
A very important feature of these networks is their adaptive nature, where "learning by
example" replaces "programming" in solving problems. This feature makes such
computational models very appealing in application domains where one has little or
incomplete understanding of the problem to be solved but where training data is readily
available. These networks are "neural" in the sense that they may have been inspired by
neuroscience but not necessarily because they are faithful models of biological neural or
cognitive phenomena. In fact, the majority of such networks are more closely related to traditional
mathematical and/or statistical models such as non-parametric pattern classifiers, clustering


algorithms, nonlinear filters, and statistical regression models than they are to neurobiology
models.

3.1.1 Benefits of neural networks

In practice, however, neural networks cannot provide the solution working by themselves
alone. Rather, they need to be integrated into a consistent system engineering approach.
Specifically, a complex problem of interest is decomposed into a number of relatively simple
tasks, and neural networks are assigned a subset of the tasks (e.g. pattern recognition,
associative memory, control) that match their inherent capabilities. It is important to
recognize, however, that we have a long way to go (if ever) before we can build a computer
architecture that mimics a human brain. The use of neural networks offers the following
useful properties and capabilities:
1. Nonlinearity. A neuron is basically a nonlinear device. Consequently, a neural network,
made up of an interconnection of neurons, is itself nonlinear. Moreover, the nonlinearity is of
a special kind in the sense that it is distributed throughout the network.
2. Input-output mapping. A popular paradigm of learning called supervised learning involves
the modification of the synaptic weights of a neural network by applying a set of training
samples. Each sample consists of a unique input signal and the corresponding desired
response. The network is presented a sample picked at random from the set, and the synaptic
weights (free parameters) of the network are modified so as to minimize the difference
between the desired response and the actual response of the network produced by the input
signal in accordance with an appropriate criterion. The training of the network is repeated for
many samples in the set until the network reaches a steady state, where there are no further
significant changes in the synaptic weights. The previously applied training samples may be
reapplied during the training session, usually in a different order. Thus the network learns
from the samples by constructing an input-output mapping for the problem at hand.
3. Adaptivity. Neural networks have a built-in capability to adapt their synaptic weights to
changes in the surrounding environment. In particular, a neural network trained to operate in
a specific environment can be easily retrained to deal with minor changes in the operating
environmental conditions. Moreover, when it is operating in a non-stationary environment a
neural network can be designed to change its synaptic weights in real time. The natural
architecture of a neural network for pattern classification, signal processing, and control


applications, coupled with the adaptive capability of the network, makes it an ideal tool for
use in adaptive pattern classification, adaptive signal processing, and adaptive control.
4. Contextual information. Knowledge is represented by the very structure and activation
state of a neural network. Every neuron in the network is potentially affected by the global
activity of all other neurons in the network. Consequently, contextual information is dealt
with naturally by a neural network.
5. Fault tolerance. A neural network, implemented in hardware form, has the potential to be
inherently fault tolerant in the sense that its performance is degraded gracefully under
adverse operating conditions. For example, if a neuron or its connecting links are damaged, recall of a
stored pattern is impaired in quality. However, owing to the distributed nature of information
in the network, the damage has to be extensive before the overall response of the network is
degraded seriously. Thus, in principle, a neural network exhibits a graceful degradation in
performance rather than catastrophic failure.
6. VLSI implementability. The massively parallel nature of a neural network makes it
potentially fast for the computation of certain tasks. This same feature makes a neural
network ideally suited for implementation using very-large-scale-integrated (VLSI)
technology.
7. Uniformity of analysis and design. Basically, neural networks enjoy universality as
information processors. We say this in the sense that the same notation is used in all the
domains involving the application of neural networks. This feature manifests itself in
different ways:
a) Neurons, in one form or another, represent an ingredient common to all neural networks.
b) This commonality makes it possible to share theories and learning algorithms in different
applications of neural networks.
c) Modular networks can be built through a seamless integration of modules.
8. Neurobiological analogy. The design of a neural network is motivated by analogy with the
brain, which is a living proof that fault-tolerant parallel processing is not only physically
possible but also fast and powerful. Neurobiologists look to (artificial) neural networks as a
research tool for the interpretation of neurobiological phenomena. On the other hand,
engineers look to neurobiology for new ideas to solve problems more complex than those
based on conventional hard-wired design techniques.


3.1.2 Models of a Neuron

A neuron is an information-processing unit that is fundamental to the operation of a neural


network. We may identify three basic elements of the neuron model.

Fig. 04. Nonlinear model of a neuron.

1. A set of synapses, each of which is characterized by a weight or strength of its own.


Specifically, a signal x_j at the input of synapse j connected to neuron k is multiplied by the
synaptic weight w_kj. It is important to make a note of the manner in which the subscripts of
the synaptic weight w_kj are written. The first subscript refers to the neuron in question and
the second subscript refers to the input end of the synapse to which the weight refers. The
weight w_kj is positive if the associated synapse is excitatory; it is negative if the synapse is
inhibitory.
2. An adder for summing the input signals, weighted by the respective synapses of the
neuron.
3. An activation function for limiting the amplitude of the output of a neuron. The activation
function is also referred to in the literature as a squashing function in that it squashes (limits)
the permissible amplitude range of the output signal to some finite value. Typically, the
normalized amplitude range of the output of a neuron is written as the closed unit interval [0,
1] or alternatively [-1, 1].


The model of a neuron also includes an externally applied bias (threshold) b_k that has
the effect of increasing or lowering the net input to the activation function. In mathematical
terms, we may describe a neuron k by writing the following pair of equations:
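
In the standard form consistent with the elements just described (weighted synapses, an adder
and an activation function with bias), this pair of equations is:

  u_k = \sum_{j=1}^{m} w_{kj} x_j        and        y_k = \varphi(u_k + b_k)

where x_1, ..., x_m are the input signals, w_{k1}, ..., w_{km} are the synaptic weights of neuron k,
u_k is the adder (linear combiner) output, b_k is the bias, \varphi(·) is the activation function, and
y_k is the output signal of the neuron.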

3.1.3 Learning Process

Among the many interesting properties of a neural network is the ability of the network to
learn from its environment and to improve its performance through learning. A neural
network learns about its environment through an iterative process of adjustments applied to
its synaptic weights and thresholds. We define learning in the context of neural networks as
follows: Learning is a process by which the free parameters of a neural network are adapted
through a continuing process of stimulation by the environment in which the network is
embedded. The type of learning is determined by the manner in which the parameter changes
take place. This definition of the learning process implies the following sequence of events:
1. The neural network is stimulated by an environment.
2. The neural network undergoes changes as a result of this stimulation.
3. The neural network responds in a new way to the environment, because of the changes that
have occurred in its internal structure. Let w_{kj}(n) denote the value of the synaptic weight w_{kj}
at time n. At time n an adjustment \Delta w_{kj}(n) is applied to the synaptic weight w_{kj}(n), yielding
the updated value

w_{kj}(n+1) = w_{kj}(n) + \Delta w_{kj}(n)

A prescribed set of well-defined rules for the solution of a learning problem is called a learning
algorithm. As one would expect, there is no unique learning algorithm for the design of neural
networks. Rather, we have a “kit of tools” represented by a diverse variety of learning
algorithms, each of which offers advantages of its own. Basically, learning algorithms differ
from each other in the way in which the adjustment \Delta w_{kj} to the synaptic weight w_{kj} is
formulated.
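
One of the simplest such rules is the error-correction (delta) rule, in which the adjustment is
proportional to the error signal and the input, \Delta w_{kj}(n) = \eta e(n) x_j(n). A minimal MATLAB
sketch of this update for a single linear neuron is given below; the learning rate, input vector and
desired response are assumed illustration values, not project data.

  % Error-correction (delta rule) weight update for one linear neuron.
  eta = 0.01;               % learning rate (assumed)
  x   = [0.5; 1.2; -0.3];   % input signals x_j (assumed)
  d   = 0.8;                % desired response (assumed)
  w   = zeros(3,1); b = 0;  % initial synaptic weights and bias
  for n = 1:100             % repeated presentation of the training sample
      y = w.'*x + b;        % actual response of the neuron
      e = d - y;            % error signal e(n) = d(n) - y(n)
      w = w + eta*e*x;      % delta w_kj(n) = eta * e(n) * x_j(n)
      b = b + eta*e;        % bias updated with the same rule
  end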

3.1.4 Back-Propagation Networks

Multilayer perceptrons have been applied successfully to solve some difficult and diverse
problems by training them in a supervised manner with a highly popular algorithm known as
the error back-propagation algorithm. This algorithm is based on the error-correction learning
rule. Basically, the error back-propagation process consists of two passes through the different
layers of the network: a forward pass and a backward pass. In the forward pass, an activity pattern
(input vector) is applied to the sensory nodes of the network, and its effect propagates through
the network, layer by layer. Finally, a set of outputs is produced as the actual response of the
network. During the forward pass the synaptic weights of the network are all fixed. During the
backward pass, on the other hand, the synaptic weights are all adjusted in accordance with the
error-correction rule. Specifically, the actual response of the network is subtracted from a
desired (target) response to produce an error signal. This error signal is then propagated
backward through the network, against the direction of the synaptic connections - hence the name
“error back-propagation”. The synaptic weights are adjusted so as to make the actual response
of the network move closer to the desired response. The error back-propagation algorithm is also
referred to in the literature as the back-propagation algorithm, or simply back-prop. The
feed-forward back-propagation neural network shown in Fig. 05 is fully connected, which means
that a neuron in any layer is connected to all neurons in the previous layer. Signal flow through
the network progresses in a forward direction, from left to right and on a layer-by-layer basis.
A minimal sketch of one forward and one backward pass is given after the figure.

Fig. 05. Back-Propagation Networks
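
The two passes can be illustrated with a minimal MATLAB sketch for a network with one hidden
layer of sigmoid units; the layer sizes, random data and learning rate below are assumptions for
illustration only and do not represent the project's network.

  % One forward pass and one backward pass of error back-propagation
  % for a small 3-4-2 network (assumed sizes and random data).
  sig = @(z) 1./(1 + exp(-z));             % sigmoid activation function
  x  = rand(3,1);  t = rand(2,1);          % one input/target pair (assumed)
  W1 = randn(4,3); b1 = zeros(4,1);        % input-to-hidden weights and biases
  W2 = randn(2,4); b2 = zeros(2,1);        % hidden-to-output weights and biases
  eta = 0.1;                               % learning rate (assumed)
  % Forward pass: signals propagate layer by layer with the weights fixed.
  h = sig(W1*x + b1);                      % hidden-layer response
  y = sig(W2*h + b2);                      % actual response of the network
  % Backward pass: the error signal propagates against the connections.
  e      = t - y;                          % error signal at the output layer
  delta2 = e .* y .* (1 - y);              % local gradient, output layer
  delta1 = (W2.'*delta2) .* h .* (1 - h);  % local gradient, hidden layer
  % Weight adjustments (error-correction rule) move y closer to t.
  W2 = W2 + eta*delta2*h.';   b2 = b2 + eta*delta2;
  W1 = W1 + eta*delta1*x.';   b1 = b1 + eta*delta1;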

3.1.5 Stopping Criteria

The back-propagation algorithm cannot, in general, be shown to converge, nor are there well-defined
criteria for stopping its operation. Rather, there are some reasonable criteria, each with
its own practical merit, which may be used to terminate the weight adjustments (a sketch of the
corresponding toolbox training parameters is given after the list below). To formulate
such a criterion, the logical thing to do is to think in terms of the unique properties of a local or
global minimum of the error surface.


1. The back-propagation algorithm is considered to have converged when the Euclidean norm
of the gradient vector reaches a sufficiently small gradient threshold. The drawback of this
convergence criterion is that, for successful trials, learning times may be long.
2. The back-propagation algorithm is considered to have converged when the absolute rate of
change in the average squared error per epoch is sufficiently small. Typically, the rate of
change in the average squared error is considered to be small enough if it lies in the range of 0.1
to 1 percent per epoch.
3. The back-propagation algorithm is terminated when the weight updates are sufficiently
small.
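
In the MATLAB Neural Network Toolbox used later in this work, such stopping criteria appear as
training parameters of the network object; a minimal sketch with assumed values (not the
project's settings) is:

  net = fitnet(10, 'trainlm');       % fitting network, hidden-layer size assumed
  net.trainParam.min_grad = 1e-7;    % stop when the gradient norm is small enough
  net.trainParam.goal     = 1e-4;    % stop when the performance (MSE) reaches the goal
  net.trainParam.epochs   = 1000;    % upper limit on the number of training epochs
  net.trainParam.max_fail = 6;       % early stopping after repeated validation failures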

3.2 Introduction to NN-Tool

Neural Network Toolbox™ provides algorithms, pre-trained models, and apps to create, train,
visualize, and simulate both shallow and deep neural networks. You can perform
classification, regression, clustering, dimensionality reduction, time-series forecasting, and
dynamic system modeling and control.
Deep learning networks include convolutional neural networks (ConvNets, CNNs), directed
acyclic graph (DAG) network topologies, and autoencoders for image classification,
regression, and feature learning. For time-series classification and regression, the toolbox
provides long short-term memory (LSTM) deep learning networks. You can visualize
intermediate layers and activations, modify network architecture, and monitor training
progress.
For small training sets, you can quickly apply deep learning by performing transfer learning
with pre-trained deep network models (including Inception-v3, ResNet-50, ResNet-101,
GoogLeNet, AlexNet, VGG-16, and VGG-19) and models imported from TensorFlow-Keras
or Caffe.
To speed up training on large datasets, you can distribute computations and data across
multicore processors and GPUs on the desktop (with Parallel Computing Toolbox™), or
scale up to clusters and clouds, including Amazon EC2® P2, P3, and G3 GPU instances.
A few of the new features and applications introduced with this version of the Neural
Network Toolbox are discussed below.
3.3 Neural Network Applications
Nowadays neural networks are adopted in various industries and institutes to predict the results
of complex problems.


Business Applications
The 1988 DARPA Neural Network Study lists various neural network applications,
beginning in about 1984 with the adaptive channel equalizer. This device, which is an
outstanding commercial success, is a single neuron network used in long-distance telephone
systems to stabilize voice signals. The DARPA report goes on to list other commercial
applications, including a small word recognizer, a process monitor, a sonar classifier, and a
risk analysis system.
Neural networks have been applied in many other fields since the DARPA report was written.
A list of some applications mentioned in the literature follows.

Aerospace
High performance aircraft autopilot, flight path simulation, aircraft control systems, autopilot
enhancements, aircraft component simulation, aircraft component fault detection.

Automotive
Automobile automatic guidance system, warranty activity analysis

Banking
Check and other document reading, credit application evaluation

Credit Card Activity Checking


Neural networks are used to spot unusual credit card activity that might possibly be
associated with loss of a credit card

Defense
Weapon steering, target tracking, object discrimination, facial recognition, new kinds of
sensors, sonar, radar and image signal processing including data compression, feature
extraction and noise suppression, signal/image identification.

Electronics
Code sequence prediction, integrated circuit chip layout, process control, chip failure
analysis, machine vision, voice synthesis, nonlinear modeling.


Entertainment
Animation, special effects, market forecasting.

Financial
Real estate appraisal, loan advisor, mortgage screening, corporate bond rating, credit-line use
analysis, portfolio trading program, corporate financial analysis, currency price prediction.

Industrial
Neural networks are being trained to predict the output gasses of furnaces and other industrial
processes. They then replace complex and costly equipment used for this purpose in the past.

Insurance
Policy application evaluation, product optimization.

Manufacturing
Manufacturing process control, product design and analysis, process and machine diagnosis,
real-time particle identification, visual quality inspection systems, beer testing, welding
quality analysis, paper quality prediction, computer-chip quality analysis, analysis of
grinding operations, chemical product design analysis, machine maintenance analysis, project
bidding, planning and management, dynamic modeling of chemical process system.

Medical
Breast cancer cell analysis, EEG and ECG analysis, prosthesis design, optimization of
transplant times, hospital expense reduction, hospital quality improvement, emergency-room
test advisement.

Oil and Gas


Exploration.

Robotics
Trajectory control, forklift robot, manipulator controllers, vision systems.


Speech
Speech recognition, speech compression, vowel classification, text-to-speech synthesis.

Securities
Market analysis, automatic bond rating, stock trading advisory systems.

Telecommunications
Image and data compression, automated information services, real-time translation of spoken
language, customer payment processing systems.

Transportation
Truck brake diagnosis systems, vehicle scheduling, routing systems.

Summary
The list of additional neural network applications, the money that has been invested in neural
network software and hardware, and the depth and breadth of interest in these devices have
been growing rapidly. The authors hope that this toolbox will be useful for neural network
educational and design purposes within a broad field of neural network applications.


CHAPTER 4

METHODOLOGY

An eight-step work methodology was adopted for the present study. The following chart shows
an overview of the methodology followed.

Problem Identification → Collection of Papers → Literature Survey → ANN Study →
Data Collection → ANN Modelling → Validation → Result and Conclusion

4.1 Problem Identification

Foundation design has various variables and requires some experience to try out sizes
satisfying the codal provisions. The spreadsheets and software available in industry are formula
based, in which the designer has to put in values based on his previous experience and the
tentative guidelines provided by that interface. Artificial neural networks are more effective
because the model has both the data of the foundations and the correlations between the variables.
The objective of this work is to develop a hybrid neural network, i.e. a genetic algorithm based
neural network model, for the design of foundations subjected to various loading and
different material conditions. This requires a comprehensive set of examples that cover the
various parameters influencing the design of a foundation.


The parameters considered as input data to the neuron system for the design of a foundation are
mentioned below:

A) Material Properties
a. Safe Bearing Capacity of foundation material.
b. Characteristic strength of concrete.
c. Characteristic strength of Steel.
B) Loading
a. Axial Load
b. Moment in Shorter Direction
c. Moment in Longer Direction
C) Column Sizes

The output or target sets for the foundation design are mentioned below; a sketch of how these inputs and targets are arranged as data matrices is given after the lists.

A) Foundation Size
a. Width
b. Length
c. Depth
B) Reinforcement requirement
a. In shorter direction
b. In longer direction
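
As an illustration of how these parameters map onto the training data, a minimal MATLAB sketch
is given below; the variable names are assumptions, and the three example columns are taken from
the first three rows of Table 1 and Table 2 (one column per foundation, one row per parameter).

  % Inputs  (8 x N): column b, column d, SBC, fck, fy, Pu, Mux, Muy
  % Targets (5 x N): footing B, footing L, depth d, steel Ptx, steel Pty
  inputs  = [ 230  300  230;        % column width b (mm)
              380  450  450;        % column depth d (mm)
              300  250  250;        % SBC
               25   30   20;        % fck
              415  415  415;        % fy
             1400 2100 1200;        % Pu (kN)
                0    0    0;        % Mux (kN-m)
                0    0    0];       % Muy (kN-m)
  targets = [2250 3050 2250;        % footing width B (mm)
             2400 3200 2450;        % footing length L (mm)
              700  900  650;        % footing depth d (mm)
           878.86 1023.75 788.69;   % steel Ptx
           878.86 1023.75 805.02];  % steel Pty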

4.2 Collection of Papers

Papers containing guidelines for neural network models and applications of neuron systems
in engineering, specifically civil engineering, were listed from different journals'
websites.
Papers related to the MATLAB software were also listed for the interface study of the neural
networking software.


4.3 Literature Review

“Applying Artificial Neural Networks for analysis of geotechnical problems” by Maria J.


Sulewska, “Artificial Neural Networks in Structural Engineering: Concept and Applications”
by Abbes Berrais, “Prediction of Deflection of Cantilever Beam of Any Arbitrary Length
Using Soft Computation Technique” by Mr. Kumar Gaurav, Dr. Sankha Bhaduri and Dr. Arbind
Kumar, and “Shape Optimization of Cantilever Beams Using Neural Network” by Sangeeta
Yadav, K. K. Pathak and Rajesh Shrivastava were some of the papers referred to for the study of
neural networks and found useful for the project work.

4.4 ANN Study

A neural network has various models depending on the problem statement, so the choice of
architecture for network modeling is worked out here; the feed-forward network and the
back-propagation algorithm were studied.
A simple feed-forward network is useful where the input and output are linear functions of each
other; the network studies the correlation between both and computes the result. Mostly a single
hidden layer is used for a feed-forward network. For problems such as foundation design, where
output data derived from the input data may be used as input for another output, back
propagation is required for deriving the results. In the back-propagation algorithm multiple
neuron layers are present and the data is processed in both directions.

4.5 Data Collection

Various foundations subjected to different loading conditions and different material properties
are designed to define the inputs and targets of the neural network model. The type of foundation
is restricted to open shallow foundations, and the data range is defined as mentioned below.

A) Material Properties
a. Characteristic strength of concrete: M20 to M30
b. Characteristic strength of steel: Fe415 and Fe500
B) Loading
a. Axial load: 50 kN to 10000 kN
b. Moment in shorter direction: 0 kN-m to 500 kN-m
c. Moment in longer direction: 0 kN-m to 500 kN-m


For each set, the size of the foundation, i.e. length, width and depth, and the reinforcement
required to satisfy one-way and punching shear checks as per IS 456:2000 are obtained.

4.6 ANN Modeling

MATLAB's NNTOOL interface is used to model the neuron system. A data set is provided to the
network, from which the inputs (Table 1) and the target data (Table 2) are defined.
A fitnet architecture with hidden-layer neurons is adopted, and the Levenberg-Marquardt
training parameters are used for training the network. The data is divided randomly for network
training, validation and testing: 70% of the data is defined for training, 15% for
validation and 15% for testing. The mean square error (MSE) and regression coefficient (R) are
worked out for the model. The MSE should be as small as possible to achieve accuracy between
the outputs and targets, and R falls between 0 and 1, where a value near unity means a closer
relationship.
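
A minimal MATLAB sketch of this workflow is given below; the hidden-layer size (10) and the
variable names inputs/targets are assumptions for illustration, with inputs arranged as an 8 x N
matrix and targets as a 5 x N matrix as in Tables 1 and 2.

  net = fitnet(10, 'trainlm');              % fitting network trained with Levenberg-Marquardt
  net.divideParam.trainRatio = 0.70;        % 70% of the data for training
  net.divideParam.valRatio   = 0.15;        % 15% for validation
  net.divideParam.testRatio  = 0.15;        % 15% for testing
  [net, tr] = train(net, inputs, targets);  % train the network on the design examples
  outputs  = net(inputs);                   % network response on the whole data set
  mseValue = perform(net, targets, outputs);     % mean square error (MSE)
  R = regression(targets, outputs, 'one');       % overall regression coefficient R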

A neural network model can be trained by three different algorithms (a short snippet showing how
the algorithm is selected follows this list):

1) Levenberg-Marquardt algorithm - used for fast learning; training automatically stops when
generalization stops improving, as indicated by an increase in the mean square error of the
validation samples.
2) Bayesian regularization algorithm - typically requires more time but can result in good
generalization for difficult, small or noisy datasets; training stops according to adaptive
weight minimization (regularization).
3) Scaled conjugate gradient algorithm - requires less memory but may take more time; training
automatically stops when generalization stops improving, as indicated by an increase in the
mean square error of the validation samples.
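
The choice among these algorithms is made through the network's training function; a minimal
sketch is shown below (the Levenberg-Marquardt option is the one used in this work, the others
are shown commented out as alternatives):

  net.trainFcn = 'trainlm';      % 1) Levenberg-Marquardt
  % net.trainFcn = 'trainbr';    % 2) Bayesian regularization
  % net.trainFcn = 'trainscg';   % 3) Scaled conjugate gradient
  net = train(net, inputs, targets);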

4.7 Validation

The Neural Network Tool (NNTOOL) is used for simulating the results. With additional inputs,
the predictions of the Neural Network (NN) model are worked out, and the predictions are
cross-checked against the output of the conventional design spreadsheet.
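
A sketch of this simulation step for one new (assumed) foundation input, ordered as in Table 1, is:

  newInput  = [300; 600; 250; 25; 500; 1800; 0; 0];  % b, d, SBC, fck, fy, Pu, Mux, Muy (assumed)
  predicted = net(newInput);                         % predicted B, L, depth d, Ptx, Pty (as in Table 2)
  % 'predicted' is then compared with the conventional spreadsheet design for the same input.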

4.8 Result and Conclusion

The regression analysis in the performance chart prepared by MATLAB's NNTOOL is checked for
the regression of the training samples, validation samples and testing samples, which indicates
the degree of correlation between the input and output data.


CHAPTER 5

RESULT & DISCUSSION

In this study the neural network model is formed using MATLAB's Neural Network Fitting
tool (NNFTOOL) and Neural Network toolbox (NNTOOL). Design examples of various
foundations are calculated in spreadsheets using Microsoft Excel, and the data derived from the
spreadsheets is used as input and output data for training, validation and testing of the network.
Regression analysis is done to correlate the inputs and outputs, after which the prediction for
sample input data is computed.
The parameters and design calculations for a typical footing are as follows; the yellow
highlighted boxes are the input and output data for the neural network model.

Fig. 06. Typical Footing Design


According to the above spreadsheet, 119 foundation calculations were worked out; their input and
output tables are listed below.

Table – 1. Training set (Input Data)

Input data
Sr. No. | Column b (mm) | Column d (mm) | Sbc | Fck (N/mm2) | Fy (N/mm2) | Pu (kN) | Mux (kN-m) | Muy (kN-m)
1 230 380 300 25 415 1400 0 0
2 300 450 250 30 415 2100 0 0
3 230 450 250 20 415 1200 0 0
4 230 600 300 20 500 2300 0 0
5 300 300 300 20 500 500 0 0
6 380 750 300 25 500 4550 0 0
7 230 450 200 20 415 750 0 0
8 230 380 200 20 500 900 0 0
9 300 600 350 25 415 1450 0 0
10 300 530 300 20 415 1050 0 0
11 230 380 250 25 500 800 0 0
12 300 450 300 20 415 1300 0 0
13 230 600 350 25 500 1550 0 0
14 300 750 300 25 415 1800 0 0
15 230 450 250 20 415 950 0 0
16 900 1200 500 30 500 11772 350.3 503.7
17 600 600 500 30 500 735.9 53.3 23.2
18 750 750 500 30 500 1921.3 117.6 62
19 400 900 500 30 520 3634.4 80.6 23
20 750 750 500 30 500 4822.5 252.7 58.3
21 750 1550 500 30 500 7839.7 233 1204
22 750 750 500 30 500 5921.5 133.8 24.2
23 450 1400 500 30 500 5996.1 436.5 82
24 450 1300 500 30 500 4498.7 116.2 51.4
25 750 750 500 30 500 4873.3 75 134
26 900 900 500 30 500 6652.6 413.7 473.5
27 900 900 500 30 500 7779.8 400.9 248.5
28 900 1850 500 30 500 13574.3 483.3 3456
29 900 900 500 30 500 6818.3 378.2 393.5
30 900 900 500 30 500 6853 218.5 428
31 600 1800 500 30 500 9596.1 166 1331
32 300 900 500 30 500 1965.5 21 168.5
33 350 1000 500 30 500 2703.3 103.4 42.9

34 450 1100 500 30 500 4843 262 38
35 300 750 500 30 500 1662 82 19.5
36 300 1300 500 30 500 2472.4 34.4 38.5
37 750 750 500 30 500 5755.6 201.3 170
38 350 1000 500 30 500 2637.9 130 20
39 300 750 500 30 500 573.7 78.5 18
40 300 750 500 30 500 816.8 38.5 9
41 600 600 500 30 500 836 80 133.3
42 350 1200 500 30 500 3458.6 298.3 8
43 300 900 500 30 500 345 19 4.5
44 450 1100 500 30 500 5119 393 32
45 750 900 500 30 500 6845.7 174.5 228
46 750 750 500 30 500 5129 219 231
47 450 900 500 30 500 3112 88 27
48 300 750 250 20 500 2923 0 0
49 380 750 250 25 500 3567.6 0 0
50 300 750 250 25 500 1969.3 0 0
51 300 750 250 25 500 2240.9 0 0
52 300 750 250 25 500 2632 0 0
53 600 600 250 25 500 2379.3 0 0
54 380 900 250 25 500 4126.7 0 0
55 300 750 250 25 500 2566.6 0 0
56 300 750 250 25 500 2311.5 0 0
57 300 750 250 25 500 2787 0 0
58 380 900 250 25 500 4376.5 0 0
59 380 900 250 25 500 3306.1 0 0
60 300 750 250 25 500 3032.8 0 0
61 600 900 250 25 500 5686.7 0 0
62 300 750 250 25 500 2693.3 0 0
63 450 750 250 25 500 4825.8 0 0
64 450 750 250 25 500 4762 0 0
65 450 750 250 25 500 3619.5 0 0
66 300 750 250 25 500 3237.1 0 0
67 300 750 250 25 500 2519.7 0 0
68 300 750 250 25 500 2361.8 0 0
69 300 750 250 25 500 2742.2 0 0
70 450 750 250 25 500 3298.3 0 0
71 380 750 250 30 500 3256.8 0 0
72 300 600 250 30 500 400.7 0 0
73 380 750 250 30 500 3658.7 0 0

74 300 750 250 30 500 2428.2 0 0
75 300 750 250 30 500 2317.8 0 0
76 300 750 250 30 500 2491.8 0 0
77 300 750 250 30 500 2286.2 0 0
78 300 750 250 30 500 2737.9 0 0
79 300 750 250 30 500 2724.4 0 0
80 300 750 250 30 500 2203.5 0 0
81 300 300 250 30 500 1272.7 0 0
82 600 600 150 30 415 533.3 0 0
83 600 1250 150 30 415 533.3 0 0
84 450 450 150 30 415 533.3 0 0
85 450 950 150 30 415 533.3 0 0
86 350 600 150 30 415 533.3 0 0
87 600 600 150 30 415 833.3 0 0
88 600 1250 150 30 415 833.3 0 0
89 300 600 150 30 415 833.3 0 0
90 450 450 150 30 415 833.3 0 0
91 450 950 150 30 415 833.3 0 0
92 600 1000 150 30 415 833.3 0 0
93 450 1000 150 30 415 833.3 0 0
94 500 1150 150 30 415 833.3 0 0
95 400 1200 150 30 415 833.3 0 0
96 600 600 150 30 415 1166.7 0 0
97 300 600 150 30 415 1166.7 0 0
98 350 600 150 30 415 1166.7 0 0
99 450 1000 150 30 415 1166.7 0 0
100 450 600 150 30 415 1166.7 0 0
101 450 450 150 30 415 1166.7 0 0
102 400 1000 150 30 415 1166.7 0 0
103 600 600 150 30 415 1666.7 0 0
104 600 1250 150 30 415 1666.7 0 0
105 500 1150 150 30 415 1666.7 0 0
106 400 1200 150 30 415 1666.7 0 0
107 450 1000 150 30 415 1666.7 0 0
108 200 1400 150 30 415 1666.7 0 0
109 600 600 150 30 415 2066.7 0 0
110 600 1250 150 30 415 2066.7 0 0
111 450 1000 150 30 415 2066.7 0 0
112 450 600 150 30 415 2066.7 0 0
113 450 450 150 30 415 2066.7 0 0

114 600 600 150 30 415 2400 0 0
115 600 1250 150 30 415 2400 0 0
116 450 600 150 30 415 2400 0 0
117 600 600 150 30 415 2666.7 0 0
118 600 600 150 30 415 2833.3 0 0
119 600 600 150 30 415 3000 0 0

Table – 2. Training set (Output/Target data)

Sr. No.   Footing B   Footing L   d   Steel Ptx   Steel Pty
1 2250 2400 700 878.86 878.86

2 3050 3200 900 1023.75 1023.75

3 2250 2450 650 788.69 805.02

4 2750 3100 850 1020 1020

5 1400 1400 400 480 480

6 3950 4400 1150 1416 1416

7 1950 2150 500 629.16 644.53

8 2200 2350 550 613.81 613.81

9 2000 2300 650 821.22 821.22

10 1900 2100 550 121.62 126.32

11 1850 2000 500 600 600

12 2350 2500 650 780 780

13 2050 2400 650 780 783.57

14 2400 2850 750 900 900

15 2000 2200 575 690 702.3

16 4900 5200 1700 3247.45 3495.83

17 1500 1500 450 583.143 499.229

18 2200 2200 650 1110.29 1020.95

19 2500 3000 1000 1923.45 1410

20 3300 3300 1100 2151.56 1965.63

21 4000 4800 1400 2642.31 2373.62

22 3500 3500 1150 2542.94 2435.88

23 3300 4250 1300 2709.06 1671.11

24 2700 3550 1000 2158.21 1465.22

25 3200 3200 1050 2151.94 2090.97

26 4000 4000 1250 2563.33 2645.38

27 4200 4200 1400 2645.12 2739.84

28 5500 6450 2000 3928.37 2818.9

29 4000 4000 1250 2680 2680

30 4000 4000 1250 2732.56 2578.87

31 4200 5400 1600 3186.83 2352.26

32 2000 2600 750 1531.05 1062.61

33 2200 2850 900 1311.67 1265.64

34 3000 3650 1200 1870.69 1852.68

35 1800 2250 700 1087.06 1125.44

36 2000 3000 800 1230.11 1066.55

37 3600 3600 1200 445.714 757.4

38 2200 2850 800 1507.62 1476.36

39 1400 1900 500 646.154 633.333

40 1300 1750 400 650 654.545

41 1800 1800 550 660 660

42 2600 3450 1000 1573.32 1650.15

43 1200 1800 450 589.091 571.765

44 3200 3850 1200 2131.45 2136.61

45 3800 3950 1250 2860.24 2643.82

46 3500 3500 1150 2234.18 2223.09

47 2400 2850 850 1641.26 1579.31

48 3250 3700 1250 1717.46 1212.74

49 3600 4000 1250 1652.72 1651.01

50 2600 3050 900 1063.8 1226.95

51 2800 3250 1000 1130.24 1268.86

52 3050 3500 1150 1184.41 1304.41

53 3100 3100 900 1308 1308

54 3850 4400 1400 1588.34 1671.87

55 3000 3450 1100 1229.25 1336.03

56 2850 3300 1050 1092.56 1240.73

57 3150 3600 1200 1223.51 1330

58 4000 4500 1400 1815.45 1812.62

59 3400 3950 1150 1493.64 1609.56

60 3250 3700 1250 1321.67 1400.79

61 4650 4950 1450 2185.57 2203.3

62 3100 3550 1150 1248.99 1345.53

63 4250 4550 1500 1879.33 1869.4

64 4250 4550 1400 2175.51 2040.48

65 3700 4000 1200 1680.26 1713.33

66 3400 3850 1200 1845.07 1618.61

67 3000 3450 1000 1553.43 1487.24

68 2900 3350 950 1507.69 1462.43

69 3100 3550 1100 1186.09 1456.47

70 3500 3800 1100 1380 1691.65

71 3450 3850 1150 1493.6 1591.34

72 1200 1500 400 414.286 654.545

73 3650 4050 1250 1583.91 1661.13

74 2900 3350 1000 1214.77 1375.18

75 2850 3300 1000 1118.13 1298.91

76 3000 3450 1100 1092.54 1268

77 2850 3300 1000 1097.81 1279.64

78 3100 3550 1100 1283.19 1420.83

79 3100 3550 1200 1003.77 1267.43

80 2800 3250 950 1126.51 1303.52

81 2300 2300 750 1031.49 1031.49

82 2000 2000 450 568.421 568.421

83 2000 2700 450 561.154 568.421

84 2000 2000 500 631.579 631.579

85 1800 2300 450 564.545 571.765

86 2000 2300 550 690 694.737

87 2400 2400 600 751.304 751.304

88 2400 3200 600 743.226 751.304

89 2300 2600 650 811.2 815.455

90 2300 2300 600 752.727 752.727

91 2200 2700 600 747.692 754.286

92 2300 2700 550 685.385 690

93 2200 2800 600 746.667 754.286

94 2200 2900 600 745.714 754.286

95 2200 3000 600 744.828 744.828

96 3000 3000 750 931.034 931.034

97 2700 3000 750 931.034 931.034

98 2700 3000 750 931.034 931.034

99 2700 3300 750 928.125 928.125

100 2900 3100 800 992 992

101 2900 2900 800 994.286 994.286

102 2700 3300 700 866.25 866.25

103 3400 3400 900 1112.73 1112.73

104 3200 3900 850 1046.84 1046.84

105 3200 3900 850 1046.84 1046.84

106 3100 3900 850 1046.84 1046.84

107 3100 3700 850 1048.33 1048.33

108 2800 4000 850 1046.15 1046.15

109 3800 3800 1000 1232.43 1232.43

110 3500 4200 900 1106.34 1106.34

111 3500 4100 950 1168.5 1168.5

112 3800 3900 1050 1293.16 1293.16

113 3800 3800 1050 1294.05 1294.05

114 4000 4000 1050 1292.31 1292.31

115 3700 4400 1000 1227.91 1227.91

116 4000 4200 1100 1352.2 1352.2

117 4300 4300 1150 1412.86 1412.86

118 4400 4400 1150 1412.09 1412.09

119 4400 4400 1200 1473.49 1473.49


Fig. 07. GA/BPN Hybrid Model Architecture

The typical neural network architecture for the given inputs and outputs is shown above; the
figure illustrates the flow of data through the input layer, the hidden layer and the output
layer. The input layer has 8 variables, the hidden layer has 10 neurons and the output layer
has 5 targets. The model is prepared in MATLAB 2016a using the 119 samples, each with eight
input values and five output/target values.

A two-layer network is developed, with one hidden layer of ten neurons and one output layer
of five neurons representing the targets; the eight input variables are processed through the
ten-neuron hidden layer and the five-neuron output layer to produce the final outputs.
Back-propagation is used to train the network. The performance chart and the regression
charts based on the inputs, the target data and the training of the network are then prepared.
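
A minimal sketch of this two-layer architecture is given below; the random placeholder arrays
are illustrative only and serve to fix the layer dimensions, they are not the project data.

% Two-layer fitting network: 10 tansig hidden neurons, 5 linear output neurons.
net = fitnet(10, 'trainlm');
net = configure(net, rand(8, 119), rand(5, 119));   % placeholder data, only to set sizes
size(net.IW{1,1})   % 10 x 8  : input-to-hidden weight matrix
size(net.LW{2,1})   % 5 x 10  : hidden-to-output weight matrix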


Fig. 08. Neural Network fitting toolbox data interface


The data is fed into the MATLAB workspace; the input data is assigned to the variable i and
the target data to the variable t.
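
A small sketch of the expected workspace shapes, assuming the Table 1 and Table 2 values have
been imported as numeric arrays named inputArray and targetArray (hypothetical names):

% inputArray  : 119 x 8 array holding Table 1 (b, d, Sbc, Fck, Fy, Pu, Mux, Muy)
% targetArray : 119 x 5 array holding Table 2 (B, L, d, Ptx, Pty)
i = inputArray';    % 8 x 119 : NNFTOOL expects samples as columns
t = targetArray';   % 5 x 119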

Fig. 09. Neural Network Architecture


Fig. 10. Neural Network Data Assigning


Out of the 119 input/output samples, 70% (83 samples) are assigned for training of the
network, 15% (18 samples) for validation and 15% (18 samples) for testing.
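
A sketch of the corresponding data-division settings; the same ratios appear in the generated
script reproduced in the appendix.

net.divideFcn = 'dividerand';          % split the 119 samples at random
net.divideParam.trainRatio = 70/100;   % ~83 samples for training
net.divideParam.valRatio   = 15/100;   % ~18 samples for validation
net.divideParam.testRatio  = 15/100;   % ~18 samples for testing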

Fig. 11. Training interface of NNFTOOL


The data is trained with the Levenberg-Marquardt algorithm, as it is best suited to this
dataset, resulting in a regression of 99.80% for the training data, 99.47% for the validation
data and 96.81% for the testing data; the overall regression of the model is 98.96%.
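
A sketch of how these subset regressions can be recomputed from the training record tr,
assuming i, t and the trained net are available in the workspace:

y = net(i);                                                       % outputs for all 119 samples
Rtrain = regression(t(:, tr.trainInd), y(:, tr.trainInd), 'one'); % ~0.9980
Rval   = regression(t(:, tr.valInd),   y(:, tr.valInd),   'one'); % ~0.9947
Rtest  = regression(t(:, tr.testInd),  y(:, tr.testInd),  'one'); % ~0.9681
Rall   = regression(t, y, 'one');                                 % ~0.9896 overall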

Fig. 12. Regression analysis of NNFTOOL

To simulate the NN model for the prediction of a new data set, NNTOOL is used; the new input
data is added as the variable ip, and the trained network net is imported from the MATLAB
workspace into the NNTOOL interface.
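
A sketch of this prediction step; the ip matrix below is simply Table 3 arranged column-wise
(one column per new footing), and net is assumed to be the trained network.

ip = [ 300    300    300    300;      % column b
       600    750    600    600;      % column d
       280    280    280    280;      % Sbc
        30     30     30     30;      % Fck
       500    500    500    500;      % Fy
       922.4 1341.8  935.2 1562.7;    % Pu
         0      0      0      0;      % Mux
         0      0      0      0];     % Muy
op = sim(net, ip);                    % 5 x 4 predictions: B, L, d, Ptx, Pty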


Fig. 13. Data Manager interface of NNTOOL

Training of the model is then carried out, with the number of epochs set to 1000 to obtain
higher accuracy in prediction. After training, the regression charts are prepared.
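
A sketch of this setting; the epoch limit is a training parameter of the imported network.

net.trainParam.epochs = 1000;   % raise the iteration limit for higher accuracy
[net, tr] = train(net, i, t);   % retrain with the same input and target data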


Fig. 14. Training interface of NNTOOL


The new inputs, each with the eight input parameters, are supplied to the network as ip and
are listed in Table 3; the corresponding outputs worked out by the conventional spreadsheet
method are presented in Table 4, and the predictions given by the neural network model are
presented in Table 5. A short comparison of the two follows Table 5.


Fig. 15. Input provided to NN model

Table – 3. New Input data

Sr. No.   Column b   Column d   Sbc   Fck   Fy   Pu   Mux   Muy
1 300 600 280 30 500 922.4 0 0
2 300 750 280 30 500 1341.8 0 0
3 300 600 280 30 500 935.2 0 0
4 300 600 280 30 500 1562.7 0 0
Table – 4. Output data (Calculated by Spreadsheet)

Sr. No.   Footing B   Footing L   d   Steel Ptx   Steel Pty
1 1700 2000 600 760 765
2 2000 2450 700 876 899
3 1800 2100 625 788 795
4 2300 2600 750 1034 1177.727

Table – 5. Predicted data (NNTOOL prediction)

Sr. No.   Footing B   Footing L   d   Steel Ptx   Steel Pty
1 1783.469 1974.068 622.251 768.5949 944.1192
2 2037.333 2419.215 740.56 894.7507 1047.758
3 1794.538 1985.159 626.287 773.304 948.8691
4 2369.733 2558.456 832.8671 987.1571 1179.351
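
As a quick check, the sketch below compares the spreadsheet results of Table 4 with the
NNTOOL predictions of Table 5 as percentage differences; the values are copied directly from
the two tables above.

calc = [1700     2000     600      760       765;        % Table 4 (conventional design)
        2000     2450     700      876       899;
        1800     2100     625      788       795;
        2300     2600     750     1034      1177.727];
pred = [1783.469 1974.068 622.251  768.5949  944.1192;   % Table 5 (NN prediction)
        2037.333 2419.215 740.56   894.7507 1047.758;
        1794.538 1985.159 626.287  773.304   948.8691;
        2369.733 2558.456 832.8671 987.1571 1179.351];
pctDiff = 100 * (pred - calc) ./ calc;   % rows = samples, columns = B, L, d, Ptx, Pty
disp(pctDiff)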


Fig. 16. Prediction by NN Model


CHAPTER 6
CONCLUSIONS

The following conclusions can be drawn from this work:

•	The neural network model, trained with the Levenberg-Marquardt error back-propagation
algorithm and having a single hidden layer of ten neurons, could predict the 5 different
outputs for the corresponding 8 inputs with satisfactory performance.

•	The NN model has an overall regression coefficient of about 98.96%.

•	The ANN models, which can easily incorporate additional model parameters, give less
scattered estimates than the other models.

•	From the results obtained it can be concluded that ANN models are suitable and feasible
for modelling complex problems and save significant computational effort compared to
conventional methods. The use of these networks will help in solving more complex problems.

•	Models that use all eight parameters perform better than models with any one parameter
changed or removed, which indicates that all eight parameters are essential for estimating
the foundation design.

FUTURE SCOPE OF WORK

•	The study can be carried out with a larger number of samples to achieve higher accuracy.

•	The study can be carried out with more parameters and properties of different types of
foundation for the design of foundations.

•	The NN model can also be applied to the design of other parts of a structure.


APPENDIX

MATLAB Function

function [Y,Xf,Af] = myNeuralNetworkFunction(X,~,~)


%MYNEURALNETWORKFUNCTION neural network simulation function.
%
% Generated by Neural Network Toolbox function genFunction, 27-Jul-2018 15:34:10.
%
% [Y] = myNeuralNetworkFunction(X,~,~) takes these arguments:
%
% X = 1xTS cell, 1 inputs over TS timesteps
% Each X{1,ts} = 8xQ matrix, input #1 at timestep ts.
%
% and returns:
% Y = 1xTS cell of 1 outputs over TS timesteps.
% Each Y{1,ts} = 5xQ matrix, output #1 at timestep ts.
%
% where Q is number of samples (or series) and TS is the number of timesteps.

%#ok<*RPMT0>

% ===== NEURAL NETWORK CONSTANTS =====

% Input 1
x1_step1.xoffset = [200;300;150;20;415;345;0;0];
x1_step1.gain = [0.00285714285714286; 0.00129032258064516; 0.00571428571428571; 0.2; ...
    0.019047619047619; 0.000151179578662514; 0.00413821642871922; 0.000578703703703704];
x1_step1.ymin = -1;

% Layer 1
b1 = [-3.6358681539633482; -4.1994486260729955; 4.1082077156014014; 1.6160214867297278; ...
    4.1755758645480689; 2.4422435103900049; -1.6836833233742829; -3.3592027398478943; ...
    -0.6707993408634132; -5.8645387033760485];
IW1_1 = [ ...
    -2.7636126888561012 -1.6950089620325763 -0.049083055624589422 2.481939619597147 1.2084682647364615 -2.2119804627977753 2.1784910209678139 -1.3236672186155307; ...
    -0.45895827116011845 -1.547022290828445 8.3955215479646057 -7.7928658592003526 5.741512138281009 -1.2783817418608368 -2.19977272338001 0.56504407700710102; ...
    0.47771680611346079 0.41492993644113307 -2.3595562304169206 -2.2134058021350875 0.54806457499116434 -2.2569625530063968 1.3901382618269194 -0.55927372565314726; ...
    -0.88542068067037516 -0.57620882279678187 1.0567470459010089 0.04232102910556032 0.47054392807487577 -2.9012329928768894 0.083018558899232153 3.1177356083675249; ...
    0.64947237907410749 -1.3642273348514928 0.21072942648011161 0.21196233803961384 -0.64730742571969802 -1.5055808996664226 -1.8529090053822943 7.8708886843366637; ...
    -0.0074075727575568726 -0.0027087281880739354 -0.052528523909706734 -0.024422961247443781 0.059292107595160945 1.241468339565736 0.068550627921031171 1.0260083931522674; ...
    0.29164325380891798 -0.13580837857643427 1.2712236501510834 0.21116228588012481 -0.36517454823499507 -5.0165255868086058 3.8271400791712318 -1.836675953605154; ...
    -0.12995039468730993 1.1600709881954792 -1.4774338437320986 -0.62620307933220776 1.2362653610963086 1.1938359558667795 -0.27736093031698295 -4.2693446709201845; ...
    0.51112755658532694 -0.88358989760059337 -0.2409466920324409 -0.04069504485710905 0.30751108117075115 0.32893654941126094 0.0022149663358370375 -0.88327661488885245; ...
    0.046199703016479604 1.0966090291119985 -2.4961582411232541 2.3607531263590968 3.9559732901156268 0.71140055195781104 2.0162084821389548 -5.8183414278075656];

% Layer 2
b2 = [-1.0328282429162881; -0.88966825254600335; -1.1088581323773337; ...
    -0.69754694913118676; -0.6303273101307526];
LW2_1 = [ ...
    0.0034539395433909881 0.020178884332566868 0.33077500206646704 -0.21219382460923372 -0.0054826761348220759 1.2771926968672829 -0.34184695639721407 -0.27044926252766538 0.072655052996631192 0.0049688370595059143; ...
    0.0143542045678389 0.024831986817717377 0.23878395177712813 -0.21814509325535911 0.027509026663124797 1.1699885981877864 -0.2667847491900841 -0.10522251740277527 -0.28122238835657309 0.018138209530271537; ...
    0.0022672381753999458 -0.0073203816806769401 0.17178314598812594 -0.046900104510032278 -0.073862878098105461 1.3267798864442089 -0.29098469857824943 -0.21998939072839424 -0.01268842560397182 0.029614501454784073; ...
    0.030585519001787048 -0.077168007455176726 -0.3231299658632068 -0.072094854543351961 -0.078450815416541295 1.2713104495838055 0.077113922296766113 -0.027045068580994647 0.10693516816899237 -0.076864425001194336; ...
    -0.024365822903056739 -0.20054737839465339 -0.3557117208391023 0.0049494463184024968 -0.40393252824698028 0.71555015040681647 -0.068747284149930157 -0.25768305382248846 0.33290088483286495 -0.086960785887604225];

% Output 1
y1_step1.ymin = -1;

y1_step1.gain = [0.000465116279069767; 0.000396039603960396; 0.00125; ...
    0.000525382493100014; 0.000593557526664935];
y1_step1.xoffset = [1200;1400;400;121.62;126.32];

% ===== SIMULATION ========

% Format Input Arguments


isCellX = iscell(X);
if ~isCellX, X = {X}; end;

% Dimensions
TS = size(X,2); % timesteps
if ~isempty(X)
Q = size(X{1},2); % samples/series
else
Q = 0;
end

% Allocate Outputs
Y = cell(1,TS);

% Time loop
for ts=1:TS

% Input 1
Xp1 = mapminmax_apply(X{1,ts},x1_step1);

% Layer 1
a1 = tansig_apply(repmat(b1,1,Q) + IW1_1*Xp1);

% Layer 2
a2 = repmat(b2,1,Q) + LW2_1*a1;

% Output 1
Y{1,ts} = mapminmax_reverse(a2,y1_step1);
end

% Final Delay States


Xf = cell(1,0);
Af = cell(2,0);

% Format Output Arguments


if ~isCellX, Y = cell2mat(Y); end


end

% ===== MODULE FUNCTIONS ========

% Map Minimum and Maximum Input Processing Function


function y = mapminmax_apply(x,settings)
y = bsxfun(@minus,x,settings.xoffset);
y = bsxfun(@times,y,settings.gain);
y = bsxfun(@plus,y,settings.ymin);
end

% Sigmoid Symmetric Transfer Function


function a = tansig_apply(n,~)
a = 2 ./ (1 + exp(-2*n)) - 1;
end

% Map Minimum and Maximum Output Reverse-Processing Function


function x = mapminmax_reverse(y,settings)
x = bsxfun(@minus,y,settings.ymin);
x = bsxfun(@rdivide,x,settings.gain);
x = bsxfun(@plus,x,settings.xoffset);
end


NNTOOL Script

% Solve an Input-Output Fitting problem with a Neural Network


% Script generated by Neural Fitting app
% Created 27-Jul-2018 07:34:13
%
% This script assumes these variables are defined:
%
% i - input data.
% t - target data.

x = i;
t = t;

% Choose a Training Function


% For a list of all training functions type: help nntrain
% 'trainlm' is usually fastest.
% 'trainbr' takes longer but may be better for challenging problems.
% 'trainscg' uses less memory. Suitable in low memory situations.
trainFcn = 'trainlm'; % Levenberg-Marquardt backpropagation.

% Create a Fitting Network


hiddenLayerSize = 10;
net = fitnet(hiddenLayerSize,trainFcn);

% Choose Input and Output Pre/Post-Processing Functions


% For a list of all processing functions type: help nnprocess
net.input.processFcns = {'removeconstantrows','mapminmax'};
net.output.processFcns = {'removeconstantrows','mapminmax'};

% Setup Division of Data for Training, Validation, Testing


% For a list of all data division functions type: help nndivide
net.divideFcn = 'dividerand'; % Divide data randomly
net.divideMode = 'sample'; % Divide up every sample
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

% Choose a Performance Function


% For a list of all performance functions type: help nnperformance
net.performFcn = 'mse'; % Mean Squared Error

% Choose Plot Functions


% For a list of all plot functions type: help nnplot


net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
'plotregression', 'plotfit'};

% Train the Network


[net,tr] = train(net,x,t);

% Test the Network


y = net(x);
e = gsubtract(t,y);
performance = perform(net,t,y)

% Recalculate Training, Validation and Test Performance


trainTargets = t .* tr.trainMask{1};
valTargets = t .* tr.valMask{1};
testTargets = t .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,y)
valPerformance = perform(net,valTargets,y)
testPerformance = perform(net,testTargets,y)

% View the Network


view(net)

% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
%figure, ploterrhist(e)
%figure, plotregression(t,y)
%figure, plotfit(net,x,t)

% Deployment
% Change the (false) values to (true) to enable the following code blocks.
% See the help for each generation function for more information.
if (false)
% Generate MATLAB function for neural network for application
% deployment in MATLAB scripts or with MATLAB Compiler and Builder
% tools, or simply to examine the calculations your trained neural
% network performs.
genFunction(net,'myNeuralNetworkFunction');
y = myNeuralNetworkFunction(x);
end
if (false)


% Generate a matrix-only MATLAB function for neural network code
% generation with MATLAB Coder tools.
genFunction(net,'myNeuralNetworkFunction','MatrixOnly','yes');
y = myNeuralNetworkFunction(x);
end
if (false)
% Generate a Simulink diagram for simulation or deployment with
% Simulink Coder tools.
gensim(net);
end


CHAPTER 7
REFERENCES

1) Maria J. Sulewska (2011), "Applying Artificial Neural Networks for Analysis of
Geotechnical Problems." Computer Assisted Mechanics and Engineering Sciences, 18: 231-241.
2) Abbes Berrais, "Artificial Neural Networks in Structural Engineering: Concept and
Applications." JKAU: Eng. Sci., Vol. 12, No. 1, pp. 53-67 (1420 A.H. / 1999 A.D.).
3) Kumar Gaurav, Sankha Bahaduri, Arbind Kumar (2016), "Prediction of Deflection of
Cantilever Beam of any Arbitrary Length Using Soft Computation Technique." International
Journal for Research in Applied Science & Engineering Technology, Vol. 4, Issue III, March 2016.
4) Sangeeta Yadav, K. K. Pathak and Rajesh Shrivastava (2010), "Shape Optimization of
Cantilever Beams using Neural Network." Applied Mathematical Sciences, Vol. 4, No. 32,
pp. 1563-1572, 2010.
5) Ehsan Momeni, Ramli Nazir, Danial Jahed Armaghani, Harnedi Maizir (2015), "Application of
Artificial Neural Network for Predicting Shaft and Tip Resistances of Concrete Piles."
Earth Sci. Res. J., Vol. 19, No. 1 (June 2015): 85-93.
6) H. Sudarsana Rao and B. Ramesh Babu (2007), "Hybrid Neural Network for the Design of Beam
Subjected to Bending and Shear." Sadhana, Vol. 32, Part 5, pp. 577-586.
7) Mohamed A. Shahin, Mark B. Jaksa and Holger R. Maier (2001), "Artificial Neural Network
Applications in Geotechnical Engineering." Australian Geomechanics, March 2001.
8) Lirong Sha, Yang Yue, "Structural Reliability Optimization Design Based on Artificial
Neural Network." Advanced Materials Research, ISSN 1662-8985, Vols. 834-836, pp. 1877-1880.
9) H. Elarabi, S. A. Abdelgalil, "Application of Artificial Neural Network for Prediction of
Sudan Soil Profile." American Journal of Engineering, Technology & Society, 2014; 1(2): 7-10.
10) Wardani S. P. R., Surjandari N. S. and Jajaputra A. A., "Analysis of Ultimate Bearing
Capacity of Single Pile Using the Artificial Neural Networks Approach: A Case Study." 18th
International Conference on Soil Mechanics and Geotechnical Engineering, Paris, 2013.
11) Harnedi Maizir and Khairul Anuar Kassim, "Neural Network Application in Prediction of
Axial Bearing Capacity of Driven Piles." Proceedings of the International MultiConference of
Engineers and Computer Scientists 2013, Vol. I, IMECS 2013, March 13-15, 2013, Hong Kong.
12) Marijana Lazarevska, Milos Knezevic, Meri Cvetkovska, Ana Trombeva-Gavriloska,
"Application of Artificial Neural Networks in Civil Engineering." Technical Gazette, 21,
6(2014), 1353-1359, ISSN 1330-3651 (Print), ISSN 1848-6339 (Online).
13) D.-S. Jeng, D. H. Cha and M. Blumenstein, "Application of Neural Network in Civil
Engineering Problems."
14) H. K. D. H. Bhadeshia, "Neural Networks in Materials Science." ISIJ International,
Vol. 39, pp. 966-979, 1999.
15) H. Sudarsana Rao and B. Ramesh Babu, "Optimized Column Design Using Genetic Algorithm
Based Neural Networks." Indian Journal of Engineering & Materials Sciences, Vol. 13,
pp. 503-511, December 2006.
16) Susan Hentschel Tully, "A Neural Network Approach for Predicting the Structural Behavior
of Concrete Slabs." Thesis, Memorial University, Newfoundland, Canada.
17) Mehmet Avcar, "An Artificial Neural Network Application for Estimation of Natural
Frequencies of Beams." International Journal of Computer Science and Applications, Vol. 6,
No. 6, 2015.
18) Ema Coelho, Paulo Candeias and Artur V. Pinto, "Assessment of the Seismic Behaviour of RC
Flat Slab Building Structures." 13th World Conference on Earthquake Engineering, Vancouver,
B.C., Canada, August 1-6, 2004, Paper No. 2630.
19) D.-S. Jeng, D. H. Cha and M. Blumenstein, "Application of Neural Network in Civil
Engineering Problems."
20) John D. McKinley, "Extracting Pattern from Scattered Data - Applicability of Artificial
Neural Networks to the Interpretation of Bearing Capacity Data." CUED/D-SOILS/TR.299, 1996.
21) L. A. Dobrzański, R. Honysz, "Application of Artificial Neural Networks in Modelling of
Normalised Structural Steels Mechanical Properties." Journal of Achievements in Materials and
Manufacturing Engineering, Vol. 32, Issue 1, January 2009.
22) J. Noorzaei, S. J. S. Hakim and M. S. Jaafar, "An Approach to Predict Ultimate Bearing
Capacity of Surface Footings using Artificial Neural Network." Indian Geotechnical Journal,
38(4), 2008, 513-526.


LIST OF PUBLICATIONS

a) Certificate for Fifth Civil PGCON (2018) held at Imperial College of Engineering &
Research, Wagholi, Pune on 11th and 12th June 2018.


Remarks & Comments

