
Neural Networks

Submitted By: Rijas Rasheed, S5 MCA, SNGIST

**What is a neural network (NN)?**

Neural networks are a branch of "Artificial Intelligence". An Artificial Neural Network is a system loosely modeled on the human brain. The field goes by many names, such as connectionism, parallel distributed processing, neuro-computing, natural intelligent systems, machine learning algorithms, and artificial neural networks.

A general description is as follows: an ANN is a network of many very simple processors ("units"), each possibly having a (small amount of) local memory. The units are connected by unidirectional communication channels ("connections"), which carry numeric (as opposed to symbolic) data. The units operate only on their local data and on the inputs they receive via the connections. The design motivation is what distinguishes neural networks from other mathematical techniques: a neural network is a processing device, either an algorithm or actual hardware, whose design was motivated by the design and functioning of human brains and components thereof. Most neural networks have some sort of "training" rule whereby the weights of connections are adjusted on the basis of presented patterns. In other words, neural networks "learn" from examples, just as children learn to recognize dogs from examples of dogs, and exhibit some structural capability for generalization. Neural networks normally have great potential for parallelism, since the computations of the components are largely independent of each other.

Introduction

Biological neural networks are much more complicated in their elementary structures than the mathematical models we use for ANNs. The ANN is an inherently multiprocessor-friendly architecture and, without much modification, it goes beyond one or even two processors of the von Neumann architecture. It has the ability to account for any functional dependency: the network discovers (learns, models) the nature of the dependency without needing to be prompted. Neural networks are a powerful technique for solving many real-world problems. They have the ability to learn from experience in order to improve their performance and to adapt themselves to changes in the environment. In addition, they are able to deal with incomplete information or noisy data and can be very effective especially in situations where it is not possible to define the rules or steps that lead to the solution of a problem. They typically consist of many simple processing units, which are wired together in a complex communication network.

Introduction

•There is no central CPU following a logical sequence of rules - indeed there is no set of rules or program. This structure is close to the physical workings of the brain and leads to a new type of computer that is rather good at a range of complex tasks.

•In principle, NNs can compute any computable function, i.e. they can do everything a normal digital computer can do. In particular, anything that can be represented as a mapping between vector spaces can be approximated to arbitrary precision by neural networks.

•In practice, NNs are especially useful for mapping problems which are tolerant of some errors and have lots of example data available, but to which hard and fast rules cannot easily be applied.

•In a nutshell a Neural network can be considered as a black box that is able to predict an output pattern when it recognizes a given input pattern. Once trained, the neural network is able to recognize similarities when presented with a new input pattern, resulting in a predicted output pattern.

The Brain as an Information Processing System
The human brain contains about 10 billion nerve cells, or neurons. On average, each neuron is connected to other neurons through about 10,000 synapses. (The actual figures vary greatly, depending on the local neuroanatomy.)

Computation in the brain
The brain's network of neurons forms a massively parallel information processing system. This contrasts with conventional computers, in which a single processor executes a single series of instructions. Against this, consider the time taken for each elementary operation: neurons typically operate at a maximum rate of about 100 Hz, while a conventional CPU carries out several hundred million machine-level operations per second. Despite being built with very slow hardware, the brain has quite remarkable capabilities. Its performance tends to degrade gracefully under partial damage; in contrast, most programs and engineered systems are brittle: if you remove some arbitrary parts, very likely the whole will cease to function. It can learn (reorganize itself) from experience; this means that partial recovery from damage is possible if healthy units can learn to take over the functions previously carried out by the damaged areas. It performs massively parallel computations extremely efficiently; for example, complex visual perception occurs within less than 100 ms, that is, about 10 processing steps! And it supports our intelligence and self-awareness. (Nobody knows yet how this occurs.) As a discipline of Artificial Intelligence, Neural Networks attempt to bring computers a little closer to the brain's capabilities by imitating certain aspects of information processing in the brain, in a highly simplified way.

Neural Networks in the Brain
The brain is not homogeneous. At the largest anatomical scale, we distinguish cortex, midbrain, brainstem, and cerebellum. Each of these can be hierarchically subdivided into many regions, and areas within each region, either according to the anatomical structure of the neural networks within it, or according to the function performed by them.
•The overall pattern of projections (bundles of neural connections) between areas is extremely complex, and only partially known. The best mapped (and largest) system in the human brain is the visual system, where the first 10 or 11 processing stages have been identified. We distinguish feedforward projections that go from earlier processing stages (near the sensory input) to later ones (near the motor output), from feedback connections that go in the opposite direction.
•In addition to these long-range connections, neurons also link up with many thousands of their neighbours. In this way they form very dense, complex local networks.

Neurons and Synapses
•The basic computational unit in the nervous system is the nerve cell, or neuron. A neuron has: •Dendrites (inputs) •Cell body •Axon (output)
•A neuron receives input from other neurons (typically many thousands). Inputs sum (approximately). Once input exceeds a critical level, the neuron discharges a spike: an electrical pulse that travels from the body, down the axon, to the next neuron(s) (or other receptors). This spiking event is also called depolarization, and is followed by a refractory period, during which the neuron is unable to fire.
•The axon endings (output zone) almost touch the dendrites or cell body of the next neuron. This link is called a synapse. Transmission of an electrical signal from one neuron to the next is effected by neurotransmitters, chemicals which are released from the first neuron and which bind to receptors in the second. The extent to which the signal from one neuron is passed on to the next depends on many factors, e.g. the amount of neurotransmitter available, the number and arrangement of receptors, the amount of neurotransmitter reabsorbed, etc.

Artificial Neuron Models
Computational neurobiologists have constructed very elaborate computer models of neurons in order to run detailed simulations of particular circuits in the brain. As Computer Scientists, we are more interested in the general properties of neural networks, independent of how they are actually "implemented" in the brain. This means that we can use much simpler, abstract "neurons", which (hopefully) capture the essence of neural computation even if they leave out much of the detail of how biological neurons work. People have implemented model neurons in hardware as electronic circuits, often integrated on VLSI chips. Remember though that computers run much faster than brains: we can therefore run fairly large networks of simple model neurons as software simulations in reasonable time. This has obvious advantages over having to use special "neural" computer hardware.

A Simple Artificial Neuron
•The basic computational element (model neuron) is often called a node or unit. It receives input from some other units, or perhaps from an external source. Each input has an associated weight w, which can be modified so as to model synaptic learning. Note that wij refers to the weight from unit j to unit i (not the other way around).
•The unit computes some function f of the weighted sum of its inputs. The weighted sum is called the net input to unit i, often written neti.
•The function f is the unit's activation function. In the simplest case, f is the identity function, and the unit's output is just its net input. This is called a linear unit. Its output, in turn, can serve as input to other units.
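As an illustration (a sketch of ours, not part of the original notes), the unit just described can be written in a few lines of Python; the names net_input and linear_unit are ours:

```python
# A minimal sketch of the model neuron described above.
# net_i = sum_j w_ij * x_j; a linear unit outputs its net input unchanged.

def net_input(weights, inputs):
    """Weighted sum of the inputs: net_i = sum_j w_ij * x_j."""
    return sum(w * x for w, x in zip(weights, inputs))

def linear_unit(weights, inputs):
    """Linear unit: the activation function f is the identity."""
    return net_input(weights, inputs)

# Example: a unit with three incoming connections.
print(linear_unit([0.5, -1.0, 2.0], [1.0, 2.0, 0.5]))  # 0.5 - 2.0 + 1.0 = -0.5
```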

Applications
Neural network applications can be grouped into the following categories:
•Clustering: A clustering algorithm explores the similarity between patterns and places similar patterns in a cluster. Best known applications include data compression and data mining.
•Classification/Pattern recognition: The task of pattern recognition is to assign an input pattern (like a handwritten symbol) to one of many classes. This category includes algorithmic implementations such as associative memory.
•Function approximation: The task of function approximation is to find an estimate of the unknown function f() subject to noise. Various engineering and scientific disciplines require function approximation.
•Prediction/Dynamical Systems: The task is to forecast some future values of time-sequenced data. Prediction has a significant impact on decision support systems. Prediction differs from function approximation by considering the time factor: here the system is dynamic and may produce different results for the same input data based on system state (time).

Types of Neural Networks
Neural network types can be classified based on the following attributes:
• Applications: Classification, Clustering, Function approximation, Prediction
• Connection Type: Static (feedforward), Dynamic (feedback)
• Topology: Single layer, Multilayer, Recurrent, Self-organized
• Learning Methods: Supervised, Unsupervised

The McCulloch-Pitts Model of Neuron
The early model of an artificial neuron was introduced by Warren McCulloch and Walter Pitts in 1943. The McCulloch-Pitts neural model is also known as the linear threshold gate. It is a neuron with a set of inputs I1, I2, I3, …, Im and one output y. The linear threshold gate simply classifies the set of inputs into two different classes; thus the output y is binary. Such a function can be described mathematically using these equations:

Sum = W1·I1 + W2·I2 + … + Wm·Im,    y = 1 if Sum >= T, else y = 0

W1, W2, …, Wm are weight values, normalized in the range of either (0, 1) or (-1, 1) and associated with each input line; Sum is the weighted sum; and T is a threshold constant. The function f is a linear step function at threshold T, as shown in the figure.
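A sketch of such a gate in Python (ours; the AND-gate weights and threshold are one possible choice, not from the original notes):

```python
# Sketch of a McCulloch-Pitts linear threshold gate (weights and threshold fixed).

def threshold_gate(weights, inputs, T):
    """Output y = 1 if the weighted sum reaches threshold T, else 0."""
    s = sum(w * i for w, i in zip(weights, inputs))
    return 1 if s >= T else 0

# A two-input AND gate: both inputs must be 1 for the sum to reach T = 2.
for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pattern, threshold_gate([1, 1], pattern, T=2))
```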

The Perceptron
In the late 1950s, Frank Rosenblatt introduced a network composed of units that were an enhanced version of the McCulloch-Pitts Threshold Logic Unit (TLU) model. Rosenblatt's model of the neuron, the perceptron, was the result of a merger between two concepts from the 1940s: the McCulloch-Pitts model of an artificial neuron and the Hebbian learning rule for adjusting weights. In addition to the variable weight values, the perceptron model added an extra input that represents bias. Thus, the modified equation is now as follows: Sum = W1·I1 + … + Wm·Im + b, where b represents the bias value.

The McCulloch-Pitts Model of Neuron
Figure: Symbolic illustration of a linear threshold gate.
The McCulloch-Pitts model of a neuron is simple, yet it has substantial computing potential and a precise mathematical definition. However, this model is so simplistic that it only generates a binary output, and the weight and threshold values are fixed. The neural computing algorithm needs diverse features for various applications; thus, we need to obtain a neural model with more flexible computational features.

Artificial Neuron with Continuous Characteristics
Based on the McCulloch-Pitts model described previously, the general form of an artificial neuron can be described in two stages, shown in the figure. In the first stage, the linear combination of the inputs is calculated: each value of the input array is associated with its weight value, which is normally between 0 and 1. The summation function often also takes an extra input value Theta, with a weight value of 1, to represent the threshold or bias of a neuron. The summation is then performed as Sum = x1·w1 + x2·w2 + … + xn·wn + Theta. The sum-of-product value is then passed into the second stage to perform the activation function, which generates the output of the neuron. The activation function "squashes" the amplitude of the output into the range [0, 1] or, alternately, [-1, 1]. The behavior of the activation function describes the characteristics of an artificial neuron model.

Artificial Neuron with Continuous Characteristics
•The signals generated by actual biological neurons are action-potential spikes, and biological neurons send signals in patterns of spikes rather than the simple absence or presence of a single spike pulse. For example, the signal could be a continuous stream of pulses with various frequencies. With this kind of observation, we should consider a signal to be continuous with a bounded range, and the linear threshold function should be "softened". As the input x tends to a large positive value, the output value y approaches 1; similarly, the output gets close to 0 as x goes negative. However, the output value is neither close to 0 nor to 1 near the threshold point.
•One convenient form of such a "semi-linear" function is the logistic sigmoid function, or in short the sigmoid function, as shown in the figure.

This function is expressed mathematically as y = 1 / (1 + e^(-x)). Additionally, the sigmoid function describes the "closeness" to the threshold point by its slope: as x approaches -infinity or +infinity, the slope is zero, and the slope increases as x approaches 0. This characteristic often plays an important role in the learning of neural networks.
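The function and its slope are easy to compute; a small sketch (ours) that prints both across a range of inputs:

```python
import math

def sigmoid(x):
    """Logistic sigmoid: y = 1 / (1 + e^(-x)), bounded by 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_slope(x):
    """First derivative f'(x) = f(x) * (1 - f(x)): largest at x = 0,
    vanishing as x goes to -infinity or +infinity."""
    y = sigmoid(x)
    return y * (1.0 - y)

for x in (-10.0, -1.0, 0.0, 1.0, 10.0):
    print(f"x={x:+5.1f}  y={sigmoid(x):.4f}  slope={sigmoid_slope(x):.4f}")
```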

Single-Layer Network
The true computing power of neural networks comes from connecting multiple neurons, though even a single neuron can perform a substantial level of computation. The most common structure for connecting neurons into a network is by layers. The simplest form of layered network is shown in the figure. The shaded nodes on the left are in the so-called input layer; the input layer neurons only pass and distribute the inputs and perform no computation. Thus, the only true layer of neurons is the one on the right. Each of the inputs x1, x2, …, xN is connected to every artificial neuron in the output layer through a connection weight. Since every value of the outputs y1, y2, …, yN is calculated from the same set of input values, each output is varied based on the connection weights. Although the presented network is fully connected, a true biological neural network may not have all possible connections; a weight value of zero can be used to represent "no connection".
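A sketch (ours) of this single-layer forward pass, with linear output units for simplicity and a zero weight standing for "no connection":

```python
# Sketch of the single-layer network above: every input feeds every output neuron.
# Row i of W holds the weights into output neuron i.

def single_layer(W, x):
    """y_i = f(sum_j W[i][j] * x_j) for each output neuron i (f = identity here)."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

W = [[0.2, 0.8, 0.0],   # neuron y1 (zero weight: no connection from x3)
     [0.5, -0.3, 1.0]]  # neuron y2
print(single_layer(W, [1.0, 2.0, 3.0]))  # [1.8, 2.9]
```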

Multilayer Network
To achieve a higher level of computational capability, a more complex neural network structure is required. The figure shows the multilayer neural network, which distinguishes itself from the single-layer network by having one or more hidden layers. In this multilayer structure, the input nodes pass the information to the units in the first hidden layer, then the outputs from the first hidden layer are passed to the next layer, and so on. A multilayer network can also be viewed as a cascading of groups of single-layer networks. The level of complexity in computing can be seen from the fact that many single-layer networks are combined into this multilayer network. The designer of an artificial neural network should consider how many hidden layers are required, depending on the complexity of the desired computation.

Backpropagation Networks
Backpropagation networks are, in general, multilayered perceptrons: feedforward networks with distinct input, output, and hidden layers. There may be any number of hidden layers, and any number of hidden units in any given hidden layer. Input and output units can be binary {0, 1}, bipolar {-1, +1}, or may have real values within a specific range such as [-1, 1]. The units function basically like perceptrons, except that the transition (output) rule and the weight update (learning) mechanism are more complex. Note that units within the same layer are not interconnected. The figure on the next page presents the architecture of backpropagation networks.

Backpropagation Networks

Backpropagation Networks
In feedforward activation, units of hidden layer 1 compute their activation and output values and pass these on to the next layer, and so on, until the output units have produced the network's actual response to the current input. The activation value ak of unit k is computed as ak = sum over i of wki·xi, where wki is the weight of the connection between unit k and unit i, and xi is the input signal coming from unit i at the other end of the incoming connection. This is basically the same activation computation as in linear threshold units (the McCulloch-Pitts model). The output yk of unit k is computed as yk = f(ak). The function f(x) is referred to as the output function. Unlike in the linear threshold unit, the output of a unit in a backpropagation network is no longer based on a threshold: as illustrated above, it is a continuously increasing function of the sigmoid type, asymptotically approaching 0 as x decreases and asymptotically approaching 1 as x increases. At x = 0, f(x) is equal to 0.5.

Backpropagation Networks
In some implementations of the backpropagation model, the output function uses the hyperbolic tangent function, which has basically the same shape, except that it has value 0 when x is 0 and is asymptotic to -1 as x decreases. In this case, it is convenient to have input and output values that are bipolar. Once activation is fed forward all the way to the output units, the network's response is compared to the desired output ydi which accompanies the training pattern. There are two types of error. The first is the error at the output layer; this can be directly computed as ei = ydi - yi. The second type of error is the error at the hidden layers. This cannot be computed directly, since there is no available information on the desired outputs of the hidden layers. This is where the retropropagation of error is called for.

Backpropagation Networks
Essentially, the error at the output layer is used to compute the error at the hidden layer immediately preceding the output layer. Once this is computed, it is used in turn to compute the error of the hidden layer immediately preceding the last hidden layer. This is done sequentially until the error at the very first hidden layer is computed. The retropropagation of error is illustrated in the figure below.

Backpropagation Networks
•Computation of the error eh at a hidden layer is done as follows: the errors at the other ends of the outgoing connections of the hidden unit h have been computed earlier. These could be error values at the output layer or at a hidden layer. These error signals are multiplied by their corresponding outgoing connection weights, and the sum of these is taken: eh = sum over k of wkh·ek.


Backpropagation Networks
After computing the error for each unit, whether at a hidden unit or at an output unit, the network then fine-tunes its connection weights wkj(t+1). The weight update rule is uniform for all connection weights: the change in weight is directly proportional to the error term computed for the unit at the output end of the incoming connection, so that Δwkj = a · ek · f'(ak) · xj.
•The learning rate a is typically a small value between 0 and 1. It controls the size of weight adjustments and has some bearing on the speed of the learning process as well as on the precision with which the network can possibly operate.
•The weight change is further controlled by the term f'(ak), depending on the actual output f(x). In the case of the sigmoid function above, knowing the shape of the function, its first derivative (slope) f'(x) is easily computed as f'(x) = f(x)·(1 - f(x)). Because this term measures the slope of the function, f'(x) also controls the size of weight adjustments.
•The weight change is also controlled by the output signal coming from the input end of the incoming connection. We can infer that very little weight change (learning) occurs when this input signal is almost zero; likewise, since f'(x) is small when the output is close to 0 or 1, there will be little weight change when the output of the unit at the other end of the connection is close to 0 or 1. Thus, learning will take place mainly at those connections with high pre-synaptic signals and non-committed (hovering around 0.5) post-synaptic signals.
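To make the update rule concrete, here is a self-contained sketch (ours, not from the original notes) of a two-layer backpropagation network learning XOR. It assumes sigmoid units throughout, folds the f'(ak) factor into the propagated error signal (the standard delta rule), and appends a constant input of 1.0 as a bias; the task, layer sizes, learning rate, and the names forward and train_step are all illustrative.

```python
import math, random

def f(x):
    """Sigmoid output function: f(0) = 0.5, asymptotic to 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(W1, W2, x):
    """Feedforward activation; a constant 1.0 is appended as a bias input."""
    h = [f(sum(w * s for w, s in zip(row, x + [1.0]))) for row in W1]
    y = [f(sum(w * s for w, s in zip(row, h + [1.0]))) for row in W2]
    return h, y

def train_step(W1, W2, x, yd, lr=0.5):
    """One feedforward pass plus one retropropagation-of-error pass."""
    h, y = forward(W1, W2, x)
    # Output-layer error e = yd - y, scaled by the slope f'(a) = y * (1 - y).
    d_out = [(t - o) * o * (1.0 - o) for t, o in zip(yd, y)]
    # Hidden-layer error: errors at the other ends of the outgoing connections,
    # multiplied by the connection weights and summed, times the local slope.
    d_hid = [hj * (1.0 - hj) * sum(W2[k][j] * d_out[k] for k in range(len(d_out)))
             for j, hj in enumerate(h)]
    # Uniform update rule: delta_w = lr * error_term * signal_at_input_end.
    for k, row in enumerate(W2):
        for j, s in enumerate(h + [1.0]):
            row[j] += lr * d_out[k] * s
    for j, row in enumerate(W1):
        for i, s in enumerate(x + [1.0]):
            row[i] += lr * d_hid[j] * s

random.seed(1)
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]  # 3 hidden units
W2 = [[random.uniform(-1, 1) for _ in range(4)]]                    # 1 output unit
data = [([0.0, 0.0], [0.0]), ([0.0, 1.0], [1.0]),
        ([1.0, 0.0], [1.0]), ([1.0, 1.0], [0.0])]
for _ in range(20000):
    for x, yd in data:
        train_step(W1, W2, x, yd)
for x, yd in data:
    print(x, forward(W1, W2, x)[1])
```

After training, the printed outputs should be close to the XOR targets 0, 1, 1, 0; convergence speed depends on the random initialization and the learning rate.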

Learning Process
One of the most important aspects of a Neural Network is the learning process. Learning can be done with supervised or unsupervised training. In supervised training, both the inputs and the outputs are provided. The network processes the inputs and compares its resulting outputs against the desired outputs. Errors are then calculated, causing the system to adjust the weights which control the network. This process occurs over and over as the weights are continually tweaked. The learning process of a Neural Network can be viewed as reshaping a sheet of metal, which represents the output (range) of the function being mapped. The training set (domain) acts as the energy required to bend the sheet of metal so that it passes through predefined points. However, the metal, by its nature, will resist such reshaping. So the network will attempt to find a low-energy configuration (i.e. a flat/non-wrinkled shape) that satisfies the constraints (training data).

•In unsupervised training, the network is provided with inputs but not with desired outputs. The system itself must then decide what features it will use to group the input data. This is often referred to as self-organization or adaptation.
•The following geometrical interpretations will demonstrate the learning process within different neural models.

Summary
The following properties of nervous systems will be of particular interest in our neurally-inspired models:
•Parallel, distributed information processing
•High degree of connectivity among basic units
•Connections are modifiable based on experience
•Learning is a constant process, and usually unsupervised
•Learning is based only on local information
•Performance degrades gracefully if some units are removed

References:
http://www.csse.uwa.edu.au/teaching/units/233.407/lectureNotes/Lect1-UWA.pdf
http://www.comp.nus.edu.sg/~pris/ArtificialNeuralNetworks/LinearThresholdUnit.html
http://www.ece.utep.edu/research/webfuzzy/docs/kk-thesis/kk-thesis-html/node12.html

Supervised and Unsupervised Neural Networks

References
http://www-users.cs.york.ac.uk/~sok/IML/iml_nn_arch.pdf
http://www.ai.rug.nl/vakinformatie/ias/slides/3_NeuralNetworksAdaptation.pdf
http://ilab.usc.edu/classes/2005cs561/notes/LearningInNeuralNetworks-CS561-3-05.pdf

Understanding Supervised and Unsupervised Learning
(Figure: a scatter of data points labeled A and B.)

Two Possible Solutions…
(Figure: two alternative groupings of the same A and B data points.)

Supervised Learning
It is based on a labeled training set: the class of each piece of data in the training set is known. Class labels are pre-determined and provided in the training phase. (Figure: data points annotated with their class labels, Class A and Class B.)

Supervised vs Unsupervised
Supervised: tasks performed: Classification, Pattern Recognition. NN models: Perceptron, Feed-forward NN. Question answered: "What is the class of this data point?"
Unsupervised: task performed: Clustering. NN model: Self-Organizing Maps. Questions answered: "What groupings exist in this data?" "How is each data point related to the data set as a whole?"

Unsupervised Learning

Input: a set of patterns P from n-dimensional space S, but little or no information about their classification, evaluation, interesting features, etc. The network must learn these by itself! :)

Tasks:
Clustering - Group patterns based on similarity.
Vector Quantization - Fully divide up S into a small set of regions (defined by codebook vectors) that also helps cluster P.
Feature Extraction - Reduce the dimensionality of S by removing unimportant features (i.e. those that do not help in clustering P).

**Unsupervised Neural Networks – Kohonen Learning**

Also known as a Self-Organizing Map. It learns a categorization of the input space. Neurons are connected into a 1-D or 2-D lattice. Each neuron represents a point in N-dimensional pattern space, defined by N weights. During training, the neurons move around to try to fit to the data. Changing the position of one neuron in data space influences the positions of its neighbors via the lattice connections.

**Self Organizing Map – Network Structure**

All inputs are connected by weights to each neuron. The size of the neighbourhood changes as the net learns. The aim is to map similar inputs (sets of values) to similar neuron positions: data is clustered because it is mapped to the same node or group of nodes.

SOM-Algorithm

1. Initialization: weights are set to unique random values.
2. Sampling: draw an input sample x and present it to the network.
3. Similarity matching: the winning neuron i is the neuron whose weight vector best matches the input vector: i = argmin_j ||x - wj||.

4. Updating: adjust the weights of the winning neuron so that they better match the input, and also adjust the weights of the neighbouring neurons: Δwj = η · hij · (x - wj), where hij is the neighbourhood function. Over time, the neighbourhood function gets smaller.
Result: the neurons provide a good approximation of the input space and come to correspond to the distribution of the input patterns. (A runnable sketch of steps 1-4 follows.)
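A minimal runnable sketch of steps 1-4 (ours, assuming 2-D inputs, a 1-D lattice, a Gaussian neighbourhood function, and linearly decaying η and σ; none of these specifics come from the slides):

```python
import math, random

random.seed(0)

def som_train(data, n_neurons=10, epochs=50):
    """Minimal 1-D SOM sketch for 2-D inputs."""
    # 1. Initialization: unique random weight vectors.
    w = [[random.random(), random.random()] for _ in range(n_neurons)]
    for t in range(epochs):
        frac = 1.0 - t / epochs
        eta = 0.5 * frac                          # learning rate decays over time
        sigma = max(0.5, (n_neurons / 2) * frac)  # neighbourhood gets smaller
        for x in random.sample(data, len(data)):  # 2. Sampling
            # 3. Similarity matching: i = argmin_j ||x - w_j||
            i = min(range(n_neurons),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(x, w[j])))
            # 4. Updating: winner and its lattice neighbours move toward x,
            #    weighted by the neighbourhood function h_ij.
            for j in range(n_neurons):
                h = math.exp(-((i - j) ** 2) / (2.0 * sigma ** 2))
                w[j] = [wj + eta * h * (xk - wj) for wj, xk in zip(w[j], x)]
    return w

# Two clusters in the plane: after training, consecutive lattice neurons
# settle over nearby regions of the data, clustering it.
data = ([[random.gauss(0.2, 0.05), random.gauss(0.2, 0.05)] for _ in range(50)] +
        [[random.gauss(0.8, 0.05), random.gauss(0.8, 0.05)] for _ in range(50)])
for neuron in som_train(data):
    print([round(c, 2) for c in neuron])
```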

INTRUSION DETECTION

References
http://ilab.usc.edu/classes/2005cs561/notes/LearningInNeuralNetworks-CS561-3-05.pdf
"A Hybrid Training Mechanism for Applying Neural Networks to Web-based Applications", Ko-Kang Chu, Maiga Chang, Yen-Teh Hsia, 2004 IEEE International Conference on Systems, Man and Cybernetics.
"Data Mining Approach for Network Intrusion Detection", Zhen Zhang, Advisor: Dr. Chung-E Wang, Department of Computer Science, California State University, Sacramento, 04/24/2002.
A Briefing Given to the SC2003 Education Program on Knowledge Discovery in Databases, NCSA, Nov 16, 2003.

OUTLINE
•What is an IDS?
•IDS with Data Mining
•IDS and Neural Networks
•Hybrid Training Model
•3 Steps of Training

What is an IDS?
An IDS carries out the process of monitoring and analyzing the events occurring in a computer and/or network system in order to detect signs of security problems.
•Misuse detection: matches patterns of well-known attacks.
•Anomaly detection: flags deviation from normal usage.
•Network-based intrusion detection (NIDS): monitors network traffic.
•Host-based intrusion detection (HIDS): monitors a single host.


Limitations of IDS
Limitations of misuse detection: The signature database has to be manually revised for each new type of discovered intrusion, so these systems cannot detect emerging threats, and there is substantial latency in the deployment of newly created signatures.
Limitations of anomaly detection: False positives, i.e. alerts when no attack exists; typically, anomaly detection is prone to a high number of false alarms due to previously unseen legitimate behavior. Lack of adaptability: the system has to be re-instantiated for each new attack identified, and there is no co-operative training and learning phase.
Data overload: The amount of data for analysts to examine is growing too large. This is the problem that data mining looks to solve; data-mining-based IDSs can alleviate these limitations.

Why Data Mining Can Help
•Learn from traffic data
•Supervised learning: learn precise models from past intrusions
•Unsupervised learning: identify suspicious activities
•Maintain models on dynamic data

Overview: Data Mining

IDS with Data Mining

Neural Networks and IDS
An NN is trainable and adaptable: it adjusts the weights of its synapses recursively to learn new behaviors.
•Train for known patterns of attacks
•Train for normal behavior
•Learn and re-train in the face of a new attack pattern

Hybrid Training Model
An intrusion detection model for web-based applications: it identifies abnormal browser access behaviors. Training is carried out offline, and the trained network is used online. The model follows.

Hybrid Training Model
(Figure: system diagram. An Offline Training Module with supervised filtering builds new training sets from a Log DB and updates a Weight DB; the Weight DB configures an Online Decision Module and an Abnormal Behavior Detection Module, whose decisions and logs are fed back and stored.)

1. Offline Training Process
Real and simulated datasets with expected results, without malicious data, are put into the neural network for training. (Table: sample access-log records with columns Real IP Address, Virtual IP, Access Time, Access Date, Browser Type, Access Time (msec), and a Normal/Abnormal (N/AN) label.)

Neural Network Model
IP addresses are translated to 32-bit binary, and time data is fed in milliseconds (a sketch of this encoding follows the list below). Since we have input attribute values and expected outputs, the preparation for training the neural network is done.
Back-propagation neural network model:
•Input layer: 115 neurons
•Hidden layer: 60 neurons
•Output layer: 5 neurons
The expected outputs from the neural network are:
•Is it a normal access? (0/1)
•What kind of malicious action is it? (000/001/010/011)
•Can it be positively sure? (0/1)
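A sketch of this input preprocessing (the helper names ip_to_bits and time_to_ms are ours):

```python
def ip_to_bits(ip):
    """Translate a dotted-quad IP address into its 32-bit binary input vector."""
    bits = []
    for octet in ip.split("."):
        bits.extend(int(b) for b in format(int(octet), "08b"))
    return bits

def time_to_ms(h, m, s, ms=0):
    """Feed a time of day to the network as a single millisecond count."""
    return ((h * 60 + m) * 60 + s) * 1000 + ms

print(len(ip_to_bits("140.1.245.55")))  # 32 binary inputs per address
print(time_to_ms(9, 30, 56))            # 34256000
```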

2. Analyze Access Behavior
The trained neural network model is used online to judge whether an access is normal or abnormal. (Refer to the model diagram.) The decision module identifies the output as definitive or non-definitive, e.g. (0 000 0), and stores the information in the database for later review by the administrator. When confusion arises between similar patterns and new malicious behavior, the administrator categorizes the new patterns as normal or abnormal behavior through the web interface, and the NN is re-trained.

Web Interface for Online Training

3. Online Training
The administrator engages the neural network for training online. A separate copy of the neural network is used for analysis; once the training completes, the new neural network replaces the old one. (Refer to the model diagram.)

Design Suitability
•Internet Ad Industry
•Network Security
•Antivirus

Using Neural Networks to Forecast Stock Market Prices
Project report: Ramon Lawrence
Department of Computer Science, University of Manitoba
umlawren@cs.umanitoba.ca
December 12, 1997

Contents
•Abstract
•Motivation behind using NN
•Traditional Methods
•NN for Stock Prediction

Abstract
Neural networks offer the ability to predict market directions more accurately, with the ability to discover patterns in nonlinear and chaotic systems.

Motivation behind using NN
Neural networks are used to predict stock market prices because they are able to learn nonlinear mappings between inputs and outputs. It may be possible that NNs outperform traditional analysis and other computer-based methods.

Traditional Methods
The traditional methods used were statistics, fundamental analysis, technical analysis, and linear regression. None of these techniques has proven to be the consistently correct prediction tool that is desired.

. and their results are fed into neural networks as input.contd. Also. many of these techniques are used to preprocess raw data inputs. these methods are presented as they are commonly used in practice and represent a base-level standard. However. which neural networks should outperform.

Technical Analysis
Technical analysis rests on the assumption that history repeats itself and that future market direction can be determined by examining past prices. Using price, volume, and open-interest statistics, the technical analyst uses charts to predict future stock movements.

Fundamental Analysis
Fundamental analysis involves the in-depth analysis of a company's performance and profitability to determine its share price. By studying the overall economic conditions, the company's competition, and other factors, it is possible to determine expected returns and the intrinsic value of shares. This type of analysis assumes that a share's current (and future) price depends on its intrinsic value and anticipated return on investment.

contd.
The advantages of fundamental analysis are its systematic approach and its ability to predict changes. Unfortunately, it becomes harder to formalize all this knowledge for purposes of automation (with a neural network, for example), and interpretation of this knowledge may be subjective.

Chaotic System
A chaotic system is a combination of a deterministic and a random process. The deterministic process can be characterized using regression fitting, while the random process can be characterized by the statistical parameters of a distribution function. Thus, using only deterministic or statistical techniques will not fully capture the nature of a chaotic system. A neural network's ability to capture both deterministic and random features makes it ideal for modeling chaotic systems.

Other Techniques
Many other computer-based techniques have been employed to forecast the stock market. They range from charting programs to sophisticated expert systems. Fuzzy logic has also been used.

Comparison with Expert Systems
Expert systems process knowledge sequentially and formulate it into rules. In this capacity, expert systems can be used in conjunction with neural networks to predict the market. In such a combined system, the neural network can perform its prediction, while the expert system could validate the prediction based on its well-known trading rules.

contd.
The advantage of expert systems is that they can explain how they derive their results; with neural networks, it is difficult to analyze the importance of input data and how the network derived its results. However, it is hard to extract information from experts and formalize it in a way usable by expert systems, and expert systems are only good within their domain of knowledge: they do not work well when there is missing or incomplete information. Neural networks handle dynamic data better and can generalize and make "educated guesses". In addition, neural networks are faster because they execute in parallel and are more fault tolerant. Thus, neural networks are more suited to the stock market environment than expert systems.

Application of NN to Stock Market Prediction
The networks are examined in three main areas:
•Network environment and training data
•Network organization

Training a NN
A neural network must be trained on some input data. The two major problems in implementing this training, discussed in the following sections, are:
•defining the set of inputs to be used (the learning environment)
•deciding on an algorithm to train the network

Learning Environment
One of the most important factors in constructing a neural network is deciding on what the network will learn. The goal of most of these networks is to decide when to buy or sell securities based on previous market indicators. The challenge is determining which indicators and input data will be used, and gathering enough training data to train the system appropriately.

The input data may be raw data on volume. economic environment. etc. etc. .contd. or daily change. The input data should allow the neural network to generalize market behavior while containing limited redundant data.). but it may also include derived data such as technical indicators (moving average.) or fundamental indicators (intrinsic share value. price. trend-line indicators.

An Example
A comprehensive example neural network system, henceforth called the JSE-system, modeled the performance of the Johannesburg Stock Exchange. This system had 63 indicators from a variety of categories, in an attempt to get an overall view of the market environment by using raw data and derived indicators. The 63 input data values can be divided into the following classes (with the number of indicators in each class in parentheses):
•fundamental (3): volume, yield, price/earnings
•technical (17): moving averages, volume trends, etc.
•JSE indices (20): market indices for various sectors: gold, metals, etc.
•gold price / foreign exchange rates (3)
•interest rates (4)
•economic statistics (7): exports, imports, etc.

contd.
The JSE-system normalized all data to the range [-1, 1]. Normalizing data is a common feature in all systems, as neural networks generally use input data in the range [0, 1] or [-1, 1] (a sketch of this step follows). It is interesting to note that although the final JSE-system was trained with all 63 inputs, the analysis showed that many of the inputs were unnecessary: the authors used cross-validation techniques and sensitivity analysis to discard 20 input values with negligible effect on system performance. As the number of inputs to the network may be very large, such pruning techniques are especially useful; they are very important because they reduce the network size, which speeds up recall and training times.
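A sketch of min-max normalization into [-1, 1] (the normalize helper is ours, not from the JSE paper):

```python
def normalize(values, lo=-1.0, hi=1.0):
    """Min-max scale a list of raw indicator values into [lo, hi]."""
    vmin, vmax = min(values), max(values)
    span = vmax - vmin or 1.0   # avoid division by zero for constant inputs
    return [lo + (hi - lo) * (v - vmin) / span for v in values]

print(normalize([10.0, 15.0, 20.0]))  # [-1.0, 0.0, 1.0]
```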

Network Training
Training a network involves presenting input patterns in a way such that the system minimizes its error and improves its performance. The training algorithm may vary depending on the network architecture, but the most common training algorithm used when designing financial neural networks is the back-propagation algorithm.

contd.
The most common network architecture for financial neural networks is a multilayer feedforward network trained using back-propagation. Back-propagation is the process of back-propagating errors through the system from the output layer towards the input layer during training. It is necessary because hidden units have no training target value that can be used, so they must be trained based on errors from previous layers; the output layer is the only layer which has a target value with which to compare. As the errors are back-propagated through the nodes, the connection weights are changed. Training occurs until the errors in the weights are sufficiently small to be accepted.

Overtraining
The major problem in training a neural network is deciding when to stop training. Overtraining occurs when the system memorizes patterns and thus loses the ability to generalize. Since the ability to generalize is fundamental for these networks to predict future stock prices, overtraining is a serious problem. It is an important factor in these prediction systems, as their primary use is to predict (or generalize) on input data that they have never seen.

Training Data
Training on large volumes of historical data is computationally and time intensive and may result in the network learning undesirable information in the data set. For example, stock market data is time-dependent: sufficient data should be presented so that the neural network can capture most of the trends, but very old data may lead the network to learn patterns or factors that are no longer important or valuable.

Supplementary Learning
A variation on the back-propagation algorithm which is more computationally efficient was proposed when developing a system to predict Tokyo stock prices; they called it supplementary learning. In supplementary learning, the weights are updated based on the sum of all errors over all patterns (batch updating). Each output node in the system has an associated error threshold (tolerance), and errors are only back-propagated if they exceed this tolerance. This procedure has the effect of only changing the units in error, which makes the process faster and able to handle larger amounts of data.
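A sketch of the tolerance test (ours; the threshold value is illustrative, and for brevity errors are filtered per pattern rather than summed over the whole batch as in the original). It reuses the forward and train_step helpers from the backpropagation sketch earlier in this document.

```python
def supplementary_pass(W1, W2, patterns, tol=0.1, lr=0.5):
    """Back-propagate only errors that exceed a per-output tolerance; outputs
    already within tolerance contribute no error and cause no weight change."""
    changed = 0
    for x, yd in patterns:
        _, y = forward(W1, W2, x)
        if any(abs(t - o) > tol for t, o in zip(yd, y)):
            # Clamp in-tolerance targets to the current outputs so that only
            # the out-of-tolerance units produce an error signal.
            masked = [t if abs(t - o) > tol else o for t, o in zip(yd, y)]
            train_step(W1, W2, x, masked, lr)
            changed += 1
    return changed  # number of patterns that still triggered weight updates
```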

Conclusion
Although neural networks are not perfect in their prediction, they outperform all other methods and provide hope that one day we can more fully understand dynamic, chaotic systems such as the stock market.

THANKS
