Introduction to Neural Networks

2nd Year UG, MSc in Computer Science

Lecturer: Dr. John A. Bullinaria

© John A. Bullinaria, 2004

Module Administration and Organisation
Introduction to Neural Networks : Lecture 1 (part 1)

1. Aims and Learning Outcomes
2. Assessment
3. Lecture Plan
4. Recommended Books
5. Comments on Mathematical Requirements

Aims and Learning Outcomes

Aims
1. Introduce the main fundamental principles and techniques of neural network systems.
2. Investigate the principal neural network models and applications.

Learning Outcomes
1. Describe the relation between real brains and simple artificial neural network models.
2. Explain and contrast the most common architectures and learning algorithms for Multi-Layer Perceptrons, Radial-Basis Function Networks, Committee Machines, and Kohonen Self-Organising Maps.
3. Discuss the main factors involved in achieving good learning and generalization performance in neural network systems.
4. Identify the main implementational issues for common neural network systems.
5. Evaluate the practical considerations in applying neural networks to real classification and regression problems.

Assessment

From the Introduction to Neural Networks Module Description:
70% – 2 hour closed book examination
30% – continuous assessment by mini-project report
(Resit by written examination only, with the continuous assessment mark carried forward.)

Important Dates
There will be lab sessions on 8/9th and 22/23rd November.
The continuous assessment deadline will be 12th January.
The 2 hour examination will be in May/June.
There will be a resit examination in August/September.

Lecture Plan

Session 1: Wednesdays 12:00-13:00. Session 2: Thursdays 10:00-11:00.

Week 1
  Session 1: Introduction to Neural Networks and their History.
  Session 2: Biological Neurons and Neural Networks. Artificial Neurons.
Week 2
  Session 1: Networks of Artificial Neurons. Single Layer Perceptrons.
  Session 2: Hebbian Learning. Gradient Descent Learning.
Week 3
  Session 1: Learning and Generalization in Single Layer Perceptrons.
  Session 2: The Generalized Delta Rule. Practical Considerations.
Week 4
  Session 1: Learning in Multi-Layer Perceptrons. Back-Propagation.
  Session 2: Learning with Momentum. Conjugate Gradient Learning.
Week 5
  Session 1: Applications of Multi-Layer Perceptrons.
  Session 2: Exercise Session 1.
Week 6
  Session 1: Bias and Variance. Under-Fitting and Over-Fitting.
  Session 2: Improving Generalization.
Week 7
  Session 1: Radial Basis Function Networks: Introduction.
  Session 2: Radial Basis Function Networks: Algorithms.
Week 8
  Session 1: Radial Basis Function Networks: Applications.
  Session 2: Exercise Session 2.
Week 9
  Session 1: Committee Machines.
  Session 2: Self Organizing Maps: Fundamentals.
Week 10
  Session 1: Self Organizing Maps: Algorithms and Applications.
  Session 2: Learning Vector Quantization (LVQ).
Week 11
  Session 1: Overview of More Advanced Topics.
  Session 2: Exercise Session 3.

Main Recommended Books

1. Neural Networks: A Comprehensive Foundation – Simon Haykin – Prentice Hall, 1999.
   Very comprehensive and up-to-date, but rather mathematical.
2. An Introduction to Neural Networks – Kevin Gurney – UCL Press, 1997.
   Non-mathematical introduction.
3. Neural Networks for Pattern Recognition – Christopher Bishop – Clarendon Press, Oxford, 1995.
   This is the book I always use. Slightly mathematical.
4. The Essence of Neural Networks – Robert Callan – Prentice Hall Europe, 1999.
   Concise introductory text.
5. Fundamentals of Neural Networks – Laurene Fausett – Prentice Hall, 1994.
   Good intermediate text.

Other Good Books

1. Neural Computing: An Introduction – R. Beale & T. Jackson – IOP Publishing, 1990.
   Former recommended book.
2. An Introduction to the Theory of Neural Computation – J. Hertz, A. Krogh & R.G. Palmer – Addison Wesley, 1991.
   Good advanced book, but heavy in maths.
3. Parallel Distributed Processing: Volumes 1 and 2 – D.E. Rumelhart, J.L. McClelland, et al. – MIT Press, 1986.
   The original neural networks bible.
4. The Computational Brain – P.M. Churchland & T.J. Sejnowski – MIT Press, 1994.
   Good for computational neuroscience.
5. Principles of Neurocomputing for Science and Engineering – F. Ham & I. Kostanic – McGraw Hill, 2001.
   Good all round book.

Comments on Mathematical Requirements

1. The easiest way to formulate and understand neural networks is in terms of mathematical concepts and equations.
2. This module will introduce and explain any necessary mathematics as and when we need it. Much of this will also be useful for other modules, such as Machine Learning.
3. You will not be required to perform any mathematical derivations in the examination.
4. Once you have the equations, it is fairly straightforward to convert them into C/C++/Java/Fortran/Pascal/MATLAB programs.
5. Don’t be surprised if it takes a while to become familiar with all the new notation – this is normal!
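The point about converting equations into programs can be illustrated with the standard single-neuron equation y = f(Σᵢ wᵢxᵢ − θ), where f is the logistic sigmoid. A minimal sketch follows (in Python rather than the languages listed above; the function name and example values are illustrative only, not from the lecture notes):

```python
import math

def neuron_output(weights, inputs, threshold):
    """Compute y = f(sum_i w_i * x_i - theta), with f the logistic sigmoid."""
    activation = sum(w * x for w, x in zip(weights, inputs)) - threshold
    return 1.0 / (1.0 + math.exp(-activation))

# e.g. two inputs, both weights 0.5, zero threshold: activation = 1.0
y = neuron_output([0.5, 0.5], [1.0, 1.0], 0.0)
```

Each symbol in the equation maps directly onto one line of code, which is why the translation is usually routine once the mathematics is understood.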

Introduction to Neural Networks and Their History
Introduction to Neural Networks : Lecture 1 (part 2)

1. What are Neural Networks?
2. Why are Artificial Neural Networks Worth Studying?
3. Learning in Neural Networks
4. A Brief History of the Field
5. Artificial Neural Networks compared with Classical Symbolic AI
6. Some Current Artificial Neural Network Applications

What are Neural Networks?

1. Neural Networks (NNs) are networks of neurons, for example, as found in real (i.e. biological) brains.
2. Artificial Neurons are crude approximations of the neurons found in brains. They may be physical devices, or purely mathematical constructs.
3. Artificial Neural Networks (ANNs) are networks of Artificial Neurons, and hence constitute crude approximations to parts of real brains. They may be physical devices, or simulated on conventional computers.
4. From a practical point of view, an ANN is just a parallel computational system consisting of many simple processing elements connected together in a specific way in order to perform a particular task.
5. One should never lose sight of how crude the approximations are, and how over-simplified our ANNs are compared to real brains.
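The practical view of an ANN as many simple processing elements connected together to perform a particular task can be made concrete with a toy example. The sketch below wires three threshold units into a tiny network that computes XOR; the particular weights and thresholds are one illustrative choice, not part of the lecture notes:

```python
def step(activation):
    """A simple threshold processing element: fires iff activation >= 0."""
    return 1 if activation >= 0 else 0

def xor_network(x1, x2):
    # Hidden layer: one unit fires when at least one input is on (OR),
    # the other fires only when both inputs are on (AND).
    h_or = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    # Output unit: fires when OR is on but AND is off, i.e. exactly one input.
    return step(h_or - h_and - 0.5)
```

The task the network performs is determined entirely by how the elements are connected and weighted, which is the sense in which an ANN is "connected together in a specific way in order to perform a particular task".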

Why are Artificial Neural Networks worth studying?

1. They are extremely powerful computational devices (Turing equivalent, universal computers).
2. They can learn and generalize from training data – so there is no need for enormous feats of programming.
3. Massive parallelism makes them very efficient.
4. They are particularly fault tolerant – this is equivalent to the “graceful degradation” found in biological systems.
5. They are very noise tolerant – so they can cope with situations where normal symbolic systems would have difficulty.
6. In principle, they can do anything a symbolic/logic system can do, and more. (In practice, getting them to do it can be rather difficult…)

What are Artificial Neural Networks used for?

As with the field of AI in general, there are two basic goals for neural network research:

Brain modelling: The scientific goal of building models of how real brains work. This can potentially help us understand the nature of human intelligence, formulate better teaching strategies, or better remedial actions for brain damaged patients.

Artificial System Building: The engineering goal of building efficient systems for real world applications. This may make machines more powerful, relieve humans of tedious tasks, and may even improve upon human performance.

These should not be thought of as competing goals. We often use exactly the same networks and techniques for both. Frequently progress is made when the two approaches are allowed to feed into each other. There are fundamental differences though, e.g. the need for biological plausibility in brain modelling, and the need for computational efficiency in artificial system building.

Learning in Neural Networks

There are many forms of neural networks. Most operate by passing neural ‘activations’ through a network of connected neurons.

One of the most powerful features of neural networks is their ability to learn and generalize from a set of training data. They adapt the strengths/weights of the connections between neurons so that the final output activations are correct.

There are three broad types of learning:
1. Supervised Learning (i.e. learning with a teacher)
2. Reinforcement learning (i.e. learning with limited feedback)
3. Unsupervised learning (i.e. learning with no help)

This module will study in some detail the most common learning algorithms for the most common types of neural network.
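A hedged sketch of supervised learning, the first of these types: the classic perceptron rule adjusts each connection weight in proportion to the error between the target and the actual output. The learning rate, epoch count, and AND-gate training set below are illustrative choices, not taken from the lecture notes:

```python
def step(activation):
    """Threshold output unit: fires iff activation >= 0."""
    return 1 if activation >= 0 else 0

def train_perceptron(data, lr=0.1, epochs=20):
    """Supervised learning: adapt weights until outputs match the targets."""
    w = [0.0, 0.0, 0.0]  # two input weights plus a bias weight
    for _ in range(epochs):
        for x1, x2, target in data:
            out = step(w[0] * x1 + w[1] * x2 + w[2])
            err = target - out  # the "teacher" signal
            # Strengthen or weaken each connection toward the correct output.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            w[2] += lr * err  # bias input fixed at 1
    return w

# Train on the (linearly separable) AND function.
AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w = train_perceptron(AND)
```

After training, the weights encode the task: the network was never programmed with AND, it learned it from the labelled examples, which is the "learning with a teacher" idea above.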

A Brief History of the Field

1943  McCulloch and Pitts proposed the McCulloch-Pitts neuron model.
1949  Hebb published his book The Organization of Behavior, in which the Hebbian learning rule was proposed.
1958  Rosenblatt introduced the simple single layer networks now called Perceptrons.
1969  Minsky and Papert’s book Perceptrons demonstrated the limitation of single layer perceptrons, and almost the whole field went into hibernation.
1982  Hopfield published a series of papers on Hopfield networks.
1982  Kohonen developed the Self-Organising Maps that now bear his name.
1986  The Back-Propagation learning algorithm for Multi-Layer Perceptrons was rediscovered and the whole field took off again.
1990s The sub-field of Radial Basis Function Networks was developed.
2000s The power of Ensembles of Neural Networks and Support Vector Machines becomes apparent.

ANNs compared with Classical Symbolic AI

The distinctions can be put under three headings:
1. Level of Explanation
2. Processing Style
3. Representational Structure

These lead to a traditional set of dichotomies:
1. Sub-symbolic vs. Symbolic
2. Non-modular vs. Modular
3. Distributed representation vs. Localist representation
4. Bottom up vs. Top down
5. Parallel processing vs. Sequential processing

In practice, the distinctions are becoming increasingly blurred.

Some Current Artificial Neural Network Applications

Brain modelling
  Models of human development – help children with developmental problems
  Simulations of adult performance – aid our understanding of how the brain works
  Neuropsychological models – suggest remedial actions for brain damaged patients

Real world applications
  Financial modelling – predicting stocks, shares, currency exchange rates
  Other time series prediction – climate, weather, airline marketing tactician
  Computer games – intelligent agents, backgammon, first person shooters
  Control systems – autonomous adaptable robots, microwave controllers
  Pattern recognition – speech recognition, hand-writing recognition, sonar signals
  Data analysis – data compression, data mining, PCA, GTM
  Noise reduction – function approximation, ECG noise reduction
  Bioinformatics – protein secondary structure, DNA sequencing

Overview and Reading

1. Artificial Neural Networks are powerful computational systems consisting of many simple processing elements connected together to perform tasks analogously to biological brains.
2. They are massively parallel, which makes them efficient, robust, fault tolerant and noise tolerant.
3. They can learn from training data and generalize to new situations.
4. They are useful for brain modelling and real world applications involving pattern recognition, function approximation, prediction, …

Reading
1. Haykin: Sections 1.1, 1.8, 1.9
2. Gurney: Sections 1.1, 1.2, 1.3
3. Beale & Jackson: Sections 1.1, 1.2
4. Ham & Kostanic: Sections 1.1, 1.2
