
CS1AC16 ARTIFICIAL INTELLIGENCE SUMMARY

Neuron - the fundamental unit of nervous system tissue. Each neuron consists of a cell body (soma) that
contains the cell nucleus. Dendrites branch into a bushy network around the cell. The axon is a single
long fiber that stretches out for long distances and eventually connects to the dendrites of other cells.
The connection junction is called a synapse. Signals are conveyed from neuron to neuron by complicated
electrochemical reactions. Transmitter substances are released from the synapse and enter the
dendrite, raising or lowering the electrical potential of the cell body. When the potential reaches a
threshold, an action potential is sent down the axon, reaching synapses and releasing transmitters into
the bodies of other cells. Synapses that increase the potential are called excitatory; those that
decrease it are called inhibitory. Synaptic connections exhibit plasticity – long-term changes in the
strength of connections in response to the pattern of stimulation. Neurons can also form new
connections. These mechanisms are thought to form the basis for learning in the brain.

Artificial neural networks try to employ technological means to realize some of the characteristics of
the original biological version. The overall aim is to build an ANN by connecting together lots of
individual neuron models.
In the basic sense an artificial neuron operates in the same way as a biological neuron:
o It receives a number of signals on its inputs, each of which can be more or less influential.
It adds these signals up and compares the total with a threshold level. If the sum equals or
exceeds the threshold, the neuron fires.
o The inputs have weights associated with them. They are multiplied by their respective
weights and summed.
o The neuron may have a bias value instead of a threshold. The bias effectively acts as a
negative offset added to the weighted sum, so the neuron fires when the sum plus the bias
reaches zero.
o The output can then be multiplied by its own further weighting before being sent as
input to the next neuron.
This model of a neuron is called a perceptron. Another way of looking at it is in terms of its ability to
pigeon-hole pieces of information into particular classes – a perceptron can assess whether something
belongs to class 1 or class 2. Using only one neuron it is only possible to decide between two classes, no
matter how many inputs there are – and the problem must be linearly separable for a single neuron to
solve it.
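The weighted-sum-and-threshold behavior described above can be sketched in a few lines of Python. The weights and bias here are illustrative hand-picked values, not learned ones:

```python
# Minimal perceptron sketch: multiply inputs by weights, sum, add the
# bias, and fire (output 1) when the total reaches zero.

def perceptron(inputs, weights, bias):
    # Weighted sum of the inputs plus the (negative) bias offset
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Fire if the total reaches the threshold, otherwise stay silent
    return 1 if total >= 0 else 0

# Example: logical AND, a linearly separable two-class problem
weights = [1.0, 1.0]
bias = -1.5
print(perceptron([1, 1], weights, bias))  # 1
print(perceptron([1, 0], weights, bias))  # 0
```

With these values the neuron only fires when both inputs are 1, which is exactly a two-class decision over a linearly separable problem.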

Self-organizing neural network


 Consists of a single layer of neurons
 Usually doesn’t have a strict threshold
 The neurons are formed into a square matrix
 The idea is that a particular input pattern will excite a specific region of the array
 This network is called a feature map
 The same input signals are applied to all neurons in the same way
 The outputs are then fed back to form further inputs to each of the neurons – lateral connection
 Each of those signals will have an associated weight which can initially be set to random values
 When a particular input pattern is applied, one of the neurons will produce an output signal higher
than the rest – this neuron is selected as the winner and its connections are adjusted to make its
output even stronger
 Neurons in its vicinity also have their weights adjusted, but not as much; neurons further away
have their weights modified so that their output decreases
 The same procedure is repeated with other input patterns
 The map organizes itself so that it can monitor a set of inputs and when a particular pattern is
applied, the specific region of the map is excited
This has proven to be very successful in speech recognition.
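One training step of the winner-selection-and-adjustment procedure above can be sketched as follows. This is a simplified stand-in: the neurons sit in a flat list rather than a square matrix, the winner is chosen by distance between its weights and the input pattern rather than via lateral feedback connections, and only a small neighborhood is strengthened:

```python
import random

# Sketch of one training step of a self-organizing feature map
# (simplifying assumptions: 1-D list of neurons instead of a square
# matrix, winner picked by closest weight vector, small neighborhood).

def train_step(weights, pattern, rate=0.5, radius=1):
    # Select the winner: the neuron whose weights are closest to the input
    dists = [sum((w - x) ** 2 for w, x in zip(neuron, pattern))
             for neuron in weights]
    winner = dists.index(min(dists))
    # Strengthen the winner; neighbors are adjusted less strongly
    for i, neuron in enumerate(weights):
        distance = abs(i - winner)
        if distance <= radius:
            factor = rate / (1 + distance)  # weaker further from winner
            for j in range(len(neuron)):
                neuron[j] += factor * (pattern[j] - neuron[j])
    return winner

# Weights start at random values; two input patterns are applied repeatedly
random.seed(0)
weights = [[random.random(), random.random()] for _ in range(9)]
for _ in range(20):
    train_step(weights, [1.0, 0.0])
    train_step(weights, [0.0, 1.0])
```

After repeated presentations, distinct neurons respond most strongly to the two patterns, which is the self-organizing behavior the notes describe.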

N-tuple network – weightless network


The basic building block is a standard RAM chip. All of the signals are binary.
The input connections to the neuron are the address lines of the RAM chip and the output is the value
stored at that address.
 In learning mode, the pattern being taught is input in terms of 1s and 0s on the memory address
lines, with the appropriate values being stored
 The number of inputs used to address the RAM neuron is referred to as a tuple
 In analysis mode, the neuron is addressed with the same pattern and data is extracted
This type of network is useful for image recognition.
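A single RAM neuron of this kind can be sketched as below. The class name and method names are illustrative; the essential point is that the binary inputs act as address lines and the "knowledge" is simply what is stored at each address:

```python
# Sketch of one RAM "neuron" in an n-tuple (weightless) network.
# The n binary inputs form a memory address; learning stores a 1 at
# that address, and analysis reads back whatever is stored there.

class RamNeuron:
    def __init__(self, n_inputs):
        # One memory cell per possible input pattern, all initially 0
        self.memory = [0] * (2 ** n_inputs)

    def address(self, bits):
        # Treat the binary inputs as the RAM address lines
        return int("".join(str(b) for b in bits), 2)

    def teach(self, bits):
        # Learning mode: store a 1 at the addressed location
        self.memory[self.address(bits)] = 1

    def recall(self, bits):
        # Analysis mode: read the value stored at that address
        return self.memory[self.address(bits)]

neuron = RamNeuron(4)               # a 4-tuple neuron
neuron.teach([1, 0, 1, 1])          # teach one pattern
print(neuron.recall([1, 0, 1, 1]))  # 1 (pattern was taught)
print(neuron.recall([0, 0, 0, 0]))  # 0 (never taught)
```

Note that there are no weights anywhere, hence "weightless": learning is a memory write and analysis is a memory read.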

Evolutionary computing – a technique of working out the best (or at least a working) solution inspired
by evolution. Different solutions in one generation are mixed to produce a new, improved generation.
Many generations later, a much better solution is created.

Genetic algorithms
 Each member of a population is defined in terms of its genetic make-up, usually in binary
fashion
 To get from one generation to the next, the binary chromosome codes are mixed by crossovers
and mutations
 Some of the members need to be killed off to keep the same size of the population – they're
usually the weakest ones or those very similar to others but not as good
 The most important aspect is to decide how the individuals are going to be measured – a fitness
function that depends on the problem must be constructed
To start the algorithm, an initial population is required. It might be obtained randomly or from guesses
at the solution. The fitness of each member is then assessed. A pair is then selected for breeding – those
with higher fitness scores are more likely to be chosen. Crossover and mutation operations are applied,
as a result of which one or more offspring are produced. The new solutions are then tested for fitness
and some solutions are removed from the population. This process is repeated many times until there is
little or no change in the fitness of the best solutions.
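The loop just described can be sketched as follows. The fitness function here (counting 1-bits, often called "OneMax") is only a stand-in for a problem-specific one, truncation selection stands in for the fitness-weighted choice of parents, and whole-generation replacement stands in for killing off the weakest members:

```python
import random

# Sketch of a simple genetic algorithm on binary chromosomes.
# Assumptions: "OneMax" toy fitness, truncation selection, single-point
# crossover, per-bit mutation, generational replacement.

def fitness(chrom):
    # Toy fitness: count of 1-bits (a real one depends on the problem)
    return sum(chrom)

def crossover(a, b):
    # Single-point crossover: take the head of one parent, tail of the other
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(chrom, rate=0.05):
    # Flip each bit with a small probability
    return [1 - bit if random.random() < rate else bit for bit in chrom]

def next_generation(pop):
    # Fitter members are more likely to breed (here: only the top half)
    scored = sorted(pop, key=fitness, reverse=True)
    parents = scored[:len(pop) // 2]
    children = []
    while len(children) < len(pop):
        a, b = random.sample(parents, 2)
        children.append(mutate(crossover(a, b)))
    # Population size stays stable: children replace the old generation
    return children

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
for _ in range(30):
    pop = next_generation(pop)
best = max(pop, key=fitness)
```

After a few dozen generations the best chromosome is at or near the all-ones optimum, although, as noted below, a GA can instead converge on a local maximum for harder fitness landscapes.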
Comments:
 Population size usually remains stable
 This could lead to the problem of lack of diversity
 In some cases GAs can be operated adaptively to deal with changing circumstances – in this case
diversity is important
 A GA will sometimes not reach the globally best solution but instead converge on a local maximum

Agent methods
Agents are simple individual entities. Through a collection of these, overall intelligent behavior can
emerge. In an agent-based problem-solving approach, a complex problem is broken down into a number
of smaller problems. Agents can then be used to find solutions to those small problems and combine
them together to produce a final solution. The advantage of this is that each agent only keeps
information about its own small problem.
Some agents have fixed actions, while some are flexible and adaptive. Some are autonomous and others
are dependent on decisions by others. Most are responsive to the environment in which they exist.
In a particular design all agents may have the same power and capabilities but it may also be that some
can override decisions of others – subsumption architecture.
A large proportion of financial transactions is carried out by software agents.

There can be situations where multiple agents are required to perform a task – they may operate in a
cooperative fashion (each providing part of a solution) or competitively.
