This document discusses different types of artificial neural network topologies. It describes feedforward neural networks, including single layer and multilayer feedforward networks. It also describes recurrent neural networks, which differ from feedforward networks in having at least one feedback loop. Single layer networks have an input and output layer, while multilayer networks have one or more hidden layers between the input and output layers. Recurrent networks can learn temporal patterns due to their internal memory capabilities.
3. Artificial Neural Network (ANN)
An artificial neural network is defined as a data processing
system consisting of a large number of simple, highly
interconnected processing elements (artificial neurons) in
an architecture inspired by the structure of the cerebral
cortex of the brain. (Tsoukalas and Uhrig, 1997)
4. Neural Network Architectures
Generally, an ANN structure can be represented using a directed graph. A
graph G is an ordered 2-tuple (V, E) consisting of a set V of vertices and a set
E of edges.
When each edge is assigned an orientation, the graph is directed and is
called a directed graph or digraph.
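The digraph view above can be sketched directly in code. The vertex names and the tiny 2-2-1 topology below are illustrative assumptions, not taken from the slides:

```python
# A minimal sketch of a small feedforward net as a digraph G = (V, E).
# Vertices are neurons; each directed edge is a weighted synaptic link.
V = {"x1", "x2", "h1", "h2", "y"}
E = {("x1", "h1"), ("x1", "h2"), ("x2", "h1"), ("x2", "h2"),
     ("h1", "y"), ("h2", "y")}

def successors(v, edges):
    """Vertices reachable from v along one directed edge."""
    return {b for (a, b) in edges if a == v}

print(successors("x1", E))  # the hidden neurons fed by input x1
```

Because every edge is oriented, signal flow is unambiguous: a feedforward network corresponds to an acyclic digraph, while a recurrent network's digraph contains at least one cycle.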
There are several classes of NNs, classified according to their learning
mechanisms. However, we identify three fundamentally different classes of
networks:
Single layer feedforward network
Multilayer feedforward network
Recurrent network
All three classes employ the digraph structure for their representation.
5. Feed-forward neural networks
These are the most common type of neural network in practical applications.
The first layer is the input and the last layer is the output.
If there is more than one hidden layer, we call them “deep” neural networks.
They compute a series of transformations that change the similarities
between cases.
6. Single Layer Feedforward Network
This type of network comprises two layers, namely the input layer and the
output layer.
The input layer neurons receive the input signals and the output layer
neurons emit the output signals.
The synaptic links carrying the weights connect every input neuron to the
output neurons, but not vice versa.
Such a network is said to be feedforward in type or acyclic in nature.
Despite the two layers, the network is termed single layer since it is the
output layer alone which performs computation.
The input layer merely transmits the signals to the output layer.
Hence the name single layer feedforward network.
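The computation described above can be sketched as follows. The 3-input, 2-output sizes, the sigmoid activation, and the bias term are assumptions for illustration; the slides do not specify them:

```python
import numpy as np

# Single layer feedforward sketch: the input layer only passes signals
# through, and the output layer alone computes.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))   # synaptic weights, one row per output neuron
b = np.zeros(2)               # biases (assumed; the slides omit them)

def forward(x):
    # Weighted sum at each output neuron, then a sigmoid activation.
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

y = forward(np.array([1.0, 0.5, -0.2]))
print(y)  # one activation per output neuron
```

Note the acyclic structure: signals flow only from input to output, never back.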
8. Multilayer Feedforward Network
This network, as its name indicates, is made up of multiple layers.
Architectures of this class, besides possessing an input and an output layer,
also have one or more intermediary layers called hidden layers.
The computational units of the hidden layer are known as hidden neurons or
hidden units.
The hidden layer aids in performing useful intermediary computation before
directing the input to the output layer.
The input layer neurons are linked to the hidden layer neurons and the
weights on these links are referred to as input-hidden layer weights.
Again, the hidden layer neurons are linked to the output layer neurons and
the corresponding weights are referred to as hidden-output layer weights.
9. Multilayer Feedforward Network(con’t)
A multilayer feedforward network with l input neurons, m1 neurons in the first
hidden layer, m2 neurons in the second hidden layer, and n output neurons in
the output layer is written as l – m1 – m2 – n.
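An l – m1 – m2 – n network can be sketched as a chain of layer transforms. The concrete sizes (3 – 4 – 4 – 2) and the tanh nonlinearity below are illustrative assumptions:

```python
import numpy as np

# Sketch of an l - m1 - m2 - n multilayer feedforward network,
# here 3 - 4 - 4 - 2. Each layer applies weights, a bias, and a
# nonlinearity, then feeds the next layer.
rng = np.random.default_rng(1)
sizes = [3, 4, 4, 2]  # l, m1, m2, n
weights = [rng.normal(size=(m, l)) for l, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    for W, b in zip(weights, biases):
        x = np.tanh(W @ x + b)  # tanh chosen as an example nonlinearity
    return x

out = forward(np.array([0.2, -0.1, 0.7]))
print(out)  # n = 2 output activations
```

The hidden layers are what perform the "useful intermediary computation": with nonlinear units they let the network approximate nonlinear mappings that a single layer cannot.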
10. Multilayer Feedforward Network(con’t)
• Applications:
Pattern classification
Pattern matching
Function approximation
Any nonlinear mapping is possible with nonlinear processing elements.
11. Multi-layer Networks: Issues
How many layers are sufficient?
How many processing elements are needed in (each)
hidden layer?
How much data needed for training?
12. Examples of Multi-layer NNs
Backpropagation
Neocognitron
Probabilistic NN (radial basis function NN)
Cauchy machine
Radial basis function networks
13. Recurrent Network
These networks differ from the feedforward architecture in the sense that there
is at least one feedback loop.
Not necessarily stable
Symmetric connections can ensure stability
Why use recurrent networks?
Can learn temporal patterns (time series or oscillations)
Biologically realistic
The majority of connections to neurons in the cerebral cortex are feedback connections from local
or distant neurons
Examples
Hopfield network
Boltzmann machine (Hopfield-like net with input & output units)
Recurrent backpropagation networks: for small sequences, unfold network in time
dimension and use backpropagation learning
14. Recurrent Networks (con’t)
Example
Elman networks
Partially recurrent
Context units keep internal
memory of past inputs
Fixed context weights
Backpropagation for learning
E.g., can disambiguate the sequences ABC
and CBA
[Figure: Elman network]
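The Elman-style recurrent step above can be sketched as follows. The layer sizes and the tanh activation are assumptions; the key idea from the slide is that context units hold a 1:1 copy of the previous hidden state, giving the network memory of past inputs:

```python
import numpy as np

# Elman-style partially recurrent network sketch.
rng = np.random.default_rng(2)
n_in, n_hid, n_out = 3, 5, 3
W_xh = rng.normal(size=(n_hid, n_in))   # input -> hidden weights
W_ch = rng.normal(size=(n_hid, n_hid))  # context -> hidden weights
W_hy = rng.normal(size=(n_out, n_hid))  # hidden -> output weights

def run(sequence):
    context = np.zeros(n_hid)  # internal memory, initially empty
    outputs = []
    for x in sequence:
        hidden = np.tanh(W_xh @ x + W_ch @ context)
        outputs.append(W_hy @ hidden)
        context = hidden  # fixed 1:1 copy of hidden state into context
    return outputs

# Because the context carries history, the same symbols in a different
# order (ABC vs CBA) leave the network in different final states.
A, B, C = np.eye(3)
abc = run([A, B, C])[-1]
cba = run([C, B, A])[-1]
print(np.allclose(abc, cba))
```

A plain feedforward network fed one symbol at a time could not make this distinction, since it has no state linking successive inputs.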
15. Recurrent Network(con’t)
Advantages
Unlike feedforward neural networks, RNNs can use their internal memory
to process arbitrary sequences of inputs.
This makes them applicable to tasks such as unsegmented
connected handwriting recognition or speech recognition.
They have the ability to remember information in their hidden state for a
long time.
But it's very hard to train them to use this potential.
Disadvantages
Most RNNs used to have scaling issues:
they could not be easily trained for large numbers of neuron units nor for
large numbers of input units.
17. References
Neural Networks, Fuzzy Logic, and Genetic Algorithms: Synthesis and
Applications by S. Rajasekaran and G. A. Vijayalakshmi Pai
Neural Networks: A Comprehensive Foundation by Simon Haykin (second
edition)
https://en.wikipedia.org/wiki/Recurrent_neural_network (2016-05-14)
https://class.coursera.org/neuralnets-2012-001/lecture