Backpropagation Networks
Introduction to Backpropagation
- In 1969 a method for learning in multi-layer networks, Backpropagation, was invented by Bryson and Ho.
- The Backpropagation algorithm is a sensible approach for dividing the contribution of each weight to the error.
- It works basically the same way as perceptron learning.
Backpropagation Learning Principles:
Hidden Layers and Gradients
There are two differences in the updating rule:
1) The activation of the hidden unit is used instead of the activation of the input value.
2) The rule contains a term for the gradient of the activation function.
Backpropagation Network Training
• 1. Initialize the network with random weights
• 2. For all training cases (called examples):
– a. Present training inputs to the network and calculate the output
– b. For all layers (starting with the output layer, back to the input layer):
• i. Compare the network output with the correct output (error function)
• ii. Adapt the weights in the current layer
Backpropagation Learning Details
• Method for learning weights in feed-forward (FF)
nets
• Can’t use Perceptron Learning Rule
– no teacher values are possible for hidden units
• Use gradient descent to minimize the error
– propagate deltas to adjust for errors, backward from outputs to hidden layers to inputs
Backpropagation Algorithm – Main
Idea – error in hidden layers
The ideas of the algorithm can be summarized as follows:
1. Compute the error term for the output units using the observed error.
2. From the output layer, repeat
- propagating the error term back to the previous layer, and
- updating the weights between the two layers,
until the earliest hidden layer is reached.
Backpropagation Algorithm
• Initialize weights (typically random!)
• Keep doing epochs
– For each example e in the training set do
• forward pass to compute
– O = neural-net-output(network, e)
– miss = (T - O) at each output unit
• backward pass to calculate deltas to weights
• update all weights
– end
• until tuning set error stops improving
(The backward pass is explained on the next slide; the forward pass was explained earlier.)
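As a concrete illustration, here is a minimal runnable sketch of this epoch loop with tuning-set early stopping, demonstrated on a single-layer sigmoid net and toy data (the net, data, and parameter values are all assumptions for the sketch; the hidden-layer backward pass is detailed on later slides):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(W, examples, targets, tune_x, tune_t, alpha=0.5, max_epochs=500):
    """Epoch loop from the slides, for a single-layer sigmoid net."""
    best = float("inf")
    for epoch in range(max_epochs):
        for e, T in zip(examples, targets):
            O = sigmoid(e @ W)                            # forward pass
            miss = T - O                                  # miss = (T - O) at each output unit
            W += alpha * np.outer(e, miss * O * (1 - O))  # backward pass + weight update
        err = float(np.sum((tune_t - sigmoid(tune_x @ W)) ** 2))
        if err >= best:                                   # until tuning set error stops improving
            return W
        best = err
    return W

rng = np.random.default_rng(0)
X = rng.random((20, 3))
T = (X[:, :1] > 0.5).astype(float)   # toy labeled data
W = rng.normal(scale=0.1, size=(3, 1))
W = train(W, X[:15], T[:15], X[15:], T[15:])
```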
Backward Pass
• Compute deltas to weights
– from hidden layer
– to output layer
• Without changing any weights (yet),
compute the actual contributions
– within the hidden layer(s)
– and compute deltas
Gradient Descent
• Think of the N weights as a point in an N-
dimensional space
• Add a dimension for the observed error
• Try to minimize your position on the “error
surface”
Error Surface
(Figure: error as a function of the weights in multidimensional space; the axes are the weights and the error.)
Gradient
• Trying to make the error decrease the fastest
• Compute:
• GradE = [dE/dw1, dE/dw2, . . ., dE/dwn]
• Change the i-th weight by
• deltawi = -alpha * dE/dwi
• We need a derivative!
• The activation function must be continuous, differentiable, non-decreasing, and easy to compute
(Figure: the derivatives of the error with respect to the weights are used to compute the deltas.)
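As a small worked sketch of the update rule, the step below uses a finite-difference gradient as a stand-in for dE/dw on a toy error function (the function, alpha, and eps are assumptions for illustration):

```python
import numpy as np

def gradient_step(w, error_fn, alpha=0.5, eps=1e-6):
    """One step of deltaw_i = -alpha * dE/dw_i, with a finite-difference gradient."""
    grad = np.zeros_like(w)
    base = error_fn(w)
    for i in range(len(w)):
        w_plus = w.copy()
        w_plus[i] += eps
        grad[i] = (error_fn(w_plus) - base) / eps  # approximate dE/dw_i
    return w - alpha * grad                        # move against the gradient

# Toy example: E(w) = w1^2 + w2^2, minimized at the origin.
w = np.array([1.0, -2.0])
for _ in range(25):
    w = gradient_step(w, lambda v: float(np.sum(v ** 2)))
print(w)  # approaches [0, 0]
```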
Can’t use LTU
• To effectively assign credit / blame to units in hidden layers, we want to look at the first derivative of the activation function
– the threshold (step) function of an LTU has a zero derivative everywhere it is defined, so it gives no gradient signal
• The sigmoid function is easy to differentiate and easy to compute forward
(Figure: linear threshold unit vs. sigmoid function.)
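A small sketch of the sigmoid and its derivative; the convenient property is that the derivative can be computed from the already-computed forward value:

```python
import numpy as np

def sigmoid(x):
    """g(x) = 1 / (1 + e^(-x)): continuous, differentiable, non-decreasing."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(g_x):
    """g'(x) = g(x) * (1 - g(x)), computed from the forward value g(x) alone."""
    return g_x * (1.0 - g_x)

a = sigmoid(0.3)
print(a, sigmoid_deriv(a))  # forward value and its derivative, no extra exp needed
```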
Updating hidden-to-output
• We have teacher-supplied desired values
• deltawji = α * aj * (Ti - Oi) * g’(ini)
= α * aj * (Ti - Oi) * Oi * (1 - Oi)
– for the sigmoid the derivative is g’(x) = g(x) * (1 - g(x))
(Here α is the learning rate, g’ the derivative, and (Ti - Oi) the miss; the first line is the general formula with the derivative, and the second specializes it to the sigmoid.)
Updating interior weights
• Layer k units provide values to all layer k+1 units
• “miss” is the sum of misses from all units in layer k+1
• missj = Σi [ ai (1 - ai) (Ti - ai) wji ], summed over the units i of layer k+1 (the ai are the layer-k+1 activations)
• weights coming into this unit are adjusted based on their contribution
• deltawkj = α * Ik * aj * (1 - aj) * missj
(The deltas are computed layer by layer.)
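A minimal sketch of both update rules for one training pair on a single-hidden-layer sigmoid network. Variable names mirror the slides; the code itself is an illustrative assumption, not the original course software:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_update(I, T, W_in, W_out, alpha=0.1):
    """I: inputs, T: targets, W_in[k, j]: input k -> hidden j,
    W_out[j, i]: hidden j -> output i."""
    # Forward pass
    a = sigmoid(I @ W_in)                  # hidden activations a_j
    O = sigmoid(a @ W_out)                 # outputs O_i

    # Output deltas: (T_i - O_i) * O_i * (1 - O_i)
    delta_out = (T - O) * O * (1 - O)

    # miss_j = sum_i [ delta_i * w_ji ], computed before any weights change
    miss = W_out @ delta_out

    # deltaw_ji = alpha * a_j * (T_i - O_i) * O_i * (1 - O_i)
    W_out += alpha * np.outer(a, delta_out)
    # deltaw_kj = alpha * I_k * a_j * (1 - a_j) * miss_j
    W_in += alpha * np.outer(I, a * (1 - a) * miss)
    return W_in, W_out

rng = np.random.default_rng(0)
I, T = rng.random(3), np.array([1.0, 0.0])
W_in = rng.normal(scale=0.5, size=(3, 4))
W_out = rng.normal(scale=0.5, size=(4, 2))
W_in, W_out = backprop_update(I, T, W_in, W_out)
```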
How do we pick α?
1. Tuning set, or
2. Cross validation, or
3. Small for slow, conservative learning
How many hidden layers?
• Usually just one (i.e., a 2-layer net)
• How many hidden units in the layer?
– Too few ==> can’t learn
– Too many ==> poor generalization
How big a training set?
• Determine your target error rate, e
• The success rate is 1 - e
• A typical training set is approx. n/e examples, where n is the number of weights in the net
• Example:
– e = 0.1, n = 80 weights
– training set size: 80 / 0.1 = 800
– a net trained until it classifies 95% of the training set correctly should typically produce about 90% correct classification on the test set
Examples of Backpropagation Learning
(Figure: learning curves for the restaurant problem. The error decreases with the number of epochs, but the NN was worse than the decision tree; the decision tree is still better for the restaurant example.)
Examples of Backpropagation Learning
(Figure: on the majority example the perceptron is better; on the restaurant example the decision tree (DT) is better.)
Backpropagation Learning Math
(Figure: the learning equations; see the next slide for explanation.)
Visualization of Backpropagation Learning
(Figure: backprop at the output layer.)
Bias Neurons in Backpropagation Learning
(Figure: a bias neuron in the input layer.)
Software for Backpropagation Learning
(Figure: code screenshot.) Given the training pairs, the routine runs the network forward (as explained earlier), calculates the difference to the desired output, and calculates the total error; this routine calculates the error for backpropagation.
Software for Backpropagation Learning, continuation
(Figure: code screenshot, continued.) The routine updates the output weights, calculates the hidden difference values, updates the input weights, and returns the total error. Note that this version does not use alpha, the learning rate.
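The original routine exists only as screenshots; a minimal reconstruction of what the annotations describe might look like the sketch below (all names and details are assumptions, and, like the slide's version, no learning rate is applied):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_on_pair(I, T, W_in, W_out):
    """Process one training pair and return the total error (no alpha used)."""
    a = sigmoid(I @ W_in)                   # run network forward
    O = sigmoid(a @ W_out)
    diff = T - O                            # difference to desired output
    total_error = float(np.sum(diff ** 2))  # total error
    delta_out = diff * O * (1 - O)
    hidden_diff = (W_out @ delta_out) * a * (1 - a)  # hidden difference values
    W_out += np.outer(a, delta_out)         # update output weights
    W_in += np.outer(I, hidden_diff)        # update input weights
    return total_error

I, T = np.array([0.2, 0.9]), np.array([1.0])
W_in, W_out = np.full((2, 3), 0.1), np.full((3, 1), 0.1)
print(train_on_pair(I, T, W_in, W_out))
```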
The general Backpropagation Algorithm for updating weights in a multilayer network
(Figure: pseudocode.) In outline:
1. Go through all examples.
2. Run the network to calculate its output for this example.
3. Compute the error in the output.
4. Update the weights to the output layer.
5. Compute the error in each hidden layer.
6. Update the weights in each hidden layer.
7. Repeat until convergent; return the learned network.
Here, unlike the previous routine, we use alpha, the learning rate.
Examples and Applications of ANN
Neural Network in Practice
NNs are used for classification and function approximation or mapping problems where:
- some imprecision can be tolerated,
- lots of training data are available, and
- hard and fast rules cannot easily be applied.
NETalk (1987)
• Maps character strings into phonemes so they can be pronounced by a computer
• A neural network was trained to pronounce each letter of a word in a sentence, given the three letters before and the three letters after it in a window
• The output was the correct phoneme
• Results
– 95% accuracy on the training data
– 78% accuracy on the test set
Other Examples
• Neurogammon (Tesauro & Sejnowski, 1989)
– Backgammon learning program
• Speech Recognition (Waibel, 1989)
• Character Recognition (LeCun et al., 1989)
• Face Recognition (Mitchell)
ALVINN
• Steers a van down the road
– 2-layer feedforward network
• using backpropagation for learning
– Raw input is a 480 x 512 pixel image, 15 times per second
– The color image is preprocessed into 960 input units
– 4 hidden units
– 30 output units, each a steering direction
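For a sense of scale, the sketch below runs one forward pass through a network with ALVINN's reported shape (960 inputs, 4 hidden units, 30 outputs). It illustrates the architecture only; the weights, activation, and preprocessing are assumptions, not ALVINN's actual code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(960, 4))   # 960 preprocessed inputs -> 4 hidden units
W_out = rng.normal(scale=0.1, size=(4, 30))   # 4 hidden units -> 30 steering directions

frame = rng.random(960)                       # stand-in for one preprocessed camera frame
steering = sigmoid(sigmoid(frame @ W_in) @ W_out)
print(int(np.argmax(steering)))               # index of the strongest steering direction
```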
Neural Network Approaches
ALVINN - Autonomous Land Vehicle In a Neural Network
Learning on-the-fly
• ALVINN learned as the vehicle traveled
– initially by observing a human driving
– then from its own driving, by watching for future corrections
– it never saw bad driving
• so it didn’t know what was dangerous or incorrect
• it computes alternate views of the road (rotations, shifts, and fill-ins) to use as “bad” examples
– it keeps a buffer pool of 200 older examples to avoid overfitting to only the most recent images
Feed-forward vs. Interactive
Nets
• Feed-forward
– activation propagates in one direction
– We usually focus on this
• Interactive
– activation propagates forward & backwards
– propagation continues until equilibrium is reached in
the network
– We do not discuss these networks here; training is complex and they may be unstable.
Ways of learning with an ANN
• Add nodes & connections
• Subtract nodes & connections
• Modify connection weights
– current focus
– can simulate first two
• I/O pairs:
– given the inputs, what should the output be?
[“typical” learning problem]
More Neural Network Applications
- May provide a model for massive parallel computation.
- May be a more successful approach to “parallelizing” traditional serial algorithms.
- Can compute any computable function.
- Can do everything a normal digital computer can do.
- Can do even more, under some impractical assumptions.
Neural Network Approaches to driving
- Developed in 1993.
- Performs driving with Neural Networks.
- An intelligent VLSI image sensor for road following.
- Learns to filter out image details not relevant to driving.
(Figure: network with input units, a hidden layer, and output units.)
• Use special hardware:
– ASIC
– FPGA
– analog
Neural Network Approaches
(Figure: input array, hidden units, and output units.)
Actual Products Available
Ex1. Enterprise Miner:
- Single multi-layered feed-forward neural networks.
- Provides business solutions for data mining.
Ex2. Nestor:
- Uses the Nestor Learning System (NLS).
- Several multi-layered feed-forward neural networks.
- Intel has made such a chip, the NE1000, in VLSI technology.
Ex1. Software tool - Enterprise Miner
- Based on the SEMMA (Sample, Explore, Modify, Model, Assess) methodology.
- Statistical tools include:
clustering, decision trees, linear and logistic regression, and neural networks.
- Data preparation tools include:
outlier detection, variable transformation, random sampling, and partitioning of data sets (into training, testing and validation data sets).
Ex 2. Hardware Tool - Nestor
- With low connectivity within each layer.
- Minimized connectivity within each layer results in rapid training and efficient memory utilization: ideal for VLSI.
- Composed of multiple neural networks, each specializing in a subset of information about the input patterns.
- Real-time operation without the need for special computers or custom hardware DSP platforms.
• Software exists.
Problems with using ANNs
1. Insufficiently characterized development process compared with conventional software
– What are the steps to create a neural network?
– How do we create neural networks in a repeatable and predictable manner?
2. Absence of quality assurance methods for neural network models and implementations
– How do I verify my implementation?
Solving Problem 1 – The Steps to Create an ANN
Define the process of developing neural networks:
1. Formally capture the specifics of the problem in a document based on a template
2. Define the factors/parameters for creation
– Neural network creation parameters
– Performance requirements
3. Create the neural network
4. Get feedback on performance
Neural Network Development Process
Problem Specification Phase
• Some factors to define in problem specification:
– Type of neural networks (based on experience or
published results)
– How to collect and transform problem data
– Potential input/output representations
– Training & testing method and data selection
– Performance targets (accuracy and precision)
• Most important output is the ranked collection of
factors/parameters
Problem 2 – How to create a Neural Network
• Predictability (with regard to resources)
– Depending on creation approach used, record time
for one iteration
– Use timing to predict maximum and minimum times
for all of the combinations specified
• Repeatability
– Relevant information must be captured in problem
specification and combinations of parameters
Problem 3 - Quality Assurance
• Specification of generic neural network software
(models and learning)
• Prototype of specification
• Comparison of a given implementation with
specification prototype
• Allows practitioners to create arbitrary neural
networks verified against models
Two Methods for Comparison
• Direct comparison of outputs:
20-10-5 network (with particular connections and input)
Prototype:      <0.123892, 0.567442, 0.981194, 0.321438, 0.699115>
Implementation: <0.123892, 0.567442, 0.981194, 0.321438, 0.699115>
• Verification of weights generated by the learning algorithm:
20-10-5 network | Iteration 100  | Iteration 200  | ... | Iteration n
Prototype       | Weight state 1 | Weight state 2 | ... | Weight state n
Implementation  | Weight state 1 | Weight state 2 | ... | Weight state n
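A minimal sketch of the direct output comparison (the tolerance value is an assumption; the slide shows exact agreement):

```python
import numpy as np

def outputs_match(prototype_out, implementation_out, tol=1e-6):
    """Direct comparison of outputs: element-wise agreement within a tolerance."""
    return np.allclose(prototype_out, implementation_out, atol=tol)

proto = np.array([0.123892, 0.567442, 0.981194, 0.321438, 0.699115])
impl = np.array([0.123892, 0.567442, 0.981194, 0.321438, 0.699115])
print(outputs_match(proto, impl))  # True
```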
Further Work on improvements
• Practitioners to use the development process, or at least to document it in the problem specification
• Feedback from the neural network development community on the content of the problem specification template
• Collect problem specifications and analyse them to look for commonalities in problem domains and to improve predictability (e.g. control)
• More verification of the specification prototype
Further Work (2)
• Translation methods for formal specification
• Extend formal specification to new types
• Fully prove aspects of the specification
• Cross-discipline data analysis methods (e.g. ICA, statistical analysis)
• Implementation of learning on distributed
systems
– Peer-to-peer network systems (farm each
combination of parameters to a peer)
• Remain unfashionable
Summary
- A neural network is a computational model that simulates some properties of the human brain.
- The connections and nature of the units determine the behavior of a neural network.
- Perceptrons are feed-forward networks that can only represent linearly separable functions.
Summary
- Given enough units, any function can be represented by multi-layer feed-forward networks.
- Backpropagation learning works on multi-layer feed-forward networks.
- Neural networks are widely used in developing artificial learning systems.
References
- Russell, S. and P. Norvig (1995). Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall.
- Sarle, W.S., ed. (1997). Neural Network FAQ, part 1 of 7: Introduction. Periodic posting to the Usenet newsgroup comp.ai.neural-nets. URL: ftp://ftp.sas.com/pub/neural/FAQ.html
Sources
Eddy Li, Eric Wong, Martin Ho, Kitty Wong
Editor's Notes
1. Teacher values were Gaussian with variance 10.