Cerebellar Model Articulation Controller
By: Zahra Sadeghi
Introduction
• Originally proposed by James Albus (1975)
• A class of sparse, coarse-coded associative memory algorithms that mimic the functionality of the mammalian cerebellum
• A localized three-layer feedforward form of neural network
A 3-layer structure
• The first layer: single-input, single-output buffers acting as arrays of feature-detecting neurons. Each neuron is a threshold unit: it outputs a one if the input is within a limited range. When an input is presented at the input layer, a fixed number of neurons give out ones and the others give zeros.
• The second layer: dual-input AND gates. Each gate performs a logic AND operation on its related inputs from layer 1.
• The third layer: OR gates, i.e. a summation unit that computes the weighted sum of the active gates and produces the output (see the sketch below).
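A minimal Python sketch of these three layers, assuming scalar inputs and hand-picked threshold ranges; all names here are illustrative, not from the slides.

```python
import numpy as np

def layer1(x, ranges):
    """Threshold units: each outputs 1 iff x falls inside its range."""
    return np.array([1 if lo <= x < hi else 0 for lo, hi in ranges])

def layer2(a1, a2):
    """Dual-input AND gates: gate (i, j) fires iff a1[i] AND a2[j]."""
    return np.outer(a1, a2).ravel()

def layer3(gates, weights):
    """Summation unit: weighted sum of the active gates."""
    return float(gates @ weights)

ranges = [(0, 1), (1, 2), (2, 3)]
a1, a2 = layer1(1.5, ranges), layer1(2.5, ranges)   # [0,1,0] and [0,0,1]
y = layer3(layer2(a1, a2), np.ones(9))              # -> 1.0, one gate active
```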
Resolution
• Resolution = the number of quantization steps.
[Figure: the inputs y1 and y2, each normalized to the range 0..1, are quantized into cells Z1,i and Z2,j.]
Quantization
• Example: quantization steps = 15 (see the one-liner below).
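A hedged one-liner matching the figure's 15-step quantization, assuming inputs normalized to [0, 1):

```python
def quantize(y, steps=15):
    """Map y in [0, 1) to an integer quantization step in 0..steps-1."""
    return min(int(y * steps), steps - 1)

quantize(0.5)   # -> 7, the middle step of 0..14
```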
Association unit (AU)
• An AU = a lookup table.
[Figure: AU cells overlaid on the quantized y1 and y2 axes, each spanning the range 0..1.]
• The AU tables store one weight value in each cell. Each AU has cells which are na times larger than the input quantization cell size, and which are also displaced along each axis by some constant.
• Each layer-2 neuron has a receptive field that is na × na units in size.
Number of AUs
[Figure slides: the displaced AU tables tile the input space; there are na AU tables in total.]
Displacement
• Good overlay displacements are achieved when na is prime and equal to 2·ny + 1.
• Bad overlay displacements are achieved when na is divisible by 6 (see the helper below).
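A small helper restating the displacement rule above; the "intermediate" label for all remaining cases is my own, not from the slides.

```python
def overlay_quality(na, ny):
    """Classify an overlay count na for an ny-input CMAC by the rule above."""
    prime = na > 1 and all(na % k != 0 for k in range(2, int(na ** 0.5) + 1))
    if prime and na == 2 * ny + 1:
        return "good"
    if na % 6 == 0:
        return "bad"
    return "intermediate"

overlay_quality(13, 6)   # -> 'good' (13 is prime and equals 2*6 + 1)
overlay_quality(12, 6)   # -> 'bad'  (12 is divisible by 6)
```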
Table index
• Each AU computes an index into its lookup table from the quantized, displaced input (a sketch follows).
• na = the number of association neurons.
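A hedged sketch of one common indexing scheme (Albus-style diagonal displacement, with AU k displaced by k steps); the slides do not give the exact formula, so d_k = k is an assumption.

```python
def au_index(q, k, na):
    """q: quantized input step; k: AU number in 0..na-1; na steps per cell."""
    return (q + k) // na

[au_index(7, k, na=4) for k in range(4)]   # -> [1, 2, 2, 2]
```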
How a CMAC network operates
• An example with two inputs (N = 2) and input vector s = [s1, s2].
• Resolution elements: Qi = 6 for i = 1, 2.
• Quantizing functions: K = 4.
• Point A: sA = [2.35, 2.36].
• Point B, a nearby point: sB = [2.65, 2.56] (the sketch below counts the cells the two points share).
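A hedged sketch of the K = 4 overlapping quantizing functions in this example; the unit cell width and the k/K offsets are assumptions, not values given on the slides.

```python
def active_cells(s, K=4, width=1.0):
    """One activated cell (index tuple) per quantizing function."""
    return [tuple(int((si + k * width / K) // width) for si in s)
            for k in range(K)]

cells_A = active_cells([2.35, 2.36])
cells_B = active_cells([2.65, 2.56])
shared = sum(a == b for a, b in zip(cells_A, cells_B))
print(shared)   # -> 3: the nearby points A and B share 3 of their 4 cells
```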
Learning
• Learning is done in the third layer.
• Supervised learning: the output is compared with the desired output, and the CMAC uses the difference to adjust its weights to get the desired response.
• Similar to the backpropagation learning algorithm used in the MLP model, except that:
1) it has only one layer of weights to modify;
2) it does not do any backward propagation of errors the way the MLP model does.
Training
• Least Mean Square (LMS): weights are adjusted based on the desired and actual outputs.
• Each of the na referenced weights (one per lookup table) is adjusted so that the output comes closer to the desired value (see the sketch below).
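A minimal sketch of the LMS rule described above. Here `active` holds the na weight addresses referenced by the current input (one per lookup table); the addresses and the learning rate `beta` are assumptions for illustration.

```python
import numpy as np

def cmac_output(weights, active):
    return sum(weights[i] for i in active)

def lms_update(weights, active, target, beta=0.5):
    error = target - cmac_output(weights, active)
    for i in active:                       # spread the correction equally
        weights[i] += beta * error / len(active)

weights = np.zeros(1000)
active = [3, 127, 540, 888]                # hypothetical table addresses
lms_update(weights, active, target=1.0)
print(cmac_output(weights, active))        # -> 0.5 after one step
```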
The input/desired-output pairs (training points) are normally presented to the CMAC in one of two ways:
• Random order, used for function approximation:
– a selection of training points from the function can be presented in random order to minimize learning interference.
• Trajectory order, used in an online controller:
– the training point inputs will probably vary gradually, as they are tied to the system sensors;
– each training point will therefore be close to the previous one in the input space.
The number of CMAC weights
• The naïve approach, storing each weight as a separate number in a large floating-point array, is not possible.
– The maximum value of the table index pij is bounded (if the largest displacements are assumed).
• The total number of CMAC weights is the number of AU tables times the number of weights stored per AU.
• Increasing na reduces the number of weights but increases the number of weight tables.
• For typical values of res, na and ny, the number of weights is huge.
• Example: suppose res = 200, na = 10 and ny = 6. The number of weights is 640 million. At four bytes per floating-point number this would require 2441 megabytes of memory, which is presently unreasonable (the arithmetic is checked below).
• The problem is that the number of weights is exponential in the number of input dimensions ny.
• One way to reduce the number of weights is to use hashing (assuming that resj is sufficiently large).
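Reproducing the slide's arithmetic, under the assumption that each AU stores about (res / na)^ny weights and that there are na AU tables:

```python
res, na, ny = 200, 10, 6
per_au = (res // na) ** ny       # 20**6 = 64,000,000 weights per table
total = na * per_au              # 640,000,000 weights in all
mb = total * 4 / 2**20           # megabytes at four bytes per weight
print(total, round(mb))          # -> 640000000 2441
```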
Hashing
• Idea: take an address into a large "virtual" memory and map it into an address into a smaller physical memory.
• Example: suppose a program is required to quickly store and recall 100 seven-digit telephone numbers. Either
1) use the telephone number as an index into an array of 10^7 entries, or
2) use an array of only (say) 251 entries, with the index computed by the modulo function, which hashes the large phone-number address into a smaller index in the range 0..250 (coded below).
• This makes far more efficient use of memory.
• Hash collision: some of the phone numbers will hash to the same physical memory location.
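The telephone-number example above, as code:

```python
TABLE_SIZE = 251                         # size of the physical array

def phone_hash(number):
    """Hash a 7-digit number into an index in the range 0..250."""
    return number % TABLE_SIZE

print(phone_hash(5551234))               # some index in 0..250
print(phone_hash(5551234 + TABLE_SIZE))  # the same index: a hash collision
```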
How the CMAC uses hashing
– Hash collisions are ignored: if any two weights with the same physical address are written, the second one simply replaces the first.
– This is tolerable in practice because sensor values may be correlated with one another, and
– only a small subset of all possible input trajectories is ever seen.
Drawbacks
• Some cells are still not thoroughly visited:
– it is difficult for hash-coded CMACs to achieve a 100% memory utilization rate.
• When collisions occur, hash coding algorithms treat the weights stored in the colliding memory locations as valid.
• Therefore, most CMACs with hash coding algorithms may retrieve, output, and update irrelevant data.
• From the viewpoint of system control, this may cause CMACs to introduce control noise into the control process.
Training interference
• Points A and B have overlapping local generalization (LG) areas: they share some weights.
• Training at B will affect the value of xA, because v of its na weights will be altered.
• If v = 0, there is obviously no interference (there is no overlap).
• If v = na, A and B are the same point and the interference is maximized.
– This can be particularly problematic during trajectory training, because successive training points are then very likely to be close to each other.
– The problem can be reduced with a lower learning rate (a short demo follows).
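A self-contained demo of the interference effect, using the same LMS rule as the training sketch; the weight addresses are hypothetical, with A and B sharing v = 3 of their na = 4 weights.

```python
import numpy as np

def output(w, active):
    return sum(w[i] for i in active)

def train(w, active, target, beta=1.0):
    err = target - output(w, active)
    for i in active:
        w[i] += beta * err / len(active)

A = [3, 127, 540, 888]
B = [3, 127, 540, 901]       # nearby point: 3 shared addresses

w = np.zeros(1000)
train(w, A, target=1.0)      # A now maps exactly to 1.0
train(w, B, target=0.0)      # training at B disturbs the 3 shared weights
print(output(w, A))          # -> 0.4375: A's output is no longer 1.0
```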
Generalization
• Individual association cells may be activated by more than one input pattern.
• This may be beneficial or detrimental:
– if two input patterns are going to produce similar outputs, it is desirable;
– when different output values are needed, a problem may occur.
Training sparsity
• To ensure good generalization, how far apart (in the input space) can the training points be in trajectory-ordered data?
• The approximation becomes smoother with more training points, because the points then tend to be closer together.
Results from comparison
• The low-resolution CMAC network performed better than the high-resolution CMAC network when noise was present in the data.
• The high-resolution CMAC network was able to model noise-free data better.
• The functions learnt by the MLP and RBF networks are smooth despite the presence of noise in the data.
• The mesh plots for the CMAC network are jagged.
• The output values of a CMAC network tend to track the noise in the target values (panels (c) and (d) of the comparison figure).
Another Comparison
Advantages
• Very fast learning (fewer and simpler calculations, and fewer weights involved).
• Certain areas of memory can be trained quickly without affecting the whole memory structure.
• Both properties are very important in real-time applications.
Disadvantages
• The storage requirement is:
– less than that of lookup tables,
– but significantly greater than that of MLP and RBF networks,
– and it gets worse as the dimensionality of the input space increases.
• This is not a major issue, given the recent abundant availability of memory chips and the possibility of reusing memory space.
• The function that is approximated by the CMAC is not smooth, but jagged.
• CMAC outputs tend to track the noise in target values.
Applications
• Robot control: path planning for manipulators and mobile robots; control of manipulators.
• Industrial processes.
• Pole balancers and walking machines.
• Character recognition.
• Reinforcement learning, as a classifier system.
• Color correction.
• Rapid computation.
• Function approximation.
• Computing plant models for feedforward control, on-line and in real time.
Feel free to ask questions!
An example of a two-dimensional space
• Consider an input vector u of size d as a point in d-dimensional space.
• The input space is quantized using a set of overlapping tilings.
• An input vector or query point activates one tile from each tiling.
• For input spaces of high dimensionality, the tiles form hypercubes.
• A query is performed by first activating all the tiles that contain the query point.
• The activated tiles in turn activate memory cells, which contain stored values: the weights of the system. Each tile is associated with a weight value.
• The summing of these values produces the overall output (see the sketch below).
• The CMAC output is therefore stored in a distributed fashion, such that the output corresponding to any point in input space is derived from the values stored in a number of memory cells.
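A hedged end-to-end sketch of this tile-coding view, combined with the modulo hashing from the earlier slides; the tiling count, memory size, and tile width are all assumptions.

```python
import numpy as np

MEM_SIZE = 4096
weights = np.zeros(MEM_SIZE)

def query(u, tilings=8, width=1.0):
    """Activate one tile per tiling and sum the weights they address."""
    total = 0.0
    for k in range(tilings):
        offset = k * width / tilings
        tile = tuple(int((ui + offset) // width) for ui in u)
        cell = hash((k, tile)) % MEM_SIZE    # hash the tile to a memory cell
        total += weights[cell]
    return total

print(query([2.35, 2.36]))   # -> 0.0 until the weights are trained
```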
Parameters
• The network's n-dimensional input vector is denoted by x, and the network's sparse internal representation is represented by the p-dimensional vector a; this vector is called the transformed input vector or the basis function vector. Fig. 1 shows the schematic illustration of the CMAC neural network.
• The transformed input vector a has as many elements as the weight vector w.
• The second mapping is the output computation from the weight vector, as the sum of the activated weights: y = aᵀw = Σᵢ aᵢwᵢ.
• The vector a has only ρ active elements; all others are equal to zero.
Hash coding algorithms
• Hardware implementation of the CMAC is difficult to achieve, since its conceptual memory theoretically needs a huge space to address the encoded input information.
• Hash coding algorithms are generally applied to reduce this space to a more reasonable scale.
• However, this approach has some drawbacks:
• First, collisions frequently occur during mapping from a wide input domain to a relatively small field. For instance, the cells c3 and a2, as illustrated in Fig. 1, are projected onto a single memory cell p1 simultaneously.
• Some algorithms have been introduced to achieve a low collision rate and high memory utilization. However, even in these improved algorithms some cells are still not thoroughly visited; in other words, it is difficult for them to achieve a 100% memory utilization rate.
• Second, when collisions occur, hash coding algorithms treat the weights stored in the colliding memory locations as valid. Therefore, most CMACs with hash coding algorithms may retrieve, output, and update irrelevant data as a result of a mapping collision.
• From the viewpoint of system control, this may cause CMACs to introduce control noise into the control process.
Theoretical diagram of CMAC.
• If the distance between the input vectors is relatively short, then there should be some overlapping addresses, as is the case with fc(S2) and fc(S3) in Fig. 1.
• This situation is referred to as the generalization problem.
• It can also be explained as follows: if input vectors S2 and S3 are similar, and adequately tuned weights have already been stored in memory with respect to S2, then S3 can refer to the overlapping addresses associated with S2 to get more suitable weights for producing an output before its own weights are updated.