SS2016 Modern Neural
Computation
Lecture 5: Neural Networks
and Neuroscience
Hirokazu Tanaka
School of Information Science
Japan Advanced Institute of Science and Technology
Supervised learning as functional approximation.
In this lecture we will learn:
• Single-layer neural networks
Perceptron and the perceptron theorem.
Cerebellum as a perceptron.
• Multi-layer feedforward neural networks
Universal function approximation, back-propagation algorithm.
• Recurrent neural networks
Back-propagation-through-time (BPTT) algorithm.
• Tempotron
Spike-based perceptron.
Gradient-descent learning for optimization.
Learning is formulated as gradient descent on a task-dependent cost function, detailed on the next slide.
Cost function: classification and regression.
• Classification problem: to output discrete labels.
For a binary classification (i.e., 0 or 1), a cross-entropy is
often used.
• Regression problem: to output continuous values.
Sum of squared errors is often used.
$\hat{y}_i$: output of the network, $y_i$: desired output
Cross-entropy (binary classification):
$$-\log \prod_{i\,:\,\mathrm{samples}} \hat{y}_i^{\,y_i} \left(1 - \hat{y}_i\right)^{1 - y_i} = -\sum_{i\,:\,\mathrm{samples}} \left[ y_i \log \hat{y}_i + \left(1 - y_i\right) \log\left(1 - \hat{y}_i\right) \right]$$
Sum of squared errors (regression):
$$\sum_{i\,:\,\mathrm{samples}} \left(y_i - \hat{y}_i\right)^2$$
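A minimal sketch (with assumed values, not from the lecture) computing the two cost functions:

yhat = [0.9 0.2 0.7];          % network outputs (assumed values)
y    = [1 0 1];                % desired outputs (assumed values)
crossEntropy = -sum(y.*log(yhat) + (1-y).*log(1-yhat))
squaredError = sum((y - yhat).^2)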
Perceptron: single-layer neural network.
• Assume a single-layer neural network with an input layer composed of N units and an output layer composed of one unit.
• The input units are specified by $\mathbf{x} = \left(x_1 \cdots x_N\right)^T$, and the output unit is determined by
$$y = f\left(w_0 + \sum_{i=1}^{N} w_i x_i\right) = f\left(w_0 + \mathbf{w}^T \mathbf{x}\right), \qquad f(u) = \begin{cases} 1 & \text{if } u \ge 0 \\ 0 & \text{if } u < 0 \end{cases}$$
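A minimal sketch (with assumed numbers) of the perceptron's output computation:

w0 = -0.5;                 % bias weight (assumed value)
w  = [1.0; 0.7];           % weights, N = 2 (assumed values)
x  = [0.3; 0.6];           % one input vector (assumed values)
u  = w0 + w'*x;            % weighted sum
y  = double(u >= 0)        % hard threshold: f(u) = 1 if u >= 0, else 0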
Perceptron: single-layer neural network.
[Figure: two classes of points in the (feature 1, feature 2) plane, separated by a linear decision boundary.]
Perceptron: single-layer neural network.
• [Remark] Instead of using $\mathbf{x} = \left(x_1 \cdots x_N\right)^T$, an augmented input vector $\mathbf{x} = \left(1\; x_1 \cdots x_N\right)^T$ with weight vector $\mathbf{w} = \left(w_0\; w_1 \cdots w_N\right)^T$ is often used. Then
$$y = f\left(w_0 + \mathbf{w}^T \mathbf{x}\right) = f\left(\mathbf{w}^T \mathbf{x}\right)$$
Perceptron Learning Algorithm.
• Given a training set: $\left\{\left(\mathbf{x}_1, d_1\right), \left(\mathbf{x}_2, d_2\right), \ldots, \left(\mathbf{x}_P, d_P\right)\right\}$
• Perceptron learning rule: $\Delta \mathbf{w} = \eta \left(d_i - y_i\right) \mathbf{x}_i$
% X: N-by-P matrix of inputs; d: P-by-1 labels in {-1,+1}
% w, err, count must be initialized beforehand (see the sketch below)
while err>1e-4 && count<10
    y = sign(w'*X)';             % current outputs for all P samples
    wnew = w + X*(d-y)/P;        % batch perceptron update
    wnew = wnew/norm(wnew);      % keep the weight vector normalized
    count = count+1;
    err = norm(w-wnew)/norm(w)   % relative weight change
    w = wnew;
end
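A minimal setup sketch for running the loop above; the data and initial values are assumptions for illustration, not part of the lecture:

P = 100; N = 2;                     % number of samples and input dimension
X = randn(N, P);                    % random inputs in general position
wTrue = randn(N, 1);                % hidden "teacher" direction (assumed)
d = sign(X'*wTrue);                 % linearly separable labels in {-1,+1}
w = randn(N, 1); w = w/norm(w);     % initial normalized weight vector
err = 1; count = 0;                 % loop initialization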
Perceptron Learning Algorithm.
Case 1: Linearly separable case
Perceptron Learning Algorithm.
Case 2: Linearly non-separable case
Perceptron’s capacity: Cover’s Counting Theorem.
• Question: Suppose that there are P vectors $\{\mathbf{x}_1, \ldots, \mathbf{x}_P\}$, $\mathbf{x}_i \in \mathbb{R}^N$, in N-dimensional Euclidean space. There are $2^P$ possible assignments of the vectors to two classes. How many of these dichotomies are linearly separable?
[Remark] The vectors are assumed to be in general position.
• Answer: Cover's Counting Theorem:
$$C(P, N) = 2 \sum_{k=0}^{N-1} \binom{P-1}{k}$$
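A minimal sketch evaluating Cover's formula; the min(N, P) guard truncates the sum when P ≤ N, where every dichotomy is separable:

% C(P,N): number of linearly separable dichotomies of P points in N dims
C = @(P, N) 2*sum(arrayfun(@(k) nchoosek(P-1, k), 0:min(N, P)-1));
C(3, 4)     % P <= N:  2^3 = 8, all dichotomies are separable
C(4, 2)     % P = 2N:  2^(4-1) = 8, half of the 2^4 dichotomies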
Perceptron’s capacity: Cover’s Counting Theorem.
• Cover's Counting Theorem:
$$C(P, N) = 2 \sum_{k=0}^{N-1} \binom{P-1}{k}$$
• Case $P \le N$: $C(P, N) = 2^P$ (every dichotomy is linearly separable).
• Case $P = 2N$: $C(P, N) = 2^{P-1}$ (exactly half of the $2^P$ dichotomies).
• Case $P \gg N$: $C(P, N) \approx A P^N$ (a vanishing fraction of $2^P$).
Cover (1965) IEEE Information; Sompolinsky (2013) MIT lecture note
Perceptron’s capacity: Cover’s Counting Theorem.
• Case for large P: the fraction of linearly separable dichotomies is approximately
$$\frac{C(P, N)}{2^P} \approx \frac{1}{2}\left[1 + \mathrm{erf}\left(\frac{N - P/2}{\sqrt{P/2}}\right)\right]$$
Orhan (2014) "Cover's Function Counting Theorem"
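A minimal sketch comparing the exact fraction with the error-function approximation (P and N are illustrative values; the binomial terms are computed in log-space to avoid overflow):

P = 100; N = 40; k = 0:N-1;
% binom(P-1,k)/2^(P-1), summed over k = 0..N-1
exact  = sum(exp(gammaln(P) - gammaln(k+1) - gammaln(P-k) - (P-1)*log(2)))
approx = 0.5*(1 + erf((N - P/2)/sqrt(P/2)))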
Cerebellum as a Perceptron.
Llinas (1974) Scientific American
Cerebellum as a Perceptron.
• Cerebellar cortex has a feedforward structure:
mossy fibers -> granule cells -> parallel fibers -> Purkinje
cells
Ito (1984) “Cerebellum and Neural Control”
Cerebellum as a Perceptron (or its extensions)
• Perceptron model
Marr (1969): Long-term potentiation (LTP) learning.
Albus (1971): Long-term depression (LTD) learning.
• Adaptive filter theory
Fujita (1982): Reverberation among granule and Golgi
cells for generating temporal templates.
• Liquid-state machine model
Yamazaki and Tanaka (2007): the cerebellar granular layer as a liquid-state machine.
Perceptron: a new perspective.
• Evaluation of memory capacity of a Purkinje cell using
perceptron methods (the Gardner limit).
Brunel, N., Hakim, V., Isope, P., Nadal, J. P., & Barbour, B. (2004). Optimal
information storage and the distribution of synaptic weights: perceptron versus
Purkinje cell. Neuron, 43(5), 745-757.
• Estimation of dimensions of neural representations
during visual memory task in the prefrontal cortex using
perceptron methods (Cover’s counting theorem).
Rigotti, M., Barak, O., Warden, M. R., Wang, X. J., Daw, N. D., Miller, E. K., & Fusi,
S. (2013). The importance of mixed selectivity in complex cognitive tasks.
Nature, 497(7451), 585-590.
Limitation of Perceptron.
• Only linearly separable input-output sets can be learned.
• Non-linearly separable sets, even one as simple as XOR, CANNOT be learned.
Multilayer neural network: feedforward design
• Feedforward network: a unit in layer n receives inputs from layer n-1 and projects to layer n+1.
[Figure: layers 1, ..., n-1, n, ..., N; unit $x_i^{(n)}$ in layer n receives input from unit $x_j^{(n-1)}$ through weight $w_{ij}^{(n-1)}$.]
Multilayer neural network: forward propagation.
A feedforward multilayer neural network propagates its activity from one layer to the next in a single direction:
$$x_i^{(n)} = f\left(u_i^{(n)}\right) = f\left(\sum_j w_{ij}^{(n-1)} x_j^{(n-1)}\right)$$
The input to a neuron in layer n is a weighted sum of the activities of the neurons in layer n-1:
$$u_i^{(n)} = \sum_j w_{ij}^{(n-1)} x_j^{(n-1)}$$
The function f is called an activation function; for the sigmoid
$$f(u) = \frac{1}{1 + e^{-u}}$$
the derivative is easy to compute:
$$f'(u) = \frac{e^{-u}}{\left(1 + e^{-u}\right)^2} = f(u)\left(1 - f(u)\right)$$
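A minimal sketch (assumed sizes and data) of forward propagation through a sigmoid network with layer sizes 3 -> 4 -> 2:

f  = @(u) 1./(1 + exp(-u));            % sigmoid activation function
W1 = randn(4, 3); W2 = randn(2, 4);    % weights w^(1), w^(2) (assumed)
x1 = rand(3, 1);                       % layer-1 activities (assumed)
x2 = f(W1*x1);                         % layer 2: x^(2) = f(u^(2))
x3 = f(W2*x2)                          % layer 3 (output)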
Multilayer neural network: error backpropagation
• Define a cost function as the squared sum of errors in the output units:
$$E = \frac{1}{2}\sum_i \left(x_i^{(N)} - z_i\right)^2 = \frac{1}{2}\sum_i \left(\Delta_i^{(N)}\right)^2$$
The neurons in the output layer have explicit supervised errors (the difference between the network outputs and the desired outputs). How, then, to compute the supervising signals for neurons in intermediate layers? Gradients of the cost function with respect to the weights yield the back-propagated errors:
$$\Delta_i^{(n-1)} = \sum_j \Delta_j^{(n)}\, x_j^{(n)}\left(1 - x_j^{(n)}\right) w_{ji}^{(n-1)}$$
Multilayer neural network: error backpropagation
1. Compute activations of units in all layers: $\{x_i^{(1)}\}, \ldots, \{x_i^{(n)}\}, \ldots, \{x_i^{(N)}\}$.
2. Compute errors in the output units: $\{\Delta_i^{(N)}\}$.
3. "Back-propagate" the errors to lower layers using
$$\Delta_i^{(n-1)} = \sum_j \Delta_j^{(n)}\, x_j^{(n)}\left(1 - x_j^{(n)}\right) w_{ji}^{(n-1)}$$
4. Update the weights:
$$\Delta w_{ij}^{(n)} = -\eta\, \Delta_i^{(n+1)}\, x_i^{(n+1)}\left(1 - x_i^{(n+1)}\right) x_j^{(n)}$$
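A minimal sketch (assumed sizes, data, and learning rate) of one backpropagation step for a three-layer sigmoid network, following steps 1-4 above:

f  = @(u) 1./(1 + exp(-u));                % sigmoid activation
W1 = randn(4, 3); W2 = randn(2, 4);        % weights w^(1), w^(2) (assumed)
x1 = rand(3, 1); z = rand(2, 1);           % input and target (assumed)
eta = 0.5;                                 % learning rate (assumed)
x2 = f(W1*x1); x3 = f(W2*x2);              % 1. forward pass
D3 = x3 - z;                               % 2. output errors Delta^(3)
D2 = W2'*(D3.*x3.*(1-x3));                 % 3. back-propagated Delta^(2)
W2 = W2 - eta*(D3.*x3.*(1-x3))*x2';        % 4. weight updates
W1 = W1 - eta*(D2.*x2.*(1-x2))*x1';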
Multilayer neural network as universal machine for
functional approximation.
A multilayer neural network is in principle able to approximate any functional relationship between inputs and outputs to any desired accuracy (Funahashi, 1989).
Intuition: a sum or a difference of two sigmoid functions is a "bump-like" function, and a sufficiently large number of bump functions can approximate any function.
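A minimal sketch of the intuition: the difference of two shifted sigmoids forms a bump (shift values are assumed):

f = @(u) 1./(1 + exp(-u));
u = linspace(-10, 10, 401);
bump = f(u + 2) - f(u - 2);    % approximately 1 on [-2,2], 0 elsewhere
plot(u, bump)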
NETtalk: A parallel network that learns to read aloud.
Sejnowski & Rosenberg (1987) Complex Systems
A feedforward three-layer neural network with delay lines.
NETtalk: A parallel network that learns to read aloud.
Sejnowski & Rosenberg (1987) Complex Systems; https://www.youtube.com/watch?v=gakJlr3GecE
A feedforward three-layer neural network with delay lines.
NETtalk: A parallel network that learns to read aloud.
Sejnowski & Rosenberg (1987) Complex Systems
Activations of hidden units for the same sound but different inputs
Hinton diagrams: characterizing and visualizing
connection to and from hidden units.
Hinton (1992) Sci Am
Autonomous driving learning by backpropagation.
Pomerleau (1991) Neural Comput
Autonomous driving learning by backpropagation.
Pomerleau (1991) Neural Comput; https://www.youtube.com/watch?v=ilP4aPDTBPE
Gradient vanishing problem: why is training a multi-layer neural network so difficult?
Hochreiter (1991); Hochreiter et al. (2001)
• The back-propagation algorithm works well only for neural networks of three or four layers.
• Training neural networks with many hidden layers (so-called "deep neural networks") is notoriously difficult.
$$\Delta_j^{(N-1)} = \sum_i \Delta_i^{(N)}\, x_i^{(N)}\left(1 - x_i^{(N)}\right) w_{ij}^{(N-1)}$$
$$\Delta_k^{(N-2)} = \sum_j \Delta_j^{(N-1)}\, x_j^{(N-1)}\left(1 - x_j^{(N-1)}\right) w_{jk}^{(N-2)} = \sum_j \left[\sum_i \Delta_i^{(N)}\, x_i^{(N)}\left(1 - x_i^{(N)}\right) w_{ij}^{(N-1)}\right] x_j^{(N-1)}\left(1 - x_j^{(N-1)}\right) w_{jk}^{(N-2)}$$
In general, the error at layer n scales as
$$\Delta^{(n)} \sim x^{(n+1)}\left(1 - x^{(n+1)}\right) \times \cdots \times x^{(N-1)}\left(1 - x^{(N-1)}\right) \times x^{(N)}\left(1 - x^{(N)}\right) \times \Delta^{(N)}$$
Since each sigmoid factor satisfies $x(1-x) \le 1/4$, the error signal shrinks at least geometrically with the number of layers it traverses.
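A quick numerical check of that bound (the depths are illustrative):

maxDeriv = 1/4;                % max of f'(u) = f(u)(1-f(u)) for a sigmoid
L = [1 5 10 20];               % number of layers traversed
bound = maxDeriv.^L            % error scale shrinks at least this fast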
Multilayer neural network: recurrent connections
• A feedforward neural network can represent only an instantaneous relationship between inputs and outputs - it is memoryless: the output depends on the current inputs x(t) but not on previous inputs x(t-1), x(t-2), ...
• In order to describe a history, a neural network should have its own dynamics.
• One way to incorporate dynamics into a neural network is to introduce recurrent connections between units.
Multilayer neural network: recurrent connections
Recurrent dynamics of the neural network:
$$x_i(t+1) = f\left(u_i(t)\right) = f\left(\left(\mathbf{W}\mathbf{x}(t) + \mathbf{U}\mathbf{a}(t)\right)_i\right)$$
Output readout:
$$z_i(t) = g\left(\left(\mathbf{V}\mathbf{x}(t)\right)_i\right)$$
[Figure: inputs a drive the recurrent units x through U; x connects to itself through W and is read out as z through V.]
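A minimal simulation sketch of these dynamics; the sizes, tanh activation, and identity readout g are assumptions for illustration:

f = @(u) tanh(u);                        % activation (assumed tanh)
Nx = 10; Na = 2; Nz = 1; T = 100;        % sizes (assumed)
W = 0.9*randn(Nx)/sqrt(Nx);              % recurrent weights
U = randn(Nx, Na); V = randn(Nz, Nx);    % input and readout weights
x = zeros(Nx, 1); z = zeros(Nz, T);
a = randn(Na, T);                        % input series (assumed)
for t = 1:T
    x = f(W*x + U*a(:,t));               % recurrent update
    z(:,t) = V*x;                        % readout (g = identity, assumed)
end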
Temporal unfolding: backpropagation through time (BPTT)
Training set for a recurrent network:
Input series: $\{\mathbf{a}_0, \mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_t, \ldots, \mathbf{a}_{T-1}\}$
Output series: $\{\mathbf{z}_1, \mathbf{z}_2, \mathbf{z}_3, \ldots, \mathbf{z}_t, \ldots, \mathbf{z}_T\}$
Optimize the weight matrices $\mathbf{U}, \mathbf{W}, \mathbf{V}$ so as to approximate the training set.
Temporal unfolding: backpropagation through time (BPTT)
[Figure: the recurrent network unfolded in time. Every time step reuses the same weight matrices U, W, and V; unfolding turns the recurrent network into a deep feedforward network through which errors can be back-propagated.]
Working-memory related activity in parietal cortex.
Gnadt & Andersen (1988) Exp Brain Res
Temporal unfolding: backpropagation through time (BPTT)
Zipser (1991) Neural Comput
Temporal unfolding: backpropagation through time (BPTT)
Zipser (1991) Neural Comput
[Figure: side-by-side comparison of model unit activities ("Model") and recorded parietal neurons ("Experiment").]
Spike pattern discrimination in humans.
Johansson & Birznieks (2004); Johansson & Flanagan (2009)
Spike pattern discrimination in dendrites.
Branco et al. (2010) Science
Tempotron: Spike-based perceptron.
Consider five neurons, each emitting one spike but at different timings:
Rate coding: Information is coded in numbers of spikes in a given period.
$$\left(r_1, r_2, r_3, r_4, r_5\right) = \left(1, 1, 1, 1, 1\right)$$
Temporal coding: Information is coded in temporal patterns of spiking.
Tempotron: Spike-based perceptron.
Basic idea: expand the spike pattern in time. The activity of N neurons over T time bins becomes a single N×T-dimensional binary vector, evaluated at the readout time ("now").
Tempotron: Spike-based perceptron.
Consider a classification problem of two spike patterns. With exponentially decaying postsynaptic potentials, the membrane potential at the readout time must satisfy, for pattern 1,
$$w_1\left(e^{-3\Delta t} + e^{-\Delta t}\right) + w_2\left(e^{-2\Delta t} + 1\right) > \theta$$
and for pattern 2,
$$w_1\left(e^{-2\Delta t} + 1\right) + w_2\left(e^{-3\Delta t} + e^{-\Delta t}\right) < \theta.$$
If a vector notation is introduced,
$$\mathbf{w} = \begin{pmatrix} w_1 \\ w_2 \end{pmatrix}, \quad \mathbf{x}^{(1)} = \begin{pmatrix} e^{-3\Delta t} + e^{-\Delta t} \\ e^{-2\Delta t} + 1 \end{pmatrix}, \quad \mathbf{x}^{(2)} = \begin{pmatrix} e^{-2\Delta t} + 1 \\ e^{-3\Delta t} + e^{-\Delta t} \end{pmatrix},$$
this classification problem is reduced to a perceptron problem:
$$\mathbf{w}^T \mathbf{x}^{(1)} > \theta, \qquad \mathbf{w}^T \mathbf{x}^{(2)} < \theta$$
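A minimal numerical check of this reduction; Δt, θ, and the weights are assumed values:

dt = 1.0; theta = 1.0;                            % assumed constants
x1 = [exp(-3*dt) + exp(-dt); exp(-2*dt) + 1];     % pattern 1 vector x^(1)
x2 = [exp(-2*dt) + 1; exp(-3*dt) + exp(-dt)];     % pattern 2 vector x^(2)
w  = [-2; 2];                                     % weights solving the problem
[w'*x1 > theta, w'*x2 > theta]                    % expect [1 0]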
Learning a tempotron: intuition.
What was wrong if the second pattern was misclassified, i.e.,
$$\mathbf{w}^T \mathbf{x}^{(2)} = w_1\left(e^{-2\Delta t} + 1\right) + w_2\left(e^{-3\Delta t} + e^{-\Delta t}\right) > \theta$$
when it should have stayed below threshold? The last spike of neuron #1 (the red one), arriving closest to the readout time, is most responsible for the error, so the synaptic strength of this neuron should be reduced:
$$\Delta w_1 = -\lambda$$
Learning a tempotron: intuition.
What was wrong if the first pattern was misclassified, i.e.,
$$\mathbf{w}^T \mathbf{x}^{(1)} = w_1\left(e^{-3\Delta t} + e^{-\Delta t}\right) + w_2\left(e^{-2\Delta t} + 1\right) < \theta$$
when it should have exceeded the threshold? The last spike of neuron #2 (the red one), arriving closest to the readout time, is most responsible for the error, so the synaptic strength of this neuron should be potentiated:
$$\Delta w_2 = +\lambda$$
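A minimal sketch of this error-correction intuition: on a misclassified pattern, the weight of the input with the largest PSP contribution at the readout time is nudged by ±λ. All values are assumptions for illustration, not the rule of Gütig & Sompolinsky verbatim:

lambda = 0.1; theta = 1.0; dt = 1.0;               % assumed constants
X = [exp(-3*dt)+exp(-dt), exp(-2*dt)+1;            % row 1: pattern 1
     exp(-2*dt)+1,        exp(-3*dt)+exp(-dt)];    % row 2: pattern 2
labels = [1 0];                                    % pattern 1 should fire
w = [0.5; 0.5];
for epoch = 1:50
    for p = 1:2
        fired = (X(p,:)*w > theta);
        if fired ~= labels(p)
            [~, k] = max(X(p,:));                  % most responsible input
            w(k) = w(k) + lambda*(labels(p) - fired);
        end
    end
end
[X*w > theta]     % expect [1; 0] after training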
Exercise: Capacity of perceptron.
• Generate a set of random vectors.
• Write code for the perceptron learning algorithm.
• By randomly relabeling, count how many of them are
linearly separable.
Rigotti, M., Barak, O., Warden, M. R., Wang, X. J., Daw, N. D., Miller, E. K., & Fusi, S.
(2013). The importance of mixed selectivity in complex cognitive tasks. Nature,
497(7451), 585-590.
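A minimal sketch of one way to set up the counting experiment; all parameters are illustrative, and failure to converge within the iteration cap is only taken as evidence of non-separability:

P = 8; N = 4; trials = 200; separable = 0;   % assumed parameters
X = randn(N, P);                             % points in general position
for trial = 1:trials
    d = sign(randn(P, 1));                   % random relabeling
    w = zeros(N, 1); ok = false;
    for iter = 1:1000                        % online perceptron updates
        y = sign(X'*w); y(y==0) = 1;
        idx = find(y ~= d, 1);
        if isempty(idx), ok = true; break; end
        w = w + d(idx)*X(:,idx);             % perceptron learning rule
    end
    separable = separable + ok;
end
separable/trials      % compare with C(P,N)/2^P from Cover's theorem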
Exercise: Training of recurrent neural networks.
Goal: Investigate the effects of chaos and feedback in a recurrent network.
Recurrent dynamics without feedback:
$$\mathbf{x}_{n+1} = \mathbf{x}_n + \left(-\mathbf{x}_n + \mathbf{M}\mathbf{r}_n\right)\Delta t, \qquad \mathbf{r}_n = \tanh \mathbf{x}_n, \qquad z_n = \mathbf{w}_n^T \mathbf{r}_n$$
Update of the covariance matrix:
$$\mathbf{P}_0 = \frac{1}{\alpha}\mathbf{I}, \qquad \mathbf{P}_{n+1} = \mathbf{P}_n - \frac{\mathbf{P}_n \mathbf{r}_n \mathbf{r}_n^T \mathbf{P}_n}{1 + \mathbf{r}_n^T \mathbf{P}_n \mathbf{r}_n}$$
Update of the weight vector:
$$\mathbf{w}_{n+1} = \mathbf{w}_n - e_n \mathbf{P}_n \mathbf{r}_n, \qquad e_n = z_n - f_n$$
force_internal_all2all.m
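A minimal sketch of the update loop above, without feedback; the network size, gain, time step, and target function f_n are assumptions for illustration (see the file referenced above for the full implementation):

Nx = 300; dt = 0.1; alpha = 1.0; g = 1.5; T = 2000;  % assumed constants
M = g*randn(Nx)/sqrt(Nx);                % random recurrent weights, gain g
x = 0.5*randn(Nx, 1); w = zeros(Nx, 1);
P = eye(Nx)/alpha;                       % P_0 = I/alpha
for n = 1:T
    r = tanh(x);
    z = w'*r;                            % readout z_n = w_n' r_n
    e = z - sin(2*pi*n*dt/10);           % error against an assumed target
    P = P - (P*(r*r')*P)/(1 + r'*P*r);   % covariance-matrix update
    w = w - e*P*r;                       % weight update
    x = x + (-x + M*r)*dt;               % network dynamics
end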
Exercise: Training of recurrent neural networks.
Goal: Investigate the effects of chaos and feedback in a recurrent network.
Recurrent dynamics with feedback:
$$\mathbf{x}_{n+1} = \mathbf{x}_n + \left(-\mathbf{x}_n + \mathbf{M}\mathbf{r}_n + \mathbf{w}^{f} z_n\right)\Delta t, \qquad \mathbf{r}_n = \tanh \mathbf{x}_n, \qquad z_n = \mathbf{w}_n^T \mathbf{r}_n$$
Update of the covariance matrix:
$$\mathbf{P}_0 = \frac{1}{\alpha}\mathbf{I}, \qquad \mathbf{P}_{n+1} = \mathbf{P}_n - \frac{\mathbf{P}_n \mathbf{r}_n \mathbf{r}_n^T \mathbf{P}_n}{1 + \mathbf{r}_n^T \mathbf{P}_n \mathbf{r}_n}$$
Update of the weight vector:
$$\mathbf{w}_{n+1} = \mathbf{w}_n - e_n \mathbf{P}_n \mathbf{r}_n, \qquad e_n = z_n - f_n$$
force_external_feedback_loop.m
Exercise: Training of recurrent neural networks.
Goal: Investigate the effects of chaos and feedback in a recurrent
network.
• Investigate the effect of output feedback. Are there any differences in the activities of the recurrent units?
• Investigate the effect of the gain parameter g. What happens if the gain parameter is smaller than 1?
• Try to approximate some other time series, such as chaotic ones. Use the Lorenz model, for example.
References
• Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1988). Learning representations by back-
propagating errors. Cognitive modeling, 5(3), 1.
• Sejnowski, T. J., & Rosenberg, C. R. (1987). Parallel networks that learn to pronounce English text.
Complex systems, 1(1), 145-168.
• Funahashi, K. I. (1989). On the approximate realization of continuous mappings by neural networks.
Neural networks, 2(3), 183-192.
• Hochreiter, S., Bengio, Y., Frasconi, P., & Schmidhuber, J. (2001). Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In Kremer, S. C., & Kolen, J. F. (Eds.), A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press.
• Zipser, D. (1991). Recurrent network model of the neural mechanism of short-term active memory.
Neural Computation, 3(2), 179-193.
• Johansson, R. S., & Birznieks, I. (2004). First spikes in ensembles of human tactile afferents code
complex spatial fingertip events. Nature neuroscience, 7(2), 170-177.
• Branco, T., Clark, B. A., & Häusser, M. (2010). Dendritic discrimination of temporal input sequences
in cortical neurons. Science, 329(5999), 1671-1675.
• Gütig, R., & Sompolinsky, H. (2006). The tempotron: a neuron that learns spike timing–based
decisions. Nature neuroscience, 9(3), 420-428.
• Sussillo, D., & Abbott, L. F. (2009). Generating coherent patterns of activity from chaotic neural
networks. Neuron, 63(4), 544-557.