Neural Networks,
             Key Notes
An Introduction to Neural Networks, eighth edition, 1996
Authors: Ben Krose, Faculty of Mathematics & Computer Science,
University of Amsterdam; Patrick van der Smagt, Institute of Robotics
and Systems Dynamics, German Aerospace Research Establishment
Keynote: Nelson Piedra, Computer Sciences School - Advanced Tech,
Technical University of Loja UTPL, Ecuador.
Part I Fundamentals
  1. Introduction
First wave of interest

• The first wave of interest emerged after the
  introduction of simplified neurons by
  McCulloch and Pitts in 1943.
• These neurons were introduced as models
  of biological neurons and as
  conceptual components for circuits
  that could perform computational tasks.
ANN, “black age”
• The Perceptrons book (Minsky &
  Papert, 1969) showed deficiencies of
  perceptron models; most neural network
  funding was redirected and researchers left
  the field.
• Only a few researchers continued their
  efforts, most notably Teuvo Kohonen, Stephen
  Grossberg, James Anderson, and Kunihiko
  Fukushima.
ANN re-emerged
• Early eighties: ANN re-emerged only after
  some important theoretical results, most
  notably the discovery of error back-
  propagation, and new hardware
  developments increased the processing
  capacities.
• Nowadays most universities have a neural
  networks group (e.g. Advanced Tech - UTPL).
How can A.N.N. be adequately
     characterised?
• Artificial neural networks can be most
  adequately characterised as “computational
  models” with particular properties, such as
  the ability
    • to adapt or learn,
    • to generalise, or
    • to cluster or organise data, and
    • whose operation is based on parallel
      processing.
• Parallels with biological systems also exist.
[Diagram: A.N.N. properties — to adapt, to learn, to cluster,
to organise data, parallel processing]
 The diagram above shows properties that can be
 attributed both to neural network models
 and to existing (non-neural) models.
The extent to which the neural approach
 proves to be better suited for
certain applications than existing
             models
Part I Fundamentals
 2. Fundamentals
A framework for
distributed representation
• To understand ANN, think of the parallel
  distributed processing (PDP) idea.
• An artificial network consists of a pool of
  simple processing units which
  communicate by sending signals to each
  other over a large number of
  weighted connections.
• 1/2 Rumelhart and McClelland, 1986:
 • a set of processing units (‘neurons’, ‘cells’);
 • a state of activation yk for every unit, which is
    equivalent to the output of the unit;
 • connections between the units. Generally
    each connection is defined by a weight wjk
    which determines the effect which the signal
    of unit j has on unit k;
 • a propagation rule, which determines the
    effective input sk of a unit from its external
    inputs.
• 2/2 Rumelhart and McClelland, 1986:
 • an activation function Fk, which determines
    the new level of activation based on the
    effective input sk(t) and the current
    activation yk(t);
 • an external input (aka bias, offset) θk for
    each unit;
 • a method for information gathering (the
    learning rule);
 • an environment within which the system
    must operate, providing input signals and
    -if necessary- error signals.
Processing Units
• Each unit performs a relatively simple job:
 • a) receive input from neighbours or
    external sources and use this to compute
    an output which is propagated to other
    units;
 • b) adjust the weights.
• The system is inherently parallel in the sense
  that many units can carry out their
  computations at the same time.
[Diagram: unit k receives the outputs yj of other units over
weighted connections w1k, w2k, ..., wjk, ..., wnk, plus an external
input θk, and produces its output yk through the activation
function fk]

               sk = Σj wjk yj + θk
The basic components of an artificial neural network. The
propagation rule used here is the standard weighted summation.
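
A minimal sketch of this standard weighted-summation rule in
Python (not part of the original notes; the names effective_input,
weights_k, outputs and bias_k are illustrative):

```python
import numpy as np

def effective_input(weights_k, outputs, bias_k):
    """Standard propagation rule: s_k = sum_j w_jk * y_j + theta_k."""
    return np.dot(weights_k, outputs) + bias_k

# Example: unit k with three incoming connections
outputs = np.array([0.5, -1.0, 0.25])    # y_j of the connected units
weights_k = np.array([0.8, 0.2, -0.5])   # w_jk
bias_k = 0.1                             # theta_k
s_k = effective_input(weights_k, outputs, bias_k)
```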
Three types of units
input units, i: which receive data
from outside the neural network

output units, o: which send data out
of the neural network

hidden units, h: whose input and
output signals remain within the
neural network
Update of units
Synchronously: all units update their
activation simultaneously.


Asynchronously: each unit has a
(usually fixed) probability of updating its
activation at time t, and usually only one
unit will be able to do this at a time; in some
cases the latter model has some
advantages.
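
A sketch contrasting the two update modes (an illustration only;
the weight matrix W, bias vector theta and activation function f
are assumed to be given):

```python
import numpy as np

rng = np.random.default_rng(0)

def synchronous_step(y, W, theta, f):
    """All units recompute their activation at once from the old state."""
    return f(W @ y + theta)

def asynchronous_step(y, W, theta, f):
    """One randomly chosen unit updates; the others keep their activation."""
    k = rng.integers(len(y))
    y = y.copy()
    y[k] = f(W[k] @ y + theta[k])
    return y

# Example: three units, using tanh as a stand-in activation function
W = rng.normal(size=(3, 3))
theta = np.zeros(3)
y = np.array([0.1, 0.5, 0.9])
y_sync = synchronous_step(y, W, theta, np.tanh)
y_async = asynchronous_step(y, W, theta, np.tanh)
```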
Connections between units
• Assume that each unit provides an additive contribution
  to the input of the unit with which it is connected.
• The total input to unit k is simply the weighted
  sum of the separate outputs from each of the
  connected units plus a bias or offset term θk.
• A positive wjk is considered excitation and
  a negative wjk inhibition.
• Units with this propagation rule are called sigma units.

          sk (t) = Σj wjk (t) yj (t) + θk
Different propagation rule
• Propagation rule for the sigma-pi unit (Feldman and
  Ballard, 1982).
• Often, the yjm are weighted before multiplication.
  Although these units are not frequently used, they
  have their value for gating of input, as well as for the
  implementation of lookup tables (Mel, 1990).

    sk (t) = Σj wjk (t) ∏m yjm (t) + θk (t)
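
A sketch of the sigma-pi rule under the same illustrative
conventions (input_groups[j] holds the outputs yjm that are
multiplied together; all names are assumptions):

```python
import numpy as np

def sigma_pi_input(weights_k, input_groups, bias_k):
    """Sigma-pi rule: s_k = sum_j w_jk * prod_m y_jm + theta_k."""
    products = np.array([np.prod(group) for group in input_groups])
    return np.dot(weights_k, products) + bias_k

# Example: the second conjunctive group is gated shut by a zero input
groups = [np.array([0.9, 1.0]), np.array([0.5, 0.0])]   # y_jm
weights_k = np.array([1.2, -0.7])                       # w_jk
s_k = sigma_pi_input(weights_k, groups, bias_k=0.1)
```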
Activation and output
          rules
• New value of activation: we need a function
  fk which takes the total input sk (t) and the
  current activation yk (t) and produces a
  new value of the activation of unit k:

     yk (t+1) = fk( yk (t), sk (t) )

• Often, the activation function is a
  nondecreasing function of the total input of
  the unit:

     yk (t+1) = fk( sk (t) ) = fk( Σj wjk (t) yj (t) + θk (t) )




[Figure: three activation functions — the hard limiting threshold
function (sgn), the linear or semi-linear function, and the
smoothly limiting sigmoid threshold]
• For this smoothly limiting function, often a
  sigmoid (S-shaped) function is used, such as:

         yk = fk( sk ) = 1 / ( 1 + e^(-sk) )
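
A direct transcription of this sigmoid in Python (illustrative only):

```python
import numpy as np

def sigmoid(s_k):
    """Smoothly limiting activation: y_k = 1 / (1 + exp(-s_k))."""
    return 1.0 / (1.0 + np.exp(-s_k))

print(sigmoid(0.0))   # 0.5: zero net input gives mid-range activation
print(sigmoid(4.0))   # ~0.98: large net input saturates towards 1
```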

• In some cases, the output of a unit can be a
  stochastic function of the total input of the
  unit. In that case the activation is not
  deterministically determined by the neuron
  input; instead, the neuron input determines the
  probability p that the neuron gets a high
  activation:

      p( yk ← 1 ) = 1 / ( 1 + e^(-sk /T) )
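
A sketch of such a stochastic unit (T is the parameter from the
formula above; the helper name stochastic_activation is an
assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_activation(s_k, T=1.0):
    """Output 1 with probability p = 1 / (1 + exp(-s_k / T)), else 0."""
    p = 1.0 / (1.0 + np.exp(-s_k / T))
    return int(rng.random() < p)

# A larger net input makes a high activation more likely
samples = [stochastic_activation(2.0) for _ in range(10)]
```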
Network topologies

• This section focuses on the pattern of
  connections between the units and
  the propagation of data:
• Feed-forward networks
• Recurrent networks that do contain
  feedback connections
Feed-forward networks

• The data processing can extend over
  multiple (layers of) units, but no
  feedback connections are present,
  that is, no connections extending from outputs
  of units to inputs of units in the same layer or
  previous layers.
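
A minimal forward sweep through such a network (a sketch under
assumed names; each layer is a pair (W, theta) of weight matrix
and bias vector, with a sigmoid activation):

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def feed_forward(x, layers):
    """Propagate x through (W, theta) layers; with no feedback
    connections, a single forward pass yields the output."""
    y = x
    for W, theta in layers:
        y = sigmoid(W @ y + theta)
    return y

# Example: 3 input units -> 2 hidden units -> 1 output unit
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(2, 3)), np.zeros(2)),
          (rng.normal(size=(1, 2)), np.zeros(1))]
output = feed_forward(np.array([0.2, -0.4, 0.9]), layers)
```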
Recurrent networks that do
       contain feedback connections
•   Contrary to feed-forward networks, the dynamical properties
    of the network are important.

•   In some cases, the activation values of the units undergo a
    relaxation process such that the network evolves to a
    stable state in which these activations do not change anymore.

•   In other applications, the changes of the activation values of
    the output neurons are significant, such that the dynamical
    behaviour constitutes the output of the network
    (Pearlmutter, 1990).

•   Classical examples of feed-forward networks are the
    Perceptron and the Adaline.
Training of artificial
    neural networks
• A neural network has to be configured such
  that the application of a set of inputs
  produces (either ‘directly’ or via a relaxation
  process) the desired set of outputs.
• One way is to set the weights explicitly,
  using a priori knowledge.
• Another way is to ‘train’ the neural network
  by feeding it teaching patterns and letting it
  change its weights according to some
  learning rule.
Paradigms of learning

• Supervised learning or Associative
  learning, in which the network is trained
  by providing it with input and matching
  output patterns. These input-output pairs
  can be provided by an external teacher, or by
  the system which contains the network (self-
  supervised).
Paradigms of learning
• Unsupervised learning or Self-
  organisation, in which an (output) unit is
  trained to respond to clusters of patterns
  within the input. In this paradigm the system
  is supposed to discover statistically salient
  features of the input population. Unlike the
  supervised learning paradigm, there is no a
  priori set of categories into which the
  patterns are to be classified; rather, the
  system must develop its own representation
  of the input stimuli.
Modifying patterns of
        connectivity
 Hebbian learning rule
Widrow-Hoff rule
 In the next chapters some of these update rules will be
discussed.
Hebbian learning rule
• Suggested by Hebb in his classic book
  Organization of Behaviour (Hebb, 1949).
• The basic idea is that if two units j and k are
  active simultaneously, their interconnection
  must be strengthened. If j receives input
  from k, the simplest version of Hebbian
  learning prescribes modifying the weight wjk
  with:
          ∆wjk = γ yj yk,      where γ is a positive constant of
          proportionality representing the learning rate.
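
The update as a short Python sketch (function and variable names
are illustrative):

```python
def hebbian_update(w_jk, y_j, y_k, gamma=0.1):
    """Hebbian rule: delta_w_jk = gamma * y_j * y_k."""
    return w_jk + gamma * y_j * y_k

w = 0.5
w = hebbian_update(w, y_j=1.0, y_k=1.0)   # co-active units: weight grows
w = hebbian_update(w, y_j=1.0, y_k=0.0)   # inactive k: weight unchanged
```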
Widrow-Hoff rule or
    the delta rule
• Another common rule uses not the actual
  activation of unit k but the difference
  between the actual and desired activation
  for adjusting the weights.
• dk is the desired activation provided by a
  teacher:

                  ∆wjk = γ yj (dk - yk)
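
The corresponding sketch (again with illustrative names); the
weight moves so as to reduce the error dk - yk:

```python
def delta_rule_update(w_jk, y_j, y_k, d_k, gamma=0.1):
    """Widrow-Hoff (delta) rule: delta_w_jk = gamma * y_j * (d_k - y_k)."""
    return w_jk + gamma * y_j * (d_k - y_k)

w = 0.5
w = delta_rule_update(w, y_j=1.0, y_k=0.2, d_k=1.0)  # under-active: w grows
```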
Terminology
   Output vs. activation of a unit: taken to be one and the same
thing; that is, the output of each neuron equals its activation value.

Bias, offset, threshold: These terms all refer to a constant
  term which is input to a unit. This external input is usually
implemented (and can be written) as a weight from a unit with
                      activation value 1.

 Number of layers: In a feed-forward network, the inputs
   perform no computation and their layer is therefore not
counted. Thus a network with one input layer, one hidden layer,
and one output layer is referred to as a network with two layers.

More Related Content

What's hot

2.5 backpropagation
2.5 backpropagation2.5 backpropagation
2.5 backpropagationKrish_ver2
 
Convolutional neural network
Convolutional neural networkConvolutional neural network
Convolutional neural networkMojammilHusain
 
Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9Randa Elanwar
 
Introduction to artificial neural network
Introduction to artificial neural networkIntroduction to artificial neural network
Introduction to artificial neural networkDr. C.V. Suresh Babu
 
Multilayer perceptron
Multilayer perceptronMultilayer perceptron
Multilayer perceptronomaraldabash
 
Artificial Neural Networks - ANN
Artificial Neural Networks - ANNArtificial Neural Networks - ANN
Artificial Neural Networks - ANNMohamed Talaat
 
Feedforward neural network
Feedforward neural networkFeedforward neural network
Feedforward neural networkSopheaktra YONG
 
Artificial nueral network slideshare
Artificial nueral network slideshareArtificial nueral network slideshare
Artificial nueral network slideshareRed Innovators
 
Convolutional Neural Network and Its Applications
Convolutional Neural Network and Its ApplicationsConvolutional Neural Network and Its Applications
Convolutional Neural Network and Its ApplicationsKasun Chinthaka Piyarathna
 
Artificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning ModelsArtificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning ModelsDrBaljitSinghKhehra
 
Introduction Of Artificial neural network
Introduction Of Artificial neural networkIntroduction Of Artificial neural network
Introduction Of Artificial neural networkNagarajan
 
Artificial neural network
Artificial neural networkArtificial neural network
Artificial neural networkmustafa aadel
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural NetworkAtul Krishna
 
Artificial Neural Networks Lect5: Multi-Layer Perceptron & Backpropagation
Artificial Neural Networks Lect5: Multi-Layer Perceptron & BackpropagationArtificial Neural Networks Lect5: Multi-Layer Perceptron & Backpropagation
Artificial Neural Networks Lect5: Multi-Layer Perceptron & BackpropagationMohammed Bennamoun
 
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...Simplilearn
 
Neural network
Neural networkNeural network
Neural networkSilicon
 
Machine Learning - Ensemble Methods
Machine Learning - Ensemble MethodsMachine Learning - Ensemble Methods
Machine Learning - Ensemble MethodsAndrew Ferlitsch
 

What's hot (20)

2.5 backpropagation
2.5 backpropagation2.5 backpropagation
2.5 backpropagation
 
Convolutional neural network
Convolutional neural networkConvolutional neural network
Convolutional neural network
 
Neural network
Neural networkNeural network
Neural network
 
Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9
 
Cnn
CnnCnn
Cnn
 
Introduction to artificial neural network
Introduction to artificial neural networkIntroduction to artificial neural network
Introduction to artificial neural network
 
Multilayer perceptron
Multilayer perceptronMultilayer perceptron
Multilayer perceptron
 
Artificial Neural Networks - ANN
Artificial Neural Networks - ANNArtificial Neural Networks - ANN
Artificial Neural Networks - ANN
 
Feedforward neural network
Feedforward neural networkFeedforward neural network
Feedforward neural network
 
Deep Learning for Computer Vision: Data Augmentation (UPC 2016)
Deep Learning for Computer Vision: Data Augmentation (UPC 2016)Deep Learning for Computer Vision: Data Augmentation (UPC 2016)
Deep Learning for Computer Vision: Data Augmentation (UPC 2016)
 
Artificial nueral network slideshare
Artificial nueral network slideshareArtificial nueral network slideshare
Artificial nueral network slideshare
 
Convolutional Neural Network and Its Applications
Convolutional Neural Network and Its ApplicationsConvolutional Neural Network and Its Applications
Convolutional Neural Network and Its Applications
 
Artificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning ModelsArtificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning Models
 
Introduction Of Artificial neural network
Introduction Of Artificial neural networkIntroduction Of Artificial neural network
Introduction Of Artificial neural network
 
Artificial neural network
Artificial neural networkArtificial neural network
Artificial neural network
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
 
Artificial Neural Networks Lect5: Multi-Layer Perceptron & Backpropagation
Artificial Neural Networks Lect5: Multi-Layer Perceptron & BackpropagationArtificial Neural Networks Lect5: Multi-Layer Perceptron & Backpropagation
Artificial Neural Networks Lect5: Multi-Layer Perceptron & Backpropagation
 
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
 
Neural network
Neural networkNeural network
Neural network
 
Machine Learning - Ensemble Methods
Machine Learning - Ensemble MethodsMachine Learning - Ensemble Methods
Machine Learning - Ensemble Methods
 

Viewers also liked

Neural network & its applications
Neural network & its applications Neural network & its applications
Neural network & its applications Ahmed_hashmi
 
Artificial neural networks
Artificial neural networksArtificial neural networks
Artificial neural networksstellajoseph
 
Deep Learning - The Past, Present and Future of Artificial Intelligence
Deep Learning - The Past, Present and Future of Artificial IntelligenceDeep Learning - The Past, Present and Future of Artificial Intelligence
Deep Learning - The Past, Present and Future of Artificial IntelligenceLukas Masuch
 
Deep Learning - Convolutional Neural Networks
Deep Learning - Convolutional Neural NetworksDeep Learning - Convolutional Neural Networks
Deep Learning - Convolutional Neural NetworksChristian Perone
 
Neural networks.cheungcannonnotes
Neural networks.cheungcannonnotesNeural networks.cheungcannonnotes
Neural networks.cheungcannonnotesAbhi Mediratta
 
Fundamentals of Neural Networks
Fundamentals of Neural NetworksFundamentals of Neural Networks
Fundamentals of Neural NetworksGagan Deep
 
Artificial Neural Networks Lect1: Introduction & neural computation
Artificial Neural Networks Lect1: Introduction & neural computationArtificial Neural Networks Lect1: Introduction & neural computation
Artificial Neural Networks Lect1: Introduction & neural computationMohammed Bennamoun
 
Introduction to Neural Networks - Perceptron
Introduction to Neural Networks - PerceptronIntroduction to Neural Networks - Perceptron
Introduction to Neural Networks - PerceptronHannes Hapke
 
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSArtificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSMohammed Bennamoun
 
Mạng neuron, trí tuệ nhân tạo
Mạng neuron, trí tuệ nhân tạoMạng neuron, trí tuệ nhân tạo
Mạng neuron, trí tuệ nhân tạoKien Nguyen
 
Introduction to Thevenin's theorem
Introduction to Thevenin's theorem Introduction to Thevenin's theorem
Introduction to Thevenin's theorem abhijith prabha
 
Introduction to Radial Basis Function Networks
Introduction to Radial Basis Function NetworksIntroduction to Radial Basis Function Networks
Introduction to Radial Basis Function NetworksESCOM
 
Intro Ch 09 A
Intro Ch 09 AIntro Ch 09 A
Intro Ch 09 Aali00061
 
Neural networks
Neural networksNeural networks
Neural networksBasil John
 
Machine Learning: Introduction to Neural Networks
Machine Learning: Introduction to Neural NetworksMachine Learning: Introduction to Neural Networks
Machine Learning: Introduction to Neural NetworksFrancesco Collova'
 
Introduction to Computer Networks
Introduction to Computer NetworksIntroduction to Computer Networks
Introduction to Computer NetworksVenkatesh Iyer
 
Machine learning with scikitlearn
Machine learning with scikitlearnMachine learning with scikitlearn
Machine learning with scikitlearnPratap Dangeti
 

Viewers also liked (20)

Neural network & its applications
Neural network & its applications Neural network & its applications
Neural network & its applications
 
Artificial neural networks
Artificial neural networksArtificial neural networks
Artificial neural networks
 
Deep Learning - The Past, Present and Future of Artificial Intelligence
Deep Learning - The Past, Present and Future of Artificial IntelligenceDeep Learning - The Past, Present and Future of Artificial Intelligence
Deep Learning - The Past, Present and Future of Artificial Intelligence
 
Deep Learning - Convolutional Neural Networks
Deep Learning - Convolutional Neural NetworksDeep Learning - Convolutional Neural Networks
Deep Learning - Convolutional Neural Networks
 
Neural networks.cheungcannonnotes
Neural networks.cheungcannonnotesNeural networks.cheungcannonnotes
Neural networks.cheungcannonnotes
 
L2 binomial operations
L2 binomial operationsL2 binomial operations
L2 binomial operations
 
Fundamentals of Neural Networks
Fundamentals of Neural NetworksFundamentals of Neural Networks
Fundamentals of Neural Networks
 
Artificial Neural Networks Lect1: Introduction & neural computation
Artificial Neural Networks Lect1: Introduction & neural computationArtificial Neural Networks Lect1: Introduction & neural computation
Artificial Neural Networks Lect1: Introduction & neural computation
 
Introduction to Neural Networks - Perceptron
Introduction to Neural Networks - PerceptronIntroduction to Neural Networks - Perceptron
Introduction to Neural Networks - Perceptron
 
Neural networks introduction
Neural networks introductionNeural networks introduction
Neural networks introduction
 
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSArtificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
 
Mạng neuron, trí tuệ nhân tạo
Mạng neuron, trí tuệ nhân tạoMạng neuron, trí tuệ nhân tạo
Mạng neuron, trí tuệ nhân tạo
 
Introduction to Thevenin's theorem
Introduction to Thevenin's theorem Introduction to Thevenin's theorem
Introduction to Thevenin's theorem
 
Introduction to Radial Basis Function Networks
Introduction to Radial Basis Function NetworksIntroduction to Radial Basis Function Networks
Introduction to Radial Basis Function Networks
 
Artificial Neural Network Topology
Artificial Neural Network TopologyArtificial Neural Network Topology
Artificial Neural Network Topology
 
Intro Ch 09 A
Intro Ch 09 AIntro Ch 09 A
Intro Ch 09 A
 
Neural networks
Neural networksNeural networks
Neural networks
 
Machine Learning: Introduction to Neural Networks
Machine Learning: Introduction to Neural NetworksMachine Learning: Introduction to Neural Networks
Machine Learning: Introduction to Neural Networks
 
Introduction to Computer Networks
Introduction to Computer NetworksIntroduction to Computer Networks
Introduction to Computer Networks
 
Machine learning with scikitlearn
Machine learning with scikitlearnMachine learning with scikitlearn
Machine learning with scikitlearn
 

Similar to Fundamental, An Introduction to Neural Networks

Neural Networks Ver1
Neural  Networks  Ver1Neural  Networks  Ver1
Neural Networks Ver1ncct
 
Neural network final NWU 4.3 Graphics Course
Neural network final NWU 4.3 Graphics CourseNeural network final NWU 4.3 Graphics Course
Neural network final NWU 4.3 Graphics CourseMohaiminur Rahman
 
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...DurgadeviParamasivam
 
ANNs have been widely used in various domains for: Pattern recognition Funct...
ANNs have been widely used in various domains for: Pattern recognition  Funct...ANNs have been widely used in various domains for: Pattern recognition  Funct...
ANNs have been widely used in various domains for: Pattern recognition Funct...vijaym148
 
Artificial Neural Networks ppt.pptx for final sem cse
Artificial Neural Networks  ppt.pptx for final sem cseArtificial Neural Networks  ppt.pptx for final sem cse
Artificial Neural Networks ppt.pptx for final sem cseNaveenBhajantri1
 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks ShwethaShreeS
 
Neuralnetwork 101222074552-phpapp02
Neuralnetwork 101222074552-phpapp02Neuralnetwork 101222074552-phpapp02
Neuralnetwork 101222074552-phpapp02Deepu Gupta
 
Adaptive equalization
Adaptive equalizationAdaptive equalization
Adaptive equalizationKamal Bhatt
 
Artificial Neural Network Implementation On FPGA Chip
Artificial Neural Network Implementation On FPGA ChipArtificial Neural Network Implementation On FPGA Chip
Artificial Neural Network Implementation On FPGA ChipMaria Perkins
 
Acem neuralnetworks
Acem neuralnetworksAcem neuralnetworks
Acem neuralnetworksAastha Kohli
 
Artificial neural network by arpit_sharma
Artificial neural network by arpit_sharmaArtificial neural network by arpit_sharma
Artificial neural network by arpit_sharmaEr. Arpit Sharma
 

Similar to Fundamental, An Introduction to Neural Networks (20)

Neural Networks Ver1
Neural  Networks  Ver1Neural  Networks  Ver1
Neural Networks Ver1
 
Neural network final NWU 4.3 Graphics Course
Neural network final NWU 4.3 Graphics CourseNeural network final NWU 4.3 Graphics Course
Neural network final NWU 4.3 Graphics Course
 
ai7.ppt
ai7.pptai7.ppt
ai7.ppt
 
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
 
ai7.ppt
ai7.pptai7.ppt
ai7.ppt
 
071bct537 lab4
071bct537 lab4071bct537 lab4
071bct537 lab4
 
ANNs have been widely used in various domains for: Pattern recognition Funct...
ANNs have been widely used in various domains for: Pattern recognition  Funct...ANNs have been widely used in various domains for: Pattern recognition  Funct...
ANNs have been widely used in various domains for: Pattern recognition Funct...
 
Artificial Neural Networks ppt.pptx for final sem cse
Artificial Neural Networks  ppt.pptx for final sem cseArtificial Neural Networks  ppt.pptx for final sem cse
Artificial Neural Networks ppt.pptx for final sem cse
 
ANN.ppt
ANN.pptANN.ppt
ANN.ppt
 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks
 
Neuralnetwork 101222074552-phpapp02
Neuralnetwork 101222074552-phpapp02Neuralnetwork 101222074552-phpapp02
Neuralnetwork 101222074552-phpapp02
 
MNN
MNNMNN
MNN
 
ANN - UNIT 1.pptx
ANN - UNIT 1.pptxANN - UNIT 1.pptx
ANN - UNIT 1.pptx
 
Adaptive equalization
Adaptive equalizationAdaptive equalization
Adaptive equalization
 
Neural network
Neural networkNeural network
Neural network
 
Nn devs
Nn devsNn devs
Nn devs
 
Artificial Neural Network Implementation On FPGA Chip
Artificial Neural Network Implementation On FPGA ChipArtificial Neural Network Implementation On FPGA Chip
Artificial Neural Network Implementation On FPGA Chip
 
Acem neuralnetworks
Acem neuralnetworksAcem neuralnetworks
Acem neuralnetworks
 
02 Fundamental Concepts of ANN
02 Fundamental Concepts of ANN02 Fundamental Concepts of ANN
02 Fundamental Concepts of ANN
 
Artificial neural network by arpit_sharma
Artificial neural network by arpit_sharmaArtificial neural network by arpit_sharma
Artificial neural network by arpit_sharma
 

More from Nelson Piedra

DBpedia Latinoamérica en ENC 2015
DBpedia Latinoamérica en ENC 2015DBpedia Latinoamérica en ENC 2015
DBpedia Latinoamérica en ENC 2015Nelson Piedra
 
Interoperabilidad semántica y re-uso de datos en la Web (HackEc15)
Interoperabilidad semántica y re-uso de datos en la Web (HackEc15)Interoperabilidad semántica y re-uso de datos en la Web (HackEc15)
Interoperabilidad semántica y re-uso de datos en la Web (HackEc15)Nelson Piedra
 
Tutoría sobre Nuevas Tecnologías - Maestría en Gestión Empresarial
Tutoría sobre Nuevas Tecnologías - Maestría en Gestión EmpresarialTutoría sobre Nuevas Tecnologías - Maestría en Gestión Empresarial
Tutoría sobre Nuevas Tecnologías - Maestría en Gestión EmpresarialNelson Piedra
 
Escuela de Ciencias de la Computación: Proyecto de Mejoramiento Perfil de Egreso
Escuela de Ciencias de la Computación: Proyecto de Mejoramiento Perfil de EgresoEscuela de Ciencias de la Computación: Proyecto de Mejoramiento Perfil de Egreso
Escuela de Ciencias de la Computación: Proyecto de Mejoramiento Perfil de EgresoNelson Piedra
 
Where the Social Web Meets the Semantic Web. Tom Gruber
Where the Social Web Meets the Semantic Web. Tom GruberWhere the Social Web Meets the Semantic Web. Tom Gruber
Where the Social Web Meets the Semantic Web. Tom GruberNelson Piedra
 
PMO: Técnicas Financieras para valoración de proyectos de S.I./T.I.
PMO: Técnicas Financieras para valoración de proyectos de S.I./T.I.PMO: Técnicas Financieras para valoración de proyectos de S.I./T.I.
PMO: Técnicas Financieras para valoración de proyectos de S.I./T.I.Nelson Piedra
 
SEPGLA 2007 Migración a Ambientes de Arquitectura Orientada a Servicios (SOA...
SEPGLA 2007 Migración a Ambientes de Arquitectura Orientada a Servicios (SOA...SEPGLA 2007 Migración a Ambientes de Arquitectura Orientada a Servicios (SOA...
SEPGLA 2007 Migración a Ambientes de Arquitectura Orientada a Servicios (SOA...Nelson Piedra
 
Cosas Que Deberíamos Aprender en La Universidad
Cosas Que  Deberíamos Aprender en La UniversidadCosas Que  Deberíamos Aprender en La Universidad
Cosas Que Deberíamos Aprender en La UniversidadNelson Piedra
 
Overview, CMMI 1.1 Vs CMMI 1.2
Overview, CMMI 1.1 Vs CMMI 1.2Overview, CMMI 1.1 Vs CMMI 1.2
Overview, CMMI 1.1 Vs CMMI 1.2Nelson Piedra
 
Overview of CMMI and Software Process Improvement
Overview of CMMI and Software Process ImprovementOverview of CMMI and Software Process Improvement
Overview of CMMI and Software Process ImprovementNelson Piedra
 
Agentes Inteligentes Key Note 2007
Agentes Inteligentes Key Note 2007Agentes Inteligentes Key Note 2007
Agentes Inteligentes Key Note 2007Nelson Piedra
 
Humor de la Era Digital
Humor de la Era DigitalHumor de la Era Digital
Humor de la Era DigitalNelson Piedra
 
Modelo de Créditos ECTS para ECC - UTPL
Modelo de Créditos ECTS para ECC - UTPLModelo de Créditos ECTS para ECC - UTPL
Modelo de Créditos ECTS para ECC - UTPLNelson Piedra
 
Competencias Universitarias
Competencias UniversitariasCompetencias Universitarias
Competencias UniversitariasNelson Piedra
 

More from Nelson Piedra (18)

DBpedia Latinoamérica en ENC 2015
DBpedia Latinoamérica en ENC 2015DBpedia Latinoamérica en ENC 2015
DBpedia Latinoamérica en ENC 2015
 
Interoperabilidad semántica y re-uso de datos en la Web (HackEc15)
Interoperabilidad semántica y re-uso de datos en la Web (HackEc15)Interoperabilidad semántica y re-uso de datos en la Web (HackEc15)
Interoperabilidad semántica y re-uso de datos en la Web (HackEc15)
 
Tutoría sobre Nuevas Tecnologías - Maestría en Gestión Empresarial
Tutoría sobre Nuevas Tecnologías - Maestría en Gestión EmpresarialTutoría sobre Nuevas Tecnologías - Maestría en Gestión Empresarial
Tutoría sobre Nuevas Tecnologías - Maestría en Gestión Empresarial
 
E learning ecuador
E learning ecuadorE learning ecuador
E learning ecuador
 
Escuela de Ciencias de la Computación: Proyecto de Mejoramiento Perfil de Egreso
Escuela de Ciencias de la Computación: Proyecto de Mejoramiento Perfil de EgresoEscuela de Ciencias de la Computación: Proyecto de Mejoramiento Perfil de Egreso
Escuela de Ciencias de la Computación: Proyecto de Mejoramiento Perfil de Egreso
 
Where the Social Web Meets the Semantic Web. Tom Gruber
Where the Social Web Meets the Semantic Web. Tom GruberWhere the Social Web Meets the Semantic Web. Tom Gruber
Where the Social Web Meets the Semantic Web. Tom Gruber
 
Welcome To TeX
Welcome To TeXWelcome To TeX
Welcome To TeX
 
PMO: Técnicas Financieras para valoración de proyectos de S.I./T.I.
PMO: Técnicas Financieras para valoración de proyectos de S.I./T.I.PMO: Técnicas Financieras para valoración de proyectos de S.I./T.I.
PMO: Técnicas Financieras para valoración de proyectos de S.I./T.I.
 
Sepg 2007 Pmo
Sepg 2007 PmoSepg 2007 Pmo
Sepg 2007 Pmo
 
SEPGLA 2007 Migración a Ambientes de Arquitectura Orientada a Servicios (SOA...
SEPGLA 2007 Migración a Ambientes de Arquitectura Orientada a Servicios (SOA...SEPGLA 2007 Migración a Ambientes de Arquitectura Orientada a Servicios (SOA...
SEPGLA 2007 Migración a Ambientes de Arquitectura Orientada a Servicios (SOA...
 
Cosas Que Deberíamos Aprender en La Universidad
Cosas Que  Deberíamos Aprender en La UniversidadCosas Que  Deberíamos Aprender en La Universidad
Cosas Que Deberíamos Aprender en La Universidad
 
Overview, CMMI 1.1 Vs CMMI 1.2
Overview, CMMI 1.1 Vs CMMI 1.2Overview, CMMI 1.1 Vs CMMI 1.2
Overview, CMMI 1.1 Vs CMMI 1.2
 
Overview of CMMI and Software Process Improvement
Overview of CMMI and Software Process ImprovementOverview of CMMI and Software Process Improvement
Overview of CMMI and Software Process Improvement
 
IDEAL step by step
IDEAL step by stepIDEAL step by step
IDEAL step by step
 
Agentes Inteligentes Key Note 2007
Agentes Inteligentes Key Note 2007Agentes Inteligentes Key Note 2007
Agentes Inteligentes Key Note 2007
 
Humor de la Era Digital
Humor de la Era DigitalHumor de la Era Digital
Humor de la Era Digital
 
Modelo de Créditos ECTS para ECC - UTPL
Modelo de Créditos ECTS para ECC - UTPLModelo de Créditos ECTS para ECC - UTPL
Modelo de Créditos ECTS para ECC - UTPL
 
Competencias Universitarias
Competencias UniversitariasCompetencias Universitarias
Competencias Universitarias
 

Recently uploaded

DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningLars Bell
 
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESSALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESmohitsingh558521
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxLoriGlavin3
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek SchlawackFwdays
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .Alan Dix
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxNavinnSomaal
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxLoriGlavin3
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxLoriGlavin3
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxLoriGlavin3
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxLoriGlavin3
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii SoldatenkoFwdays
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxLoriGlavin3
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 

Recently uploaded (20)

DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESSALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptx
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 

Fundamental, An Introduction to Neural Networks

  • 1. Neural Networks, Key Notes An introduction to Neural Networks, eight edition, 1996 Authors: Ben Krose, Faculty of Mathematics & Computer Science, University of Amsterdam. Patrick wan der Smagt, Institute of Robotics and Systems Dynamics, German Aerospace Research Establishment Keynote: Nelson Piedra, Computer Sciences School - Advanced Tech, Technical University of Loja UTPL, Ecuador.
  • 2. Part I Fundamentals 1. Introduction
  • 3. First wave of interest
  • 4. First wave of interest • First wave of interest emerged after the introduction of simplified neurons by McCullock and Pitts in 1943.
  • 5. First wave of interest • First wave of interest emerged after the introduction of simplified neurons by McCullock and Pitts in 1943. • These neurons were introduced as models of biological neurons and as conceptual components for circuts that could perform computational tasks.
  • 7. ANN, “black age” • Perceptrons book (Minsky & Papert, 1969): showed deficiencies of perceptrons models, most neural network funding was redirected and researches left the field
  • 8. ANN, “black age” • Perceptrons book (Minsky & Papert, 1969): showed deficiencies of perceptrons models, most neural network funding was redirected and researches left the field • Only a few researchers continued their efforts, most notably Teuvo Kohoen, Stephen Grossberg, James Anderson, and Kunihiko Fukushima
  • 10. ANN re-emerged • Early eighties: ANN, re-emerged only after some important theorical results, most notably the discovery of error back- propagation, and new hardware developments increased the processing capacities.
  • 11. ANN re-emerged • Early eighties: ANN, re-emerged only after some important theorical results, most notably the discovery of error back- propagation, and new hardware developments increased the processing capacities. • Nowdays most universities have a neural networks groups (i.e. Advanced Tech - UTPL)
  • 12. ¿How be can adequality characterised A.N.N.?
  • 13. ¿How be can adequality characterised A.N.N.? • Artificial neural networks can be most adequately characterised as “computational models” with particular properties such as the ability,
  • 14. ¿How be can adequality characterised A.N.N.? • Artificial neural networks can be most adequately characterised as “computational models” with particular properties such as the ability, • to adapt or learn,
  • 15. ¿How be can adequality characterised A.N.N.? • Artificial neural networks can be most adequately characterised as “computational models” with particular properties such as the ability, • to adapt or learn, • to generalise, or
  • 16. ¿How be can adequality characterised A.N.N.? • Artificial neural networks can be most adequately characterised as “computational models” with particular properties such as the ability, • to adapt or learn, • to generalise, or • to cluster or organise data, and
  • 17. ¿How be can adequality characterised A.N.N.? • Artificial neural networks can be most adequately characterised as “computational models” with particular properties such as the ability, • to adapt or learn, • to generalise, or • to cluster or organise data, and • which operation is based on parallel processing.
  • 18. ¿How be can adequality characterised A.N.N.? • Artificial neural networks can be most adequately characterised as “computational models” with particular properties such as the ability, • to adapt or learn, • to generalise, or • to cluster or organise data, and • which operation is based on parallel processing. • Also exist parallels with biological systems
  • 19.
  • 21. to to adapt learn
  • 22. to to adapt learn parallel process
  • 23. to to adapt learn to parallel organise process data
  • 24. to to to cluster adapt learn to parallel organise process data
  • 25. to to to cluster adapt learn to parallel organise process data Above slide shows properties can be attributed to neural network models and existing (non-neural) models
  • 26. Extent the neural approach proves to be better suited for certain applications than existing models
  • 27. Part I Fundamentals 2. Fundamentals
  • 29. A framework for distributed representation • To understand ANN, thinking on the parallel distributed processing (PDP) idea
  • 30. A framework for distributed representation • To understand ANN, thinking on the parallel distributed processing (PDP) idea • An artifitial network consists of a pool of simple processing units wich comunicate by sending signals to each other over a large number of weighted connections.
  • 31.
• 36. • 1/2 Rumelhart and McClelland, 1986: • a set of processing units (‘neurons’, ‘cells’); • a state of activation yk for every unit, which is equivalent to the output of the unit; • connections between the units. Generally each connection is defined by a weight wjk which determines the effect which the signal of unit j has on unit k; • a propagation rule, which determines the effective input sk of a unit from its external inputs.
• 42. • 2/2 Rumelhart and McClelland, 1986: • an activation function Fk, which determines the new level of activation based on the effective input sk(t) and the current activation yk(t); • an external input (aka bias, offset) θk for each unit; • a method for information gathering (the learning rule); • an environment within which the system must operate, providing input signals and -if necessary- error signals
• 47. Processing Units • Each unit performs a relatively simple job: • a) receive input from neighbours or external sources and use this to compute an output signal which is propagated to other units; • b) adjustment of the weights • The system is inherently parallel in the sense that many units can carry out their computations at the same time
• 48. (Diagram: the basic components of an artificial neural network — inputs yj arrive at unit k over weights w1k … wnk, unit k computes sk and applies fk to produce yk.) The propagation rule used here is the standard weighted summation: sk = Σj wjk yj + θk
• 49. Three types of units: input units, i, which receive data from outside the neural network; output units, o, which send data out of the neural network; hidden units, h, whose input and output signals remain within the neural network
• 50. Update of units • Synchronously: all units update their activation simultaneously • Asynchronously: each unit has a (usually fixed) probability of updating its activation at a time t, and usually only one unit will be able to do this at a time; in some cases the latter model has some advantages
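As an illustrative sketch (not part of the original slides; the function names, the activation function f, and the update probability p are all assumptions), the two update modes could look like this in Python:

    import numpy as np

    rng = np.random.default_rng(0)

    def update_synchronously(y, W, theta, f):
        # All units compute their new activation from the old state at once.
        return f(W.T @ y + theta)

    def update_asynchronously(y, W, theta, f, p=0.1):
        # Each unit updates with probability p at time t; units updated later
        # in the loop already see the new values of earlier units.
        y = y.copy()
        for k in range(len(y)):
            if rng.random() < p:
                y[k] = f(W[:, k] @ y + theta[k])
        return y

Here W[j, k] holds the weight wjk from unit j to unit k, matching the propagation rule on the following slides.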
• 55. Connections between units • Assume that each unit provides an additive contribution to the input of the units to which it is connected • The total input to unit k is simply the weighted sum of the separate outputs from each of the connected units plus a bias or offset term θk: sk(t) = Σj wjk(t) yj(t) + θk • A positive wjk is considered as excitation and a negative wjk as inhibition • Units with this propagation rule are called sigma units
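A minimal sketch of the sigma-unit propagation rule (not from the slides; the array names W, y, and theta are illustrative):

    import numpy as np

    def total_input(W, y, theta):
        # Sigma unit: s_k = sum_j w_jk * y_j + theta_k, with W[j, k] = w_jk.
        return W.T @ y + theta

    # Two sending units j, three receiving units k
    W = np.array([[0.5, -1.0, 0.2],
                  [1.5,  0.3, 0.0]])
    y = np.array([1.0, 0.5])
    theta = np.array([0.1, 0.0, -0.2])
    print(total_input(W, y, theta))   # effective input s_k of each unit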
• 58. Different propagation rule • Propagation rule for the sigma-pi unit, Feldman and Ballard, 1982: sk(t) = Σj wjk(t) ∏m yjm(t) + θk(t) • Often, the yjm are weighted before multiplication. Although these units are not frequently used, they have their value for gating of input, as well as implementation of lookup tables (Mel, 1990)
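A hedged sketch of the sigma-pi rule (the names and the grouping of the yjm into lists are assumptions; the slides do not prescribe a data layout):

    import numpy as np

    def sigma_pi_input(w, y_groups, theta_k):
        # Sigma-pi unit: s_k = sum_j w_jk * prod_m y_jm + theta_k.
        # y_groups[j] holds the activations multiplied together for term j.
        terms = np.array([np.prod(g) for g in y_groups])
        return float(w @ terms + theta_k)

    # Two multiplicative terms feeding one unit: (y1 * y2) and (y3)
    w = np.array([0.8, 0.4])
    y_groups = [np.array([1.0, 0.5]), np.array([0.9])]
    print(sigma_pi_input(w, y_groups, 0.1))  # 0.8*0.5 + 0.4*0.9 + 0.1 = 0.86

Because a single activation in a product can zero out the whole term, such units naturally gate one input with another.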
• 59. Activation and output rules • New value of activation: we need a function fk which takes the total input sk(t) and the current activation yk(t) and produces a new value of the activation of the unit k: yk(t+1) = fk(yk(t), sk(t))
• 60. • Often, the activation function is a nondecreasing function of the total input of the unit: yk(t+1) = fk(sk(t)) = fk(Σj wjk(t) yj(t) + θk(t)) (Figure: three common activation functions — the hard limiting (sgn) threshold function, the linear or semi-linear function, and the smoothly limiting (sigmoid) threshold function.)
• 61. • For this smoothly limiting function, often a sigmoid (S-shaped) function is used, like: yk = fk(sk) = 1 / (1 + e^(−sk)) • In some cases, the output of a unit can be a stochastic function of the total input of the unit. In that case the activation is not deterministically determined by the neuron input, but the neuron input determines the probability p that a neuron gets a high activation value: p(yk ← 1) = 1 / (1 + e^(−sk/T))
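A small sketch of both rules (the temperature parameter T follows the slide; everything else is illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(s):
        # Smoothly limiting threshold: y_k = 1 / (1 + e^(-s_k))
        return 1.0 / (1.0 + np.exp(-s))

    def stochastic_activation(s, T=1.0):
        # The input only fixes the probability of a high activation:
        # p(y_k <- 1) = 1 / (1 + e^(-s_k / T))
        return float(rng.random() < sigmoid(s / T))

    print(sigmoid(0.0))                 # 0.5
    print(stochastic_activation(2.0))   # 1.0 with probability ~0.88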
• 62. Network topologies • This section focuses on the pattern of connections between the units and the propagation of data: • Feed-forward networks • Recurrent networks that do contain feedback connections
• 63. Feed-forward networks • The data processing can extend over multiple (layers of) units, but no feedback connections are present, that is, connections extending from outputs of units to inputs of units in the same layer or previous layers
• 68. Recurrent networks that do contain feedback connections • Contrary to feed-forward networks, the dynamical properties of the network are important. • In some cases, the activation values of the units undergo a relaxation process such that the network will evolve to a stable state in which these activations do not change anymore. • In other applications, the changes of the activation values of the output neurons are significant, such that the dynamical behaviour constitutes the output of the network (Pearlmutter, 1990) • Classical examples of feed-forward networks are the Perceptron and Adaline.
• 72. Training of artificial neural networks • A neural network has to be configured such that the application of a set of inputs produces (either ‘direct’ or via a relaxation process) the desired set of outputs. • One way is to set the weights explicitly, using a priori knowledge. • The other way is to ‘train’ the neural network by feeding it teaching patterns and letting it change its weights according to some learning rule.
• 74. Paradigms of learning • Supervised learning or Associative learning, in which the network is trained by providing it with input and matching output patterns. These input-output pairs can be provided by an external teacher, or by the system which contains the network (self-supervised)
• 76. Paradigms of learning • Unsupervised learning or Self-organisation, in which an (output) unit is trained to respond to clusters of patterns within the input. In this paradigm the system is supposed to discover statistically salient features of the input population. Unlike the supervised learning paradigm, there is no a priori set of categories into which the patterns are to be classified; rather, the system must develop its own representation of the input stimuli.
• 78. Modifying patterns of connectivity • Hebbian learning rule • Widrow-Hoff rule • In the next chapters some of these update rules will be discussed
• 81. Hebbian learning rule • Suggested by Hebb in his classic book The Organization of Behavior (Hebb, 1949) • The basic idea is that if two units j and k are active simultaneously, their interconnection must be strengthened. If j receives input from k, the simplest version of Hebbian learning prescribes to modify the weight wjk with: ∆wjk = γ yj yk, where γ is a positive constant of proportionality representing the learning rate
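A one-line sketch of the rule (the function name and learning-rate value are assumptions):

    def hebbian_update(w_jk, y_j, y_k, gamma=0.1):
        # delta w_jk = gamma * y_j * y_k: the weight grows whenever
        # units j and k are active simultaneously.
        return w_jk + gamma * y_j * y_k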
• 84. Widrow-Hoff rule or the delta rule • Another common rule uses not the actual activation of unit k but the difference between the actual and desired activation for adjusting the weights; dk is the desired activation provided by a teacher: ∆wjk = γ yj (dk − yk)
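A matching sketch of the delta rule (again with assumed names):

    def delta_update(w_jk, y_j, y_k, d_k, gamma=0.1):
        # delta w_jk = gamma * y_j * (d_k - y_k): the update vanishes once
        # the actual activation y_k reaches the desired activation d_k.
        return w_jk + gamma * y_j * (d_k - y_k)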
• 85. Terminology • Output vs activation of a unit: considered to be one and the same thing; that is, the output of each neuron equals its activation value • Bias, offset, threshold: these terms all refer to a constant term which is input to a unit. This external input is usually implemented (and can be written) as a weight from a unit with activation value 1 • Number of layers: in a feed-forward network, the inputs perform no computation and their layer is therefore not counted. Thus a network with one input layer, one hidden layer, and one output layer is referred to as a network with two layers.
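The bias-as-weight convention from the terminology slide can be checked numerically (a sketch with illustrative values):

    import numpy as np

    W = np.array([[0.5, -1.0],
                  [1.5,  0.3]])
    theta = np.array([0.1, -0.2])
    y = np.array([1.0, 0.5])

    # Append a constant unit with activation 1; theta becomes its
    # outgoing weights, folded into the weight matrix as an extra row.
    W_ext = np.vstack([W, theta])
    y_ext = np.append(y, 1.0)
    assert np.allclose(W_ext.T @ y_ext, W.T @ y + theta)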