Neural Networks and Their Applications


           Presented By:
              Ahmed Hashmi
              Chinmoy Das
What is a neural network?


An Artificial Neural Network (ANN) is an information
processing paradigm that is inspired by biological
nervous systems.
It is composed of a large number of highly
interconnected processing elements called neurons.
An ANN is configured for a specific application, such
as pattern recognition or data classification.
Why use neural networks?


Ability to derive meaning from complicated or
imprecise data.
Ability to extract patterns and detect trends that are too
complex to be noticed by either humans or other
computer techniques.
Adaptive learning.
Real-time operation.
Neural Networks vs. Conventional Computers


Conventional computers use an algorithmic
approach, but neural networks work similarly to the
human brain and learn by example.
Inspiration from Neurobiology

A neuron is a many-inputs / one-output unit.
Its output can be excited or not excited.
Incoming signals from other neurons determine
whether the neuron shall excite ("fire").
The output is subject to attenuation in the
synapses, which are the junction parts of the neuron.
A simple neuron


Takes the inputs.
Calculates the summation of the inputs.
Compares it with the threshold set during the
learning stage.
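
As a minimal sketch of this sum-and-compare behaviour (the function name, weights and threshold below are illustrative assumptions, not from the slides), in MATLAB:

% Sketch of a simple threshold neuron.
function y = simple_neuron(x, w, theta)
    % x: input vector, w: weights, theta: threshold from the learning stage
    s = sum(w .* x);        % summation of the weighted inputs
    y = double(s >= theta); % fire (1) only if the sum reaches the threshold
end

For example, simple_neuron([1 0 1], [1 1 1], 2) returns 1.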
Firing Rules


A firing rule determines how one calculates whether a
neuron should fire for any input pattern.
Some input patterns cause it to fire (the 1-taught set of
patterns) and others prevent it from doing so
(the 0-taught set).
Example…


For example, a 3-input neuron is taught to output 1
when the input (X1, X2 and X3) is 111 or 101,
and to output 0 when the input is 000 or 001.

X1:   0    0    0    0    1    1    1    1
X2:   0    0    1    1    0    0    1    1
X3:   0    1    0    1    0    1    0    1
OUT:  0    0   0/1  0/1  0/1   1   0/1   1
Example…


Take the pattern 010. It differs from 000 in 1
element, from 001 in 2 elements, from 101 in 3
elements and from 111 in 2 elements. Therefore, the
'nearest' pattern is 000, which belongs to the
0-taught set. Thus the firing rule requires that the
neuron should not fire when the input is 010. On the
other hand, 011 is equally distant from two taught
patterns that have different outputs, and thus the
output stays undefined (0/1).

X1:   0    0    0    0    1    1    1    1
X2:   0    0    1    1    0    0    1    1
X3:   0    1    0    1    0    1    0    1
OUT:  0    0    0   0/1  0/1   1    1    1
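
This nearest-pattern rule is a Hamming-distance comparison; a sketch in MATLAB (the function name is illustrative, and the element-wise subtraction assumes implicit expansion, R2016b or later):

% Sketch of the Hamming-distance firing rule.
% taught1 / taught0: one taught pattern per row; x: input pattern (row).
function y = firing_rule(x, taught1, taught0)
    d1 = min(sum(abs(taught1 - x), 2));  % distance to nearest 1-taught pattern
    d0 = min(sum(abs(taught0 - x), 2));  % distance to nearest 0-taught pattern
    if d1 < d0
        y = 1;      % closer to the 1-taught set: fire
    elseif d0 < d1
        y = 0;      % closer to the 0-taught set: do not fire
    else
        y = NaN;    % equally distant: output undefined (0/1)
    end
end

For the example above, firing_rule([0 1 0], [1 1 1; 1 0 1], [0 0 0; 0 0 1]) returns 0.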
Types of neural network


Fixed networks, in which the weights cannot be
changed, i.e. dW/dt = 0. In such networks, the weights
are fixed a priori according to the problem to solve.
Adaptive networks, which are able to change their
weights, i.e. dW/dt ≠ 0.
The Learning Process

Associative mapping, in which the network learns to
produce a particular pattern on the set of output units
whenever another particular pattern is applied on the set
of input units. The associative mapping can generally be
broken down into two mechanisms:

Auto-association, where an input pattern is associated
with itself, so that a stored pattern can be recalled from a
distorted or partial version of it, and

Hetero-association, which is related to two recall mechanisms:

Nearest-neighbour recall, where the output pattern
produced corresponds to the stored input pattern
closest to the pattern presented, and

Interpolative recall, where the output pattern is a
similarity-dependent interpolation of the stored patterns
corresponding to the pattern presented.

Yet another paradigm, which is a variant of associative
mapping, is classification, i.e. when there is a fixed set of
categories into which the input patterns are to be classified.
Supervised Learning

Supervised learning incorporates an external
teacher, so that each output unit is told what its desired
response to input signals ought to be. During the learning
process, global information may be required. Paradigms of
supervised learning include error-correction
learning, reinforcement learning and stochastic learning.
An important issue concerning supervised learning is the
problem of error convergence, i.e. the minimisation of error
between the desired and computed unit values. The aim is
to determine a set of weights which minimises the error.
One well-known method, which is common to many
learning paradigms, is least mean square (LMS)
convergence.
Unsupervised Learning

Unsupervised learning uses no external teacher and is
based upon only local information. It is also referred to as
self-organisation, in the sense that it self-organises data
presented to the network and detects their emergent
collective properties.

Another aspect of learning concerns the distinction (or
not) of a separate phase during which the network is
trained, and a subsequent operation phase. We say that a
neural network learns off-line if the learning phase and the
operation phase are distinct. A neural network learns
on-line if it learns and operates at the same time. Usually,
supervised learning is performed off-line, whereas
unsupervised learning is performed on-line.
Back-propagation Algorithm


It calculates how the error changes as each weight is
increased or decreased slightly.
The algorithm computes each EW (the rate at which the
error changes as a weight is changed) by first computing
the EA, the rate at which the error changes as the
activity level of a unit is changed. For output
units, the EA is simply the difference between the
actual and the desired output.
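
A compact sketch of one back-propagation step for a network with a single sigmoid hidden layer (the sizes, data and learning rate are illustrative assumptions, not the slides' own example):

% One back-propagation step (sketch); sizes and data are illustrative.
x = [0; 1];  t = 1;  eta = 0.5;       % input, desired output, learning rate
W1 = randn(3, 2);  W2 = randn(1, 3);  % weights: 2 inputs, 3 hidden, 1 output
sig = @(z) 1 ./ (1 + exp(-z));

h   = sig(W1 * x);                    % hidden unit activities
y   = sig(W2 * h);                    % output unit activity
EA2 = y - t;                          % EA of the output unit: actual minus desired
d2  = EA2 .* y .* (1 - y);            % scaled by the sigmoid slope
EW2 = d2 * h';                        % EW: how the error changes with each output weight
d1  = (W2' * d2) .* h .* (1 - h);     % EA propagated back to the hidden units
EW1 = d1 * x';                        % EW for the hidden-layer weights
W2  = W2 - eta * EW2;                 % decrease each weight along its error gradient
W1  = W1 - eta * EW1;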
Transfer Function

The behaviour of an ANN (Artificial Neural Network) depends on both
the weights and the input-output function (transfer function) that is
specified for the units. This function typically falls into one of three
categories:
 linear (or ramp)
 threshold
 sigmoid
For linear units, the output activity is proportional to the total
weighted input.
For threshold units, the output is set at one of two levels, depending
on whether the total input is greater than or less than some
threshold value.
For sigmoid units, the output varies continuously but not linearly as
the input changes. Sigmoid units bear a greater resemblance to real
neurons than do linear or threshold units, but all three must be
considered rough approximations.
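
For concreteness, the three categories can be written out as follows (a sketch; the zero threshold and the logistic form of the sigmoid are common conventions, not specified in the slides):

% The three transfer-function categories as anonymous functions.
linear_tf    = @(z) z;                  % linear (ramp): proportional to input
threshold_tf = @(z) double(z >= 0);     % threshold: one of two levels
sigmoid_tf   = @(z) 1 ./ (1 + exp(-z)); % sigmoid: continuous but nonlinear

z = -5:0.1:5;                           % compare the three shapes
plot(z, linear_tf(z), z, threshold_tf(z), z, sigmoid_tf(z));
legend('linear', 'threshold', 'sigmoid');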
Application


INTRODUCTION
   Features of finger prints
   Finger print recognition system
   Why neural networks?
   Goal of the system
Preprocessing system
Feature extraction using neural networks
Classification
Result
Features of finger prints

Finger prints are the unique pattern of ridges and
valleys in every person’s fingers.
Their patterns are permanent and unchangeable for
the whole life of a person.
They are unique: the probability that two fingerprints
are alike is only 1 in 1.9x10^15.
Their uniqueness is used for the identification of a person.
Finger print recognition system


Image acquisition → Edge detection → Ridge extraction →
Thinning → Feature extraction → Classification


Image acquisition: the acquired image is digitized into a 512x512
image with each pixel assigned a particular gray scale value
(raster image).
Edge detection and thinning: these preprocessing steps
remove noise and enhance the image.
Finger print recognition system

Feature extraction: this is the step where we point out
features such as ridge bifurcations and ridge endings of
the finger print with the help of the neural network.
Classification: here a class label is assigned to the
image depending on the extracted features.
Why use neural networks?


Neural networks enable us to find solutions
where algorithmic methods are
computationally intensive or do not exist.
There is no need to program neural networks;
they learn from examples.
Neural networks offer a significant speed
advantage over conventional techniques.
Preprocessing system


The first phase of finger print recognition is to capture
an image.
The image is captured using total internal reflection of light
(TIR).
The image is stored as a two-dimensional array of size
512x512, each element of the array representing a pixel
assigned a gray scale value from 256 gray scale levels.
Preprocessing system

After the image is captured, noise is removed using edge
detection, ridge extraction and thinning.
Edge detection: an edge of the image is defined where
the gray scale level changes greatly. Also, the orientation
of ridges is determined for each 32x32 block of pixels
using the gray scale gradient.
Ridge extraction: ridges are extracted using the fact that
the gray scale values of pixels are maximum along the
direction normal to the ridge orientation.
Preprocessing system

Thinning: the extracted ridges are converted into a
skeletal structure in which ridges are only one pixel
wide. Thinning should not:
   Remove isolated as well as surrounded pixels.
   Break connectedness.
   Make the image shorter.
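
A sketch of this preprocessing chain using MATLAB's Image Processing Toolbox (the file name is a placeholder and the specific routines are my choices; the slides do not name any):

% Illustrative fingerprint preprocessing pipeline.
I  = imread('fingerprint.png');   % placeholder file name
I  = im2gray(I);                  % 512x512 image, 256 gray levels (R2020b+)
E  = edge(I, 'sobel');            % edges where gray levels change greatly
BW = imbinarize(I);               % separate ridges from the background
BW = bwmorph(BW, 'thin', Inf);    % thin ridges to one-pixel-wide skeletons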
Feature extraction using neural networks

A multilayer perceptron network of three layers is
trained to detect minutiae in the thinned image.
   The first layer has nine perceptrons.
   The hidden layer has five perceptrons.
   The output layer has one perceptron.
The network is trained to output ‘1’ when the input
window is centered at a minutia and to output ‘0’
when minutiae are not present.
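
Such a 9-5-1 network can be sketched with the Neural Network Toolbox (windows and labels are placeholder names for the training windows and their 0/1 targets):

% 9-5-1 multilayer perceptron for minutiae detection (sketch).
% windows: 9xN matrix, one 3x3 pixel window per column; labels: 1xN of 0/1.
net = feedforwardnet(5);                  % one hidden layer of five neurons
net = configure(net, windows, labels);    % nine inputs, one output, from the data
[net, tr] = train(net, windows, labels);  % train with back-propagation
y = net(windows);                         % near 1 at minutiae, near 0 elsewhere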
Feature extraction using neural networks

The trained neural network is used to analyze the
image by scanning it with a 3x3 window.
To avoid falsely reported features due to noise:
   The size of the scanning window is increased to 5x5.
   If minutiae are too close to each other, we ignore
   all of them.
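
The scan itself can be sketched as a sliding window over the thinned image BW and trained net from the previous sketches (a simple, unoptimized loop; the 0.5 decision threshold is an assumption):

% Scan the thinned image with a 3x3 window (sketch).
[m, n] = size(BW);
minutiae = false(m, n);
for r = 2:m-1
    for c = 2:n-1
        w = BW(r-1:r+1, c-1:c+1);                  % 3x3 window centered at (r, c)
        minutiae(r, c) = net(double(w(:))) > 0.5;  % let the network judge the window
    end
end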
Classification


Finger prints can be classified mainly into four
classes depending upon their general pattern:
    Arch
    Tented arch
    Right loop
    Left loop
Applications of Fingerprint Recognition


A finger print recognition system can be easily
embedded in any system. It is used for:
   Recognition of criminals by law enforcement bodies.
   Providing security for cars, lockers, banks and shops.
   Differentiating between people who have and have not
   voted in government elections.
   Counting individuals.
Neural Network Toolbox in MATLAB
Neural Network Toolbox™ provides tools for
designing, implementing, visualizing, and simulating neural
networks. Neural networks are used for applications where
formal analysis would be difficult or impossible, such as
pattern recognition and nonlinear system identification and
control. Neural Network Toolbox supports feedforward
networks, radial basis networks, dynamic networks, self-
organizing maps, and other proven network paradigms.
Key Features
Neural network design, training, and simulation
Pattern recognition, clustering, and data-fitting tools
Supervised networks including feedforward, radial basis, LVQ, time
delay, nonlinear autoregressive (NARX), and layer-recurrent
Unsupervised networks including self-organizing maps and
competitive layers
Preprocessing and postprocessing for improving the efficiency of
network training and assessing network performance
Modular network representation for managing and visualizing
networks of arbitrary size
Routines for improving generalization to prevent overfitting
Simulink blocks for building and evaluating neural networks, and
advanced blocks for control systems applications
Working with Neural Network
            Toolbox
Like its counterpart in the biological nervous system, a neural
network can learn and therefore can be trained to find
solutions, recognize patterns, classify data, and forecast future
events. The behavior of a neural network is defined by the way its
individual computing elements are connected and by the strength
of those connections, or weights. The weights are automatically
adjusted by training the network according to a specified learning
rule until it performs the desired task correctly.
Neural Network Toolbox includes command-line functions and
graphical tools for creating, training, and simulating neural
networks. Graphical tools make it easy to develop neural
networks for tasks such as data fitting (including time-series
data), pattern recognition, and clustering. After creating your
networks in these tools, you can automatically
generate MATLAB code to capture your work and automate
tasks.
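
For example, a minimal command-line session looks like this (a sketch; simplefit_dataset is one of the toolbox's bundled example data sets):

% Create, train, and simulate a network from the command line.
[x, t] = simplefit_dataset;    % bundled example inputs and targets
net = feedforwardnet(10);      % feedforward network with 10 hidden neurons
net = train(net, x, t);        % adjust weights by the default learning rule
y = net(x);                    % simulate the trained network
perf = perform(net, t, y);     % performance (mean squared error)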
Network Architectures
Neural Network Toolbox supports a variety of supervised
and unsupervised network architectures. With the toolbox’s
modular approach to building networks, you can develop
custom architectures for your specific problem. You can
view the network architecture including all
inputs, layers, outputs, and interconnections.
Supervised Networks
Supervised neural networks are trained to produce desired outputs in response to
sample inputs, making them particularly well-suited to modeling and controlling
dynamic systems, classifying noisy data, and predicting future events.
Neural Network Toolbox supports four types of supervised networks:
Feedforward networks have one-way connections from input to output layers. They
are most commonly used for prediction, pattern recognition, and nonlinear function
fitting. Supported feedforward networks include feedforward backpropagation,
cascade-forward backpropagation, feedforward input-delay backpropagation, linear,
and perceptron networks.
Radial basis networks provide an alternative, fast method for designing nonlinear
feedforward networks. Supported variations include generalized regression and
probabilistic neural networks.
Dynamic networks use memory and recurrent feedback connections to recognize
spatial and temporal patterns in data. They are commonly used for time-series
prediction, nonlinear dynamic system modeling, and control systems applications.
Prebuilt dynamic networks in the toolbox include focused and distributed time-delay,
nonlinear autoregressive (NARX), layer-recurrent, Elman, and Hopfield networks. The
toolbox also supports dynamic training of custom networks with arbitrary connections.
Learning vector quantization (LVQ) is a powerful method for classifying patterns that
are not linearly separable. LVQ lets you specify class boundaries and the granularity of
classification.
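
Each of these families has its own constructor; a sketch (constructor names as in recent toolbox versions, with x and t as in the earlier command-line example):

% Constructors for the four supervised network types.
ff   = feedforwardnet(10);     % feedforward back-propagation network
rb   = newrb(x, t);            % radial basis network designed from data
narx = narxnet(1:2, 1:2, 10);  % dynamic NARX network with tapped delay lines
lvq  = lvqnet(8);              % LVQ network with eight hidden neurons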
Unsupervised Networks
Unsupervised neural networks are trained by letting the
network continually adjust itself to new inputs. They find
relationships within data and can automatically define
classification schemes.
Neural Network Toolbox supports two types of self-organizing,
unsupervised networks:
Competitive layers recognize and group similar input vectors,
enabling them to automatically sort inputs into categories.
Competitive layers are commonly used for classification and
pattern recognition.
Self-organizing maps learn to classify input vectors according to
similarity. Like competitive layers, they are used for classification
and pattern recognition tasks; however, they differ from
competitive layers because they are able to preserve the
topology of the input vectors, assigning nearby inputs to nearby
categories.
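
Both unsupervised types have simple constructors as well (a sketch, again with x as a matrix of input vectors):

% Self-organizing map and competitive layer.
som  = selforgmap([8 8]);      % 8x8 map that preserves input topology
som  = train(som, x);          % unsupervised training on the input vectors
comp = competlayer(4);         % competitive layer that sorts inputs into 4 groups
comp = train(comp, x);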
Training and Learning Functions
Training and learning functions are mathematical procedures used to
automatically adjust the network's weights and biases. The training
function dictates a global algorithm that affects all the weights and
biases of a given network. The learning function can be applied to
individual weights and biases within a network.
Neural Network Toolbox supports a variety of training algorithms,
including several gradient descent methods, conjugate gradient
methods, the Levenberg-Marquardt algorithm (LM), and the resilient
backpropagation algorithm (Rprop). The toolbox’s modular framework
lets you quickly develop custom training algorithms that can be
integrated with built-in algorithms. While training your neural network,
you can use error weights to define the relative importance of desired
outputs, which can be prioritized in terms of sample, timestep (for
time-series problems), output element, or any combination of these.
You can access training algorithms from the command line or via a
graphical tool that shows a diagram of the network being trained and
provides network performance plots and status information to help
you monitor the training process.
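
Switching training algorithms amounts to setting the network's training function (a sketch; 'trainlm', 'trainrp' and 'traincgf' are the documented names for Levenberg-Marquardt, resilient backpropagation and Fletcher-Reeves conjugate gradient):

% Selecting a training algorithm.
net = feedforwardnet(10, 'trainlm');  % Levenberg-Marquardt
net.trainFcn = 'trainrp';             % or resilient backpropagation (Rprop)
net.trainFcn = 'traincgf';            % or a conjugate gradient method
[net, tr] = train(net, x, t);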
Improving Generalization
Improving the network’s ability to generalize helps prevent overfitting,
a common problem in neural network design. Overfitting occurs when
a network has memorized the training set but has not learned to
generalize to new inputs. Overfitting produces a relatively small error
on the training set but a much larger error when new data is presented
to the network.

Neural Network Toolbox provides two solutions to improve
generalization:
Regularization modifies the network’s performance function (the
measure of error that the training process minimizes). By including the
sizes of the weights and biases, regularization produces a network that
performs well with the training data and exhibits smoother behavior
when presented with new data.
Early stopping uses two different data sets: the training set, to update
the weights and biases, and the validation set, to stop training when
the network begins to overfit the data.
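
Both remedies map to one-line settings in the toolbox (a sketch; the ratio values are illustrative):

% Regularization and early stopping.
net = feedforwardnet(10);
net.performParam.regularization = 0.1;  % include weight sizes in the performance function
net.divideFcn = 'dividerand';           % random split for early stopping
net.divideParam.trainRatio = 0.75;      % training set updates weights and biases
net.divideParam.valRatio   = 0.15;      % validation set stops training on overfit
net.divideParam.testRatio  = 0.10;      % test set for an independent check
[net, tr] = train(net, x, t);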
Some different applications
Character Recognition - The idea of character recognition has
become very important as handheld devices like the Palm Pilot
are becoming increasingly popular. Neural networks can be
used to recognize handwritten characters.

Image Compression - Neural networks can receive and process
vast amounts of information at once, making them useful in
image compression. With the explosion of the Internet and sites
using ever more images, neural networks for image compression
are worth a look.
Stock Market Prediction - The day-to-day business of the stock
market is extremely complicated. Many factors weigh in
whether a given stock will go up or down on any given day. Since
neural networks can examine a lot of information quickly and
sort it all out, they can be used to predict stock prices.

Traveling Salesman Problem- Interestingly enough, neural
networks can solve the traveling salesman problem, but only to
a certain degree of approximation.

Medicine, Electronic Nose, Security, and Loan Applications -
These are some applications in their proof-of-concept stage. One
example is a neural network that decides whether or not to grant
a loan, which has already performed more successfully than
many humans.

Miscellaneous Applications - These are some very interesting
(albeit at times a little absurd) applications of neural networks.
Application principles

The solution of a problem must be simple.


Complicated solutions waste time and resources.


If a problem can be solved with a small look-up table that can be
easily calculated, that is a preferable solution to a complex
neural network with many layers that learns with back-propagation.
Application principles

Speed is crucial for computer game applications.


If possible, on-line neural network solutions should be avoided,
because they consume a great deal of time. Preferably, neural networks
should be applied in an off-line fashion, where the learning phase does
not happen during game playing time.
Application principles

On-line neural network solutions should be very simple.


Many-layer neural networks should be avoided, if possible.
Complex learning algorithms should be avoided. If possible, a priori
knowledge should be used to set the initial parameters such that very
short training is needed for optimal performance.
Application principles

All the available data should be collected about the problem.


Having redundant data is usually a smaller problem than not having the
necessary data.


The data should be partitioned into training, validation and testing data.
Application principles

The neural network solution of a problem should be selected from a
large enough pool of potential solutions.


Because of the nature of neural networks, it is likely that if a single
solution is built, it will not be the optimal one.


If a pool of potential solutions is generated and trained, it is more likely
that one close to the optimal one will be found.
Problem

Problem analysis:
    • variables
    • modularisation into sub-problems
    • objectives
    • data collection
Neural network solution

Data collection and organization:
           training, validation and testing data sets


Example:
           Training set: ~ 75% of the data
           Validation set: ~ 10% of the data
           Testing set: ~ 5% of the data
Neural network solution

Neural network solution selection:
   Each candidate solution is tested with the
   validation data, and the best performing network is
   selected.

(Figure: validation-set performance plots for the candidate solutions, e.g. Network 11, Network 4, and Network 7.)
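
A sketch of this pool-and-select procedure (x, t are the training data and xv, tv the validation data; all placeholder names, and the pool size is illustrative):

% Select the best network from a pool of trained candidates.
best_perf = Inf;
for k = 1:10                                    % train a pool of ten candidates
    candidate = feedforwardnet(10);
    candidate = train(candidate, x, t);
    p = perform(candidate, tv, candidate(xv));  % validation-set performance
    if p < best_perf
        best_perf = p;
        best_net  = candidate;                  % keep the best candidate so far
    end
end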
Neural network solution

Choosing a solution representation:
   The solution can be represented directly as a neural
   network, specifying the parameters of the neurons.
   Alternatively, the solution can be represented as a
   multi-dimensional look-up table.
   The representation should allow fast use of the solution
   within the application.
Summary
• Neural network solutions should be kept as simple as possible.
• For the sake of the gaming speed neural networks should be applied preferably
off-line.
• A large data set should be collected and it should be divided into training,
validation, and testing data.
• Neural networks are well suited as solutions to complex problems.
• A pool of candidate solutions should be generated, and the best candidate
solution should be selected using the validation data.
• The solution should be represented to allow fast application.

More Related Content

What's hot

Multilayer perceptron
Multilayer perceptronMultilayer perceptron
Multilayer perceptronomaraldabash
 
Types of Machine Learning
Types of Machine LearningTypes of Machine Learning
Types of Machine LearningSamra Shahzadi
 
Artificial nueral network slideshare
Artificial nueral network slideshareArtificial nueral network slideshare
Artificial nueral network slideshareRed Innovators
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural NetworkPrakash K
 
Artificial neural networks
Artificial neural networksArtificial neural networks
Artificial neural networksstellajoseph
 
Neural networks.ppt
Neural networks.pptNeural networks.ppt
Neural networks.pptSrinivashR3
 
Introduction Of Artificial neural network
Introduction Of Artificial neural networkIntroduction Of Artificial neural network
Introduction Of Artificial neural networkNagarajan
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural NetworkAtul Krishna
 
Convolutional Neural Network and Its Applications
Convolutional Neural Network and Its ApplicationsConvolutional Neural Network and Its Applications
Convolutional Neural Network and Its ApplicationsKasun Chinthaka Piyarathna
 
Autoencoders in Deep Learning
Autoencoders in Deep LearningAutoencoders in Deep Learning
Autoencoders in Deep Learningmilad abbasi
 
Convolutional Neural Networks (CNN)
Convolutional Neural Networks (CNN)Convolutional Neural Networks (CNN)
Convolutional Neural Networks (CNN)Gaurav Mittal
 
Ensemble learning
Ensemble learningEnsemble learning
Ensemble learningHaris Jamil
 
Feed forward ,back propagation,gradient descent
Feed forward ,back propagation,gradient descentFeed forward ,back propagation,gradient descent
Feed forward ,back propagation,gradient descentMuhammad Rasel
 

What's hot (20)

Neural Networks
Neural NetworksNeural Networks
Neural Networks
 
Self-organizing map
Self-organizing mapSelf-organizing map
Self-organizing map
 
Multilayer perceptron
Multilayer perceptronMultilayer perceptron
Multilayer perceptron
 
Types of Machine Learning
Types of Machine LearningTypes of Machine Learning
Types of Machine Learning
 
Artificial nueral network slideshare
Artificial nueral network slideshareArtificial nueral network slideshare
Artificial nueral network slideshare
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
 
Artificial neural networks
Artificial neural networksArtificial neural networks
Artificial neural networks
 
Neural networks.ppt
Neural networks.pptNeural networks.ppt
Neural networks.ppt
 
Introduction Of Artificial neural network
Introduction Of Artificial neural networkIntroduction Of Artificial neural network
Introduction Of Artificial neural network
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
 
Deep learning
Deep learningDeep learning
Deep learning
 
Convolutional Neural Network and Its Applications
Convolutional Neural Network and Its ApplicationsConvolutional Neural Network and Its Applications
Convolutional Neural Network and Its Applications
 
Deep learning presentation
Deep learning presentationDeep learning presentation
Deep learning presentation
 
Neural Networks
Neural NetworksNeural Networks
Neural Networks
 
neural networks
 neural networks neural networks
neural networks
 
Autoencoders in Deep Learning
Autoencoders in Deep LearningAutoencoders in Deep Learning
Autoencoders in Deep Learning
 
Convolutional Neural Networks (CNN)
Convolutional Neural Networks (CNN)Convolutional Neural Networks (CNN)
Convolutional Neural Networks (CNN)
 
Ensemble learning
Ensemble learningEnsemble learning
Ensemble learning
 
Feed forward ,back propagation,gradient descent
Feed forward ,back propagation,gradient descentFeed forward ,back propagation,gradient descent
Feed forward ,back propagation,gradient descent
 
Neural networks
Neural networksNeural networks
Neural networks
 

Viewers also liked

Artificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rulesArtificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rulesMohammed Bennamoun
 
Artificial intelligence NEURAL NETWORKS
Artificial intelligence NEURAL NETWORKSArtificial intelligence NEURAL NETWORKS
Artificial intelligence NEURAL NETWORKSREHMAT ULLAH
 
الفصل الثامن
الفصل الثامنالفصل الثامن
الفصل الثامنguestb0490b3d
 
NEURAL NETWORKS
NEURAL NETWORKSNEURAL NETWORKS
NEURAL NETWORKSESCOM
 
Neural networks
Neural networksNeural networks
Neural networksSlideshare
 
Use of artificial neural network in pattern recognition
Use of artificial neural network in pattern recognitionUse of artificial neural network in pattern recognition
Use of artificial neural network in pattern recognitionkamalsrit
 
Pattern Recognition final project
Pattern Recognition final projectPattern Recognition final project
Pattern Recognition final projectMaher Nadar
 
Neural networks...
Neural networks...Neural networks...
Neural networks...Molly Chugh
 
Hebbian Learning
Hebbian LearningHebbian Learning
Hebbian LearningESCOM
 
Learning in Networks: were Pavlov and Hebb right?
Learning in Networks: were Pavlov and Hebb right?Learning in Networks: were Pavlov and Hebb right?
Learning in Networks: were Pavlov and Hebb right?Victor Miagkikh
 
Ant colony optimization
Ant colony optimizationAnt colony optimization
Ant colony optimizationMeenakshi Devi
 
Ant colony optimization
Ant colony optimizationAnt colony optimization
Ant colony optimizationJoy Dutta
 
The Travelling Salesman Problem
The Travelling Salesman ProblemThe Travelling Salesman Problem
The Travelling Salesman Problemguest3d82c4
 
Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9Randa Elanwar
 
Neural network
Neural networkNeural network
Neural networkSilicon
 
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSArtificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSMohammed Bennamoun
 
Artificial Neural Networks Lect1: Introduction & neural computation
Artificial Neural Networks Lect1: Introduction & neural computationArtificial Neural Networks Lect1: Introduction & neural computation
Artificial Neural Networks Lect1: Introduction & neural computationMohammed Bennamoun
 
Ant colony optimization
Ant colony optimizationAnt colony optimization
Ant colony optimizationvk1dadhich
 

Viewers also liked (20)

Artificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rulesArtificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rules
 
Artificial intelligence NEURAL NETWORKS
Artificial intelligence NEURAL NETWORKSArtificial intelligence NEURAL NETWORKS
Artificial intelligence NEURAL NETWORKS
 
الفصل الثامن
الفصل الثامنالفصل الثامن
الفصل الثامن
 
NEURAL NETWORKS
NEURAL NETWORKSNEURAL NETWORKS
NEURAL NETWORKS
 
Types of reports
Types of reportsTypes of reports
Types of reports
 
Neural networks
Neural networksNeural networks
Neural networks
 
Use of artificial neural network in pattern recognition
Use of artificial neural network in pattern recognitionUse of artificial neural network in pattern recognition
Use of artificial neural network in pattern recognition
 
Pattern Recognition final project
Pattern Recognition final projectPattern Recognition final project
Pattern Recognition final project
 
Pattern recognition
Pattern recognitionPattern recognition
Pattern recognition
 
Neural networks...
Neural networks...Neural networks...
Neural networks...
 
Hebbian Learning
Hebbian LearningHebbian Learning
Hebbian Learning
 
Learning in Networks: were Pavlov and Hebb right?
Learning in Networks: were Pavlov and Hebb right?Learning in Networks: were Pavlov and Hebb right?
Learning in Networks: were Pavlov and Hebb right?
 
Ant colony optimization
Ant colony optimizationAnt colony optimization
Ant colony optimization
 
Ant colony optimization
Ant colony optimizationAnt colony optimization
Ant colony optimization
 
The Travelling Salesman Problem
The Travelling Salesman ProblemThe Travelling Salesman Problem
The Travelling Salesman Problem
 
Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9
 
Neural network
Neural networkNeural network
Neural network
 
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSArtificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
 
Artificial Neural Networks Lect1: Introduction & neural computation
Artificial Neural Networks Lect1: Introduction & neural computationArtificial Neural Networks Lect1: Introduction & neural computation
Artificial Neural Networks Lect1: Introduction & neural computation
 
Ant colony optimization
Ant colony optimizationAnt colony optimization
Ant colony optimization
 

Similar to Neural network & its applications

Neural net NWU 4.3 Graphics Course
Neural net NWU 4.3 Graphics CourseNeural net NWU 4.3 Graphics Course
Neural net NWU 4.3 Graphics CourseMohaiminur Rahman
 
Artificial Neural Network_VCW (1).pptx
Artificial Neural Network_VCW (1).pptxArtificial Neural Network_VCW (1).pptx
Artificial Neural Network_VCW (1).pptxpratik610182
 
Soft Computing-173101
Soft Computing-173101Soft Computing-173101
Soft Computing-173101AMIT KUMAR
 
Neuralnetwork 101222074552-phpapp02
Neuralnetwork 101222074552-phpapp02Neuralnetwork 101222074552-phpapp02
Neuralnetwork 101222074552-phpapp02Deepu Gupta
 
Nural network ER. Abhishek k. upadhyay
Nural network ER. Abhishek  k. upadhyayNural network ER. Abhishek  k. upadhyay
Nural network ER. Abhishek k. upadhyayabhishek upadhyay
 
KAIST 2012 Fall 전자공학개론 6조 발표 PPT
KAIST 2012 Fall 전자공학개론 6조 발표 PPTKAIST 2012 Fall 전자공학개론 6조 발표 PPT
KAIST 2012 Fall 전자공학개론 6조 발표 PPTpjknkda
 
Soft Computering Technics - Unit2
Soft Computering Technics - Unit2Soft Computering Technics - Unit2
Soft Computering Technics - Unit2sravanthi computers
 
Neural networks
Neural networksNeural networks
Neural networksBasil John
 
SOFT COMPUTERING TECHNICS -Unit 1
SOFT COMPUTERING TECHNICS -Unit 1SOFT COMPUTERING TECHNICS -Unit 1
SOFT COMPUTERING TECHNICS -Unit 1sravanthi computers
 
Neural networks and deep learning
Neural networks and deep learningNeural networks and deep learning
Neural networks and deep learningRADO7900
 
Artificial Neural Networks for NIU
Artificial Neural Networks for NIUArtificial Neural Networks for NIU
Artificial Neural Networks for NIUProf. Neeta Awasthy
 
Neural networks of artificial intelligence
Neural networks of artificial  intelligenceNeural networks of artificial  intelligence
Neural networks of artificial intelligencealldesign
 
تطبيق الشبكة العصبية الاصطناعية (( ANN في كشف اعطال منظومة نقل القدرة الكهربائية
تطبيق الشبكة العصبية الاصطناعية (( ANN في كشف اعطال منظومة نقل القدرة الكهربائيةتطبيق الشبكة العصبية الاصطناعية (( ANN في كشف اعطال منظومة نقل القدرة الكهربائية
تطبيق الشبكة العصبية الاصطناعية (( ANN في كشف اعطال منظومة نقل القدرة الكهربائيةssuserfdec151
 
ACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptxACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptxgnans Kgnanshek
 

Similar to Neural network & its applications (20)

neural-networks (1)
neural-networks (1)neural-networks (1)
neural-networks (1)
 
Neural net NWU 4.3 Graphics Course
Neural net NWU 4.3 Graphics CourseNeural net NWU 4.3 Graphics Course
Neural net NWU 4.3 Graphics Course
 
19_Learning.ppt
19_Learning.ppt19_Learning.ppt
19_Learning.ppt
 
10-Perceptron.pdf
10-Perceptron.pdf10-Perceptron.pdf
10-Perceptron.pdf
 
Artificial Neural Network_VCW (1).pptx
Artificial Neural Network_VCW (1).pptxArtificial Neural Network_VCW (1).pptx
Artificial Neural Network_VCW (1).pptx
 
Soft Computing-173101
Soft Computing-173101Soft Computing-173101
Soft Computing-173101
 
Neural Network
Neural NetworkNeural Network
Neural Network
 
Neuralnetwork 101222074552-phpapp02
Neuralnetwork 101222074552-phpapp02Neuralnetwork 101222074552-phpapp02
Neuralnetwork 101222074552-phpapp02
 
Nural network ER. Abhishek k. upadhyay
Nural network ER. Abhishek  k. upadhyayNural network ER. Abhishek  k. upadhyay
Nural network ER. Abhishek k. upadhyay
 
Lec-02.pdf
Lec-02.pdfLec-02.pdf
Lec-02.pdf
 
KAIST 2012 Fall 전자공학개론 6조 발표 PPT
KAIST 2012 Fall 전자공학개론 6조 발표 PPTKAIST 2012 Fall 전자공학개론 6조 발표 PPT
KAIST 2012 Fall 전자공학개론 6조 발표 PPT
 
Soft Computering Technics - Unit2
Soft Computering Technics - Unit2Soft Computering Technics - Unit2
Soft Computering Technics - Unit2
 
Neural networks
Neural networksNeural networks
Neural networks
 
SOFT COMPUTERING TECHNICS -Unit 1
SOFT COMPUTERING TECHNICS -Unit 1SOFT COMPUTERING TECHNICS -Unit 1
SOFT COMPUTERING TECHNICS -Unit 1
 
Neural networks and deep learning
Neural networks and deep learningNeural networks and deep learning
Neural networks and deep learning
 
Artificial Neural Networks for NIU
Artificial Neural Networks for NIUArtificial Neural Networks for NIU
Artificial Neural Networks for NIU
 
Neural networks of artificial intelligence
Neural networks of artificial  intelligenceNeural networks of artificial  intelligence
Neural networks of artificial intelligence
 
تطبيق الشبكة العصبية الاصطناعية (( ANN في كشف اعطال منظومة نقل القدرة الكهربائية
تطبيق الشبكة العصبية الاصطناعية (( ANN في كشف اعطال منظومة نقل القدرة الكهربائيةتطبيق الشبكة العصبية الاصطناعية (( ANN في كشف اعطال منظومة نقل القدرة الكهربائية
تطبيق الشبكة العصبية الاصطناعية (( ANN في كشف اعطال منظومة نقل القدرة الكهربائية
 
tutorial.ppt
tutorial.ppttutorial.ppt
tutorial.ppt
 
ACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptxACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptx
 

Recently uploaded

"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek SchlawackFwdays
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .Alan Dix
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningLars Bell
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DayH2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DaySri Ambati
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 

Recently uploaded (20)

"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DayH2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 

Neural network & its applications

  • 1. Neural Networks and its Applications Presented By: Ahmed Hashmi Chinmoy Das
  • 2. What is neural network An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by biological nervous systems. It is composed of a large number of highly interconnected processing elements called neurons. An ANN is configured for a specific application, such as pattern recognition or data classification
  • 3. Why use neural networks ability to derive meaning from complicated or imprecise data extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques Adaptive learning Real Time Operation
  • 4. Neural Networks v/s Conventional Computers Conventional computers use an algorithmic approach, but neural networks works similar to human brain and learns by example.
  • 5. Inspiration from Neurobiology A neuron: many-inputs / one- output unit output can be excited or not excited incoming signals from other neurons determine if the neuron shall excite ("fire") Output subject to attenuation in the synapses, which are junction parts of the neuron
  • 6. A simple neuron Takes the Inputs . Calculate the summation of the Inputs . Compare it with the threshold being set during the learning stage.
  • 7. Firing Rules A firing rule determines how one calculates whether a neuron should fire for any input pattern. some sets cause it to fire (the 1-taught set of patterns) and others which prevent it from doing so (the 0-taught set)
  • 8. Example… X For example, a 3-input 1: 0 0 0 0 1 1 1 1 neuron is taught to X 0 0 1 1 0 0 1 1 output 1 when the input 2: (X1,X2 and X3) is 111 or 101 X 0 1 0 1 0 1 0 1 and to output 0 when the 3: input is 000 or 001. O 0/ 0/ 0/ 0/ U 0 0 1 1 1 1 1 1 T:
  • 9. Example… Take the pattern 010. It differs X from 000 in 1 element, from 001 0 0 0 0 1 1 1 1 1: in 2 elements, from 101 in 3 elements and from 111 in 2 X elements. Therefore, the 0 0 1 1 0 0 1 1 2: 'nearest' pattern is 000 which belongs in the 0-taught set. Thus X 0 1 0 1 0 1 0 1 the firing rule requires that the 3: neuron should not fire when the input is 001. On the other hand, 011 is equally distant from two O taught patterns that have 0/ 0/ U 0 0 0 1 1 1 different outputs and thus the 1 1 T: output stays undefined (0/1).
  • 10. Types of neural network fixed networks in which the weights cannot be changed, ie dW/dt=0. In such networks, the weights are fixed a priori according to the problem to solve. adaptive networks which are able to change their weights, ie dW/dt not= 0.
  • 11. The Learning Process Associative mapping in which the network learns to produce a particular pattern on the set of input units whenever another particular pattern is applied on the set of input units. The associative mapping can generally be broken down into two mechanisms:
  • 12. Hetero-association: is related to two recall mechanisms: Nearest-neighbour recall, where the output pattern produced corresponds to the input pattern stored, which is closest to the pattern presented, and Interpolative recall, where the output pattern is a similarity dependent interpolation of the patterns stored corresponding to the pattern presented. Yet another paradigm, which is a variant associative mapping is classification, ie when there is a fixed set of categories into which the input patterns are to be classified.
  • 13. Supervised Learning Supervised learning which incorporates an external teacher, so that each output unit is told what its desired response to input signals ought to be. During the learning process global information may be required. Paradigms of supervised learning include error-correction learning, reinforcement learning and stochastic learning. An important issue concerning supervised learning is the problem of error convergence, ie the minimisation of error between the desired and computed unit values. The aim is to determine a set of weights which minimises the error. One well-known method, which is common to many learning paradigms is the least mean square (LMS) convergence.
  • 14. Unsupervised Learning Unsupervised learning uses no external teacher and is based upon only local information. It is also referred to as self-organisation, in the sense that it self-organises data presented to the network and detects their emergent collective properties. From Human Neurons to Artificial Neurons their aspect of learning concerns the distinction or not of a separate phase, during which the network is trained, and a subsequent operation phase. We say that a neural network learns off-line if the learning phase and the operation phase are distinct. A neural network learns on-line if it learns and operates at the same time. Usually, supervised learning is performed off-line, whereas unsupervised learning is performed on-line.
  • 15. Back-propagation Algorithm it calculates how the error changes as each weight is increased or decreased slightly. The algorithm computes each EW by first computing the EA, the rate at which the error changes as the activity level of a unit is changed. For output units, the EA is simply the difference between the actual and the desired output.
  • 16. Transfer Function The behaviour of an ANN (Artificial Neural Network) depends on both the weights and the input-output function (transfer function) that is specified for the units. This function typically falls into one of three categories: linear (or ramp) threshold sigmoid For linear units, the output activity is proportional to the total weighted output. For threshold units, the output is set at one of two levels, depending on whether the total input is greater than or less than some threshold value. For sigmoid units, the output varies continuously but not linearly as the input changes. Sigmoid units bear a greater resemblance to real neurones than do linear or threshold units, but all three must be considered rough approximations.
  • 17. Application INTRODUCTION  Features of finger prints  Finger print recognition system  Why neural networks?  Goal of the system  Preprocessing system  Feature extraction using neural networks  Classification  result
  • 18. Features of finger prints Finger prints are the unique pattern of ridges and valleys in every person’s fingers. Their patterns are permanent and unchangeable for whole life of a person. They are unique and the probability that two fingerprints are alike is only 1 in 1.9x10^15. Their uniqueness is used for identification of a person.
  • 19. Finger print recognition system Image edge Ridge Thinin Feature classifi acquisiti detecti extractio g extracti cation on on n on Image acquisition: the acquired image is digitalized into 512x512 image with each pixel assigned a particular gray scale value (raster image). edge detection and thinning: these are preprocessing of the image , remove noise and enhance the image.
  • 20. Finger print recognition system Feature extraction: this the step where we point out the features such as ridge bifurcation and ridge endings of the finger print with the help of neural network. Classification: here a class label is assigned to the image depending on the extracted features.
  • 21. Why using neural networks? Neural networks enable us to find solution where algorithmic methods are computationally intensive or do not exist. There is no need to program neural networks they learn with examples. Neural networks offer significant speed advantage over conventional techniques.
  • 22. Preprocessing system The first phase of finger print recognition is to capture a image . The image is captured using total internal reflection of light (TIR). The image is stored as a two dimensional array of 512x512 size, each element of array representing a pixel and assigned a gray scale value from 256 gray scale levels.
• 23. Preprocessing system After the image is captured, noise is removed using edge detection, ridge extraction and thinning. Edge detection: an edge of the image is defined where the gray-scale level changes sharply. Also, the orientation of the ridges is determined for each 32x32 block of pixels using the gray-scale gradient. Ridge extraction: ridges are extracted using the fact that the gray-scale values of pixels are maximum along the direction normal to the ridge orientation.
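The slide does not specify the gradient formula; one common choice (assumed here, not given in the original) is the least-squares orientation estimate, sketched in MATLAB:

  % Estimate the ridge orientation of one 32x32 block from its gradient.
  block    = rand(32);                  % placeholder for an image block
  [gx, gy] = gradient(double(block));   % pixel-wise gray-scale gradients
  num   = sum(2 * gx(:) .* gy(:));      % least-squares orientation
  den   = sum(gx(:).^2 - gy(:).^2);     %   terms summed over the block
  theta = 0.5 * atan2(num, den);        % dominant ridge angle, radians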
• 24. Preprocessing system Thinning: the extracted ridges are converted into a skeletal structure in which the ridges are only one pixel wide. Thinning should not: remove isolated or surrounded pixels; break connectedness; make the image shorter.
• 25. Feature extraction using neural networks A multilayer perceptron network of three layers is trained to detect minutiae in the thinned image. The first layer has nine perceptrons, the hidden layer has five perceptrons, and the output layer has one perceptron. The network is trained to output ‘1’ when the input window is centered on a minutia and ‘0’ when no minutia is present.
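A hedged sketch of this 9-5-1 detector using the MATLAB toolbox described later; the data matrices windows and labels are hypothetical placeholders:

  % 9-5-1 minutiae detector (sketch).
  windows = rand(9, 1000);                % each column: a 3x3 window unrolled
  labels  = double(rand(1, 1000) > 0.9);  % 1 = centered on a minutia
  net = feedforwardnet(5);                % one hidden layer of 5 neurons
  net.layers{2}.transferFcn = 'logsig';   % squash the output into [0, 1]
  net = train(net, windows, labels);
  isMinutia = net(windows) > 0.5;         % threshold the output unit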
• 26. Feature extraction using neural networks The trained neural network is used to analyze the image by scanning it with a 3x3 window. To avoid falsely reported features due to noise: the size of the scanning window is increased to 5x5; if detected minutiae are too close to each other, all of them are ignored.
• 27. Classification Finger prints can be classified into four main classes depending upon their general pattern: arch, tented arch, right loop, and left loop.
• 28. Applications of Fingerprint Recognition Because a finger print recognition system can be easily embedded in any system, it is used for: recognition of criminals by law-enforcement bodies; providing security for cars, lockers, banks and shops; differentiating between people who have and have not voted in government elections; counting individuals.
• 29. Neural Network Toolbox in MATLAB Neural Network Toolbox™ provides tools for designing, implementing, visualizing, and simulating neural networks. Neural networks are used for applications where formal analysis would be difficult or impossible, such as pattern recognition and nonlinear system identification and control. Neural Network Toolbox supports feedforward networks, radial basis networks, dynamic networks, self-organizing maps, and other proven network paradigms.
  • 30. Key Features Neural network design, training, and simulation Pattern recognition, clustering, and data-fitting tools Supervised networks including feedforward, radial basis, LVQ, time delay, nonlinear autoregressive (NARX), and layer-recurrent Unsupervised networks including self-organizing maps and competitive layers Preprocessing and postprocessing for improving the efficiency of network training and assessing network performance Modular network representation for managing and visualizing networks of arbitrary size Routines for improving generalization to prevent overfitting Simulink blocks for building and evaluating neural networks, and advanced blocks for control systems applications
  • 31. Working with Neural Network Toolbox Like its counterpart in the biological nervous system, a neural network can learn and therefore can be trained to find solutions, recognize patterns, classify data, and forecast future events. The behavior of a neural network is defined by the way its individual computing elements are connected and by the strength of those connections, or weights. The weights are automatically adjusted by training the network according to a specified learning rule until it performs the desired task correctly. Neural Network Toolbox includes command-line functions and graphical tools for creating, training, and simulating neural networks. Graphical tools make it easy to develop neural networks for tasks such as data fitting (including time-series data), pattern recognition, and clustering. After creating your networks in these tools, you can automatically generate MATLAB code to capture your work and automate tasks.
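A typical command-line session might look like the following sketch, using a small demo data set shipped with the toolbox:

  [x, t] = simplefit_dataset;   % demo inputs and targets
  net = feedforwardnet(10);     % one hidden layer of 10 neurons
  net = train(net, x, t);       % training opens the monitoring window
  y = net(x);                   % simulate the trained network
  perf = perform(net, t, y);    % performance (mean squared error)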
  • 33. Network Architectures Neural Network Toolbox supports a variety of supervised and unsupervised network architectures. With the toolbox’s modular approach to building networks, you can develop custom architectures for your specific problem. You can view the network architecture including all inputs, layers, outputs, and interconnections.
  • 34. Supervised Networks Supervised neural networks are trained to produce desired outputs in response to sample inputs, making them particularly well-suited to modeling and controlling dynamic systems, classifying noisy data, and predicting future events. Neural Network Toolbox supports four types of supervised networks: Feedforward networks have one-way connections from input to output layers. They are most commonly used for prediction, pattern recognition, and nonlinear function fitting. Supported feedforward networks include feedforward backpropagation, cascade-forward backpropagation, feedforward input-delay backpropagation, linear, and perceptron networks. Radial basis networks provide an alternative, fast method for designing nonlinear feedforward networks. Supported variations include generalized regression and probabilistic neural networks. Dynamic networks use memory and recurrent feedback connections to recognize spatial and temporal patterns in data. They are commonly used for time-series prediction, nonlinear dynamic system modeling, and control systems applications. Prebuilt dynamic networks in the toolbox include focused and distributed time-delay, nonlinear autoregressive (NARX), layer-recurrent, Elman, and Hopfield networks. The toolbox also supports dynamic training of custom networks with arbitrary connections. Learning vector quantization (LVQ) is a powerful method for classifying patterns that are not linearly separable. LVQ lets you specify class boundaries and the granularity of classification.
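One constructor per supervised family, as a sketch (all argument values are illustrative):

  [x, t] = simplefit_dataset;       % demo training data
  net1 = feedforwardnet([20 10]);   % feedforward, two hidden layers
  net2 = newgrnn(x, t);             % generalized regression (radial basis)
  net3 = narxnet(1:2, 1:2, 10);     % NARX dynamic network, delays 1:2
  net4 = lvqnet(8);                 % LVQ with 8 competitive neurons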
  • 35. Unsupervised Networks Unsupervised neural networks are trained by letting the network continually adjust itself to new inputs. They find relationships within data and can automatically define classification schemes. Neural Network Toolbox supports two types of self-organizing, unsupervised networks: Competitive layers recognize and group similar input vectors, enabling them to automatically sort inputs into categories. Competitive layers are commonly used for classification and pattern recognition. Self-organizing maps learn to classify input vectors according to similarity. Like competitive layers, they are used for classification and pattern recognition tasks; however, they differ from competitive layers because they are able to preserve the topology of the input vectors, assigning nearby inputs to nearby categories.
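Both unsupervised architectures in sketch form (the input matrix is a placeholder):

  inputs = rand(2, 500);        % one column per sample
  som = selforgmap([8 8]);      % 8x8 self-organizing map
  som = train(som, inputs);     % learns a topology-preserving map
  comp = competlayer(6);        % competitive layer grouping into 6 classes
  comp = train(comp, inputs);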
  • 37. Training and Learning Functions Training and learning functions are mathematical procedures used to automatically adjust the network's weights and biases. The training function dictates a global algorithm that affects all the weights and biases of a given network. The learning function can be applied to individual weights and biases within a network. Neural Network Toolbox supports a variety of training algorithms, including several gradient descent methods, conjugate gradient methods, the Levenberg-Marquardt algorithm (LM), and the resilient backpropagation algorithm (Rprop). The toolbox’s modular framework lets you quickly develop custom training algorithms that can be integrated with built-in algorithms. While training your neural network, you can use error weights to define the relative importance of desired outputs, which can be prioritized in terms of sample, timestep (for time-series problems), output element, or any combination of these. You can access training algorithms from the command line or via a graphical tool that shows a diagram of the network being trained and provides network performance plots and status information to help you monitor the training process.
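Switching training algorithms and supplying error weights might look like this sketch:

  [x, t] = simplefit_dataset;            % demo training data
  net = feedforwardnet(10, 'trainlm');   % Levenberg-Marquardt
  net.trainFcn = 'trainrp';              % or resilient backpropagation
  ew = ones(size(t));                    % per-sample error weights
  net = train(net, x, t, [], [], ew);    % error weights: sixth argument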
  • 38. Improving Generalization Improving the network’s ability to generalize helps prevent overfitting, a common problem in neural network design. Overfitting occurs when a network has memorized the training set but has not learned to generalize to new inputs. Overfitting produces a relatively small error on the training set but a much larger error when new data is presented to the network. Neural Network Toolbox provides two solutions to improve generalization: Regularization modifies the network’s performance function (the measure of error that the training process minimizes). By including the sizes of the weights and biases, regularization produces a network that performs well with the training data and exhibits smoother behavior when presented with new data. Early stopping uses two different data sets: the training set, to update the weights and biases, and the validation set, to stop training when the network begins to overfit the data.
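Expressed as toolbox settings, the two strategies might look like this sketch (the parameter values are illustrative):

  net = feedforwardnet(10);
  % Regularization: blend weight sizes into the performance measure
  % (0 = error term only, 1 = weight term only).
  net.performParam.regularization = 0.25;
  % Early stopping: a validation set is held out by default; training
  % stops once validation error has risen for max_fail consecutive epochs.
  net.trainParam.max_fail = 6;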
• 39. Some different applications Character Recognition - The idea of character recognition has become very important as handheld devices like the Palm Pilot have become increasingly popular. Neural networks can be used to recognize handwritten characters. Image Compression - Neural networks can receive and process vast amounts of information at once, making them useful in image compression. With the explosion of the Internet and more sites using more images, neural-network image compression is worth a look.
• 40. Stock Market Prediction - The day-to-day business of the stock market is extremely complicated. Many factors weigh in whether a given stock will go up or down on any given day. Since neural networks can examine a lot of information quickly and sort it all out, they can be used to predict stock prices. Traveling Salesman Problem - Interestingly enough, neural networks can solve the traveling salesman problem, but only to a certain degree of approximation. Medicine, Electronic Nose, Security, and Loan Applications - These applications are mostly at the proof-of-concept stage; a notable exception is a neural network that decides whether or not to grant a loan, which has already been used more successfully than many humans. Miscellaneous Applications - These are some very interesting (albeit at times a little absurd) applications of neural networks.
• 41. Application principles The solution of a problem must be simple. Complicated solutions waste time and resources. If a problem can be solved with a small, easily calculated look-up table, that is a preferable solution to a complex neural network with many layers that learns with back-propagation.
• 42. Application principles Speed is crucial for computer game applications. If possible, on-line neural network solutions should be avoided, because they are big time consumers. Preferably, neural networks should be applied in an off-line fashion, where the learning phase does not happen during game-playing time.
• 43. Application principles On-line neural network solutions should be very simple. Many-layered neural networks should be avoided if possible. Complex learning algorithms should be avoided. If possible, a priori knowledge should be used to set the initial parameters so that only very short training is needed for optimal performance.
• 44. Application principles All the available data about the problem should be collected. Having redundant data is usually a smaller problem than not having the necessary data. The data should be partitioned into training, validation and testing sets.
• 45. Application principles The neural network solution of a problem should be selected from a large enough pool of potential solutions. Because of the nature of neural networks, it is likely that a single solution built in isolation will not be the optimal one. If a pool of potential solutions is generated and trained, it is more likely that one close to the optimum is found.
  • 46. Problem Problem analysis: • variables • modularisation into sub-problems • objectives • data collection
• 47. Neural network solution Data collection and organization: training, validation and testing data sets. Example: training set ~75% of the data; validation set ~10% of the data; testing set ~5% of the data.
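The slide's 75/10/5 split can be expressed through the toolbox's data-division parameters, as in this sketch (the toolbox treats the ratios as relative proportions):

  net = feedforwardnet(10);
  net.divideFcn = 'dividerand';        % random division into three sets
  net.divideParam.trainRatio = 0.75;
  net.divideParam.valRatio   = 0.10;
  net.divideParam.testRatio  = 0.05;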
• 48. Neural network solution Neural network solution selection: each candidate solution is tested with the validation data and the best-performing network is selected. [Figure: 3-D error surfaces of several candidate networks compared on the validation data.]
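A sketch of the pool-and-select procedure (the data handling is illustrative; the point is that the validation set stays fixed across candidates):

  [x, t] = simplefit_dataset;
  ival   = 1:9:numel(t);                    % ~10% held out for validation
  itrain = setdiff(1:numel(t), ival);
  best = []; bestErr = inf;
  for k = 1:10                              % pool of 10 candidates
      net = feedforwardnet(10);
      net = train(net, x(itrain), t(itrain));
      err = perform(net, t(ival), net(x(ival)));
      if err < bestErr
          bestErr = err; best = net;        % keep the best on validation
      end
  end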
• 49. Neural network solution Choosing a solution representation: the solution can be represented directly as a neural network by specifying the parameters of its neurons; alternatively, the solution can be represented as a multi-dimensional look-up table. The representation should allow fast use of the solution within the application.
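For a one-input network, freezing it into a look-up table might look like this sketch ('best' is the network selected above; the grid resolution is arbitrary):

  g     = linspace(0, 10, 256);    % sampled input range
  tab   = best(g);                 % precomputed network outputs
  yfast = interp1(g, tab, 3.7);    % run time: one interpolation, no network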
  • 50. Summary • Neural network solutions should be kept as simple as possible. • For the sake of the gaming speed neural networks should be applied preferably off-line. • A large data set should be collected and it should be divided into training, validation, and testing data. • Neural networks fit as solutions of complex problems. • A pool of candidate solutions should be generated, and the best candidate solution should be selected using the validation data. • The solution should be represented to allow fast application.