3. Introduction
• Artificial Neural Networks are computational models
inspired by biological neural networks, used for
processing large numbers of inputs that are mostly
unknown.
4. What is the Problem?
• Our world is full of data. After collection and organization,
data, if we are lucky, becomes information. In today's
interconnected world, information exists in electronic form
that can be stored and transmitted instantly. The challenge is to
understand, integrate, and apply information to generate
useful knowledge.
5. What is the Solution?
• Interpretation requires data acquisition, cleaning
(preparing the data for analysis), analysis, and
presentation in a way that permits knowledgeable
decision making and action. The key is to extract
information about the data from relationships buried
within the data itself.
10. History
• 1943: McCulloch–Pitts “neuron”
– Started the field
• 1962: Rosenblatt’s perceptron
– Learned its own weight values; convergence proof
• 1969: Minsky & Papert book on perceptrons
– Proved limitations of single-layer perceptron networks
• 1982: Hopfield and convergence in symmetric networks
– Introduced energy-function concept
• 1986: Backpropagation of errors
– Method for training multilayer networks
• Present: Probabilistic interpretations, Bayesian and spiking
networks
14. • Instead of programming a computational system
to do specific tasks, teach the system how to
perform the task
• To do this, we build an Artificial Intelligence
(AI) system
• AI systems must be adaptive, able to learn
from data on a continuous basis
22. Supervised
• The process of using the desired output for
training the NN
• It employs a teacher who assists the
network by telling it what the desired
response to a given input is
• Weights are modified according to the
required output
• Not practical in all cases
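The weight-modification idea above can be sketched with the classic perceptron learning rule. This is a minimal illustration only; the function names, learning rate, and the AND task are my own, not from the slides:

```python
# Supervised learning sketch: the perceptron rule. The "teacher"
# supplies the desired output for each input; weights are nudged
# whenever the network's answer differs from it.

def step(s, threshold=0.0):
    """Heaviside-style activation: fire (1) at or above the threshold."""
    return 1 if s >= threshold else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    """samples: list of ((x1, x2), desired_output) pairs."""
    w = [0.0, 0.0, 0.0]  # bias weight w0, then w1, w2
    for _ in range(epochs):
        for (x1, x2), desired in samples:
            out = step(w[0] + w[1] * x1 + w[2] * x2)
            err = desired - out       # the teacher's correction signal
            w[0] += lr * err          # modify weights toward the
            w[1] += lr * err * x1     # required output
            w[2] += lr * err * x2
    return w

# Learn AND from labeled (input, desired output) examples
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(and_data)
```

Because AND is linearly separable, Rosenblatt's convergence proof (slide 10) guarantees this loop settles on correct weights.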
25. Supervised Learning
• It is based on a
labeled training set.
• The class of each
piece of data in
training set is
known.
• Class labels are
pre-determined and
provided in the
training phase.
[Figure: six training samples, each labeled A or B, each tagged with
its known class (ε or λ)]
27. OR function
• The two-input perceptron can implement the OR function when
we set the weights: w0 = -0.3, w1 = w2 = 0.5

<Training examples>
x1  x2  output
0   0    -1
0   1     1
1   0     1
1   1     1

Decision hyperplane:
w0 + w1 x1 + w2 x2 = 0
-0.3 + 0.5 x1 + 0.5 x2 = 0

<Test results>
x1  x2  Σwixi  output
0   0   -0.3     -1
0   1    0.2      1
1   0    0.2      1
1   1    0.7      1

[Figure: the four input points in the (x1, x2) plane; (0, 0) is marked
"-" and the other three "+", separated by the decision line
-0.3 + 0.5 x1 + 0.5 x2 = 0]
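The slide's OR weights can be checked directly in code. This is a small sketch; the sign-style activation returning -1/+1 matches the -1/1 outputs in the truth table, and the function name is illustrative:

```python
# Two-input perceptron with the slide's fixed weights for OR:
# w0 = -0.3, w1 = w2 = 0.5. Output is +1 when the weighted sum is
# positive, else -1.

W0, W1, W2 = -0.3, 0.5, 0.5

def perceptron_or(x1, x2):
    s = W0 + W1 * x1 + W2 * x2   # left-hand side of the decision hyperplane
    return 1 if s > 0 else -1

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, perceptron_or(x1, x2))
```

Only (0, 0) falls on the negative side of the line; the other three inputs produce +1, which is exactly the OR function.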
28. Unsupervised
• No teacher required
• Similar to students learning on their
own
• Learning is driven by adaptation rules
• The adaptation rules generate error signals
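One way to make "adaptation rules without a teacher" concrete is a Hebbian-style update. The sketch below uses Oja's rule for a single linear neuron; the rule choice, names, and constants are illustrative, not from the slides:

```python
# Unsupervised adaptation sketch: Oja's variant of the Hebbian rule.
# No desired output is ever supplied; the weight grows when input and
# the neuron's own response are active together, and the decay term
# keeps it bounded.

def oja_step(w, x, lr=0.1):
    y = w * x                          # the neuron's own response
    return w + lr * y * (x - y * w)    # Hebbian growth, Oja's decay term

w = 0.5
for _ in range(200):
    w = oja_step(w, 1.0)               # repeatedly present the same input
# w settles near 1.0 without any teacher signal
```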
29. Reinforced
• A teacher is assumed to be present, but the
right answer is not given to the network
• The network is given an indication of whether
its output is right or wrong
• The network uses this indication to improve
its performance
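The right/wrong indication above can be sketched with a reward-weighted update for a stochastic binary unit. This is a simplified illustration; the environment, constants, and names are my own, not from the slides:

```python
# Reinforced learning sketch: the "teacher" only signals right (+1) or
# wrong (-1), never the correct answer itself.

import random

def choose(p):
    """Stochastic binary output: answer 1 with probability p."""
    return 1 if random.random() < p else 0

p = 0.5                              # probability the network answers 1
for _ in range(500):
    out = choose(p)
    reward = 1 if out == 1 else -1   # environment: only right/wrong
    p += 0.05 * reward * (out - p)   # reinforce rewarded behaviour
    p = min(max(p, 0.01), 0.99)      # keep p a valid probability
# p drifts toward 0.99: the network learns to answer 1 without ever
# being told that 1 is the right answer
```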
30. Types of Neuron activation function
1. Heaviside
F(s) = 1 if s ≥ T,
       0 if s < T
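The Heaviside rule above translates directly to code. The threshold T is a parameter; the example values are just illustrations:

```python
# Heaviside activation: fires 1 when the weighted input sum s reaches
# the threshold T, else 0.

def heaviside(s, T=0.0):
    return 1 if s >= T else 0

heaviside(0.7, T=0.5)   # → 1 (at or above threshold)
heaviside(0.2, T=0.5)   # → 0 (below threshold)
```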
36.
NEURAL NETWORK
APPLICATION DEVELOPMENT
The development process for an ANN application has eight steps.
• Step 1: (Data collection)
• Step 2: (Training and testing data separation) For a moderately
sized data set, 80% of the data are randomly selected for
training, 10% for testing, and 10% for secondary testing.
• Step 3: (Network architecture) Important considerations are the
exact number of perceptrons and the number of layers.
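The 80/10/10 split of Step 2 can be sketched as follows; the helper name and seed are illustrative, not from the slides:

```python
# Step 2 sketch: shuffle the data, then take 80% for training, 10% for
# testing, and the remaining 10% for secondary testing.

import random

def split_data(data, seed=0):
    rows = list(data)
    random.Random(seed).shuffle(rows)      # random selection, per Step 2
    n = len(rows)
    n_train = int(0.8 * n)
    n_test = int(0.1 * n)
    train = rows[:n_train]
    test = rows[n_train:n_train + n_test]
    secondary = rows[n_train + n_test:]    # remaining ~10%
    return train, test, secondary

train, test, secondary = split_data(range(100))
```

Shuffling before slicing is what makes the selection random; a fixed seed simply keeps the split reproducible.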
37.
• Step 4: (Parameter tuning and weight initialization)
• Step 5: (Data transformation) Transforms the application
data into the type and format required by the ANN.
• Step 6: (Training)
38.
• Step 7: (Testing)
– Testing examines the performance of the network, using
the derived weights, by measuring its ability to
classify the testing data correctly.
– Black-box testing (comparing test results to historical
results) is the primary approach for verifying that inputs
produce the appropriate outputs.
• Step 8: (Implementation) A stable set of weights
is now obtained.
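Step 7's check against the testing data can be sketched as a simple accuracy measurement. The helper and the toy OR classifier below are illustrative, not from the slides:

```python
# Step 7 sketch: treat the trained network as a black box and measure
# the fraction of test examples it classifies correctly.

def accuracy(predict, test_set):
    """test_set: list of (inputs, expected_label) pairs."""
    correct = sum(1 for x, label in test_set if predict(x) == label)
    return correct / len(test_set)

# Toy check using the OR weights from slide 27 (w0=-0.3, w1=w2=0.5)
predict = lambda x: 1 if -0.3 + 0.5 * x[0] + 0.5 * x[1] > 0 else -1
tests = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
print(accuracy(predict, tests))
```

Only the network's inputs and outputs are inspected here, which is the black-box approach the slide describes.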