Face recognition is performed with a Raspberry Pi mounted on a quadcopter. The code is written in C++ and uses PCA for facial recognition. An A4Tech 16-megapixel USB camera captures the images, and a TP-Link WN722N adapter provides the Wi-Fi link.
FYP
1. Face Recognition via Quadcopter Using Raspberry Pi
Furqan Arshad 101519065
Hassam Umer 101519087
Zain ul Abidin 101519***
Advisor: Muhammad Ilyas Khan
2. Face recognition via quadcopter using Raspberry Pi
System block diagram: Capturing Image → Face Detector → Detected Face → RGB to Greyscale → Face Recognition System (queries the Face Database) → Found!
9. Methodology
• Learning image-processing techniques for the enhancement of images.
• Learning and implementing face detection using the Viola-Jones algorithm.
• Building photo-editor software with OpenCV, with controls for brightness, contrast, sharpness, histogram equalization, and saturation.
• Face recognition using the PCA algorithm, tested on the Yale face database.
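The photo-editor bullet above reduces to simple per-pixel arithmetic. As a minimal sketch of the brightness/contrast feature, assuming a plain greyscale byte buffer in place of the project's OpenCV images (the names `adjust` and `clampPixel` are illustrative, not from the project):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Clamp an intermediate result back into the valid 8-bit pixel range.
static std::uint8_t clampPixel(int v) {
    return static_cast<std::uint8_t>(std::max(0, std::min(255, v)));
}

// Linear brightness/contrast: out = contrast * in + brightness.
// contrast > 1 spreads pixel values apart; brightness shifts them up or down.
std::vector<std::uint8_t> adjust(const std::vector<std::uint8_t>& img,
                                 double contrast, int brightness) {
    std::vector<std::uint8_t> out(img.size());
    for (std::size_t i = 0; i < img.size(); ++i)
        out[i] = clampPixel(static_cast<int>(contrast * img[i]) + brightness);
    return out;
}
```

Clamping keeps the result in 0-255 even when the linear transform overflows the byte range.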
11. Literature Review
Face Recognition using PCA
• What is PCA?
• Why and where is it used?
• What is a principal component / eigenface?
• What are the benefits of dimensionality reduction?
• How are the results of PCA measured?
12. Why and Where is PCA Used?
• A multivariate dataset (a set of images) can be visualized as a set of coordinates in a high-dimensional data space.
• PCA can supply the user with a lower-dimensional picture, a "shadow" of this object.
13. What is PCA and its Relation to Face Recognition?
• Principal component analysis (PCA) is a mathematical procedure that uses an orthogonal transformation to convert a set of possibly correlated M variables (a set of images) into a set of values of K uncorrelated variables called principal components (eigenfaces).
• The number of principal components is always less than or equal to the number of original variables, i.e. K ≤ M.
14. PCA and its Relation to Face Recognition
• This transformation is defined in such a way that the first principal component captures the most dominant "features" of the dataset, and each succeeding component in turn captures the next most dominant "features", under the constraint that it be uncorrelated with the preceding components.
• To reduce the calculations needed to find these principal components, the dimensionality of the original dataset is reduced before they are computed.
15. • Since principal components show the "directions" of the data, and each succeeding component captures less "direction" and more "noise", only the first few principal components (say K) are kept and the remaining components are discarded.
• These K principal components can safely represent the whole original dataset, because they depict the major features that make up the dataset.
17. • Each variable in the original dataset can be represented in terms of these principal components.
• Representing a data point by its PCA weights reduces the number of values needed to recognize it.
• This makes the recognition process faster and less error-prone.
(Slide figure: sample images with their weight vectors and assigned weights.)
18. How is PCA Done?
A face is reconstructed as the mean image plus a weighted sum of eigenfaces:
Face ≈ Ψ + w1·u1 + w2·u2 + … + wK·uK
and the weights are collected into the vector Ω = [w1, w2, …, wK]^T.
19. The PCA Face Recognition Algorithm
How does it work?
20. Training the recognizer
Step 1: Convert the face images in the training set to face vectors.
The training set consists of a total of M images. For each image in the training set, the N x N image is reshaped into an N^2 x 1 column vector; together these vectors form the face-vector space.
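Step 1 above is just a reshape. A sketch under the assumption that an image is a row-major collection of pixel rows (the project uses OpenCV matrices; `toFaceVector` is an illustrative name, not project code):

```cpp
#include <vector>

// Flatten an N x N greyscale image (row-major rows of pixels) into an
// N^2 x 1 face vector by concatenating the rows.
std::vector<double> toFaceVector(const std::vector<std::vector<double>>& image) {
    std::vector<double> v;
    v.reserve(image.size() * image.size());
    for (const auto& row : image)
        for (double px : row)
            v.push_back(px);
    return v;
}
```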
21. Training the recognizer
Step 2: Normalize the face vectors.
Calculate the average face vector Ψ, then subtract this mean face vector from each face vector Γi to get the normalized face vectors:
Φi = Γi − Ψ
A = [Φ1, Φ2, Φ3, Φ4, … ΦM]
The covariance matrix of the normalized set is C = A·A^T.
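Step 2 can be sketched as follows, assuming each face vector is a plain `std::vector<double>` (the function name `normalizeFaces` is illustrative, not the project's code):

```cpp
#include <vector>

// Step 2: compute the mean face vector Psi over the M face vectors, then
// subtract it from each vector to get the normalized vectors Phi_i.
// Returns A = [Phi_1 ... Phi_M] as a list of columns.
std::vector<std::vector<double>> normalizeFaces(
        const std::vector<std::vector<double>>& faces) {
    const std::size_t m = faces.size(), n = faces[0].size();

    std::vector<double> psi(n, 0.0);              // mean face vector Psi
    for (const auto& f : faces)
        for (std::size_t j = 0; j < n; ++j) psi[j] += f[j];
    for (double& x : psi) x /= static_cast<double>(m);

    std::vector<std::vector<double>> a(faces);    // Phi_i = Gamma_i - Psi
    for (auto& f : a)
        for (std::size_t j = 0; j < n; ++j) f[j] -= psi[j];
    return a;
}
```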
22. Training the recognizer
The covariance matrix C = A·A^T has dimensions (N^2 x M)·(M x N^2) = N^2 x N^2, i.e. 2500 x 2500 if N = 50, giving 2500 eigenvectors, each of dimension 2500 x 1.
23. Training the recognizer
The need for dimensionality reduction: with 2500 eigenvectors of dimension 2500 x 1 to compute from C = A·A^T, the required computations are huge. Warning: the system may slow down or run out of memory.
24. Training the recognizer
Step 3: Reduce the dimensionality.
Solution: dimensionality reduction. To reduce the calculation and the effect of noise on the needed eigenvectors, calculate them from a covariance matrix of reduced dimensionality rather than from C = A·A^T, whose dimensions are (N^2 x M)·(M x N^2) = N^2 x N^2.
25. Training the recognizer
Step 3: Reduce the dimensionality.
C = A·A^T: (N^2 x M)·(M x N^2) = N^2 x N^2, i.e. 2500 eigenvectors, each 2500 x 1
versus
L = A^T·A: (M x N^2)·(N^2 x M) = M x M
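The comparison above can be made concrete: L = A^T·A only needs dot products between pairs of normalized face vectors, so the result is M x M no matter how large N^2 is. A minimal sketch, assuming A is stored as M columns of length N^2 (the name `reducedCovariance` is my own, not from the project):

```cpp
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Given A stored as M columns (each column one normalized face vector
// Phi_i of length N^2), build the reduced covariance matrix L = A^T * A.
// Entry (i, j) is the dot product of Phi_i and Phi_j, so L is only M x M.
Matrix reducedCovariance(const Matrix& columns) {
    const std::size_t m = columns.size();
    Matrix l(m, std::vector<double>(m, 0.0));
    for (std::size_t i = 0; i < m; ++i)
        for (std::size_t j = 0; j < m; ++j)
            for (std::size_t k = 0; k < columns[i].size(); ++k)
                l[i][j] += columns[i][k] * columns[j][k];
    return l;
}
```

For M = 100 training images of 50 x 50 pixels, L is 100 x 100 rather than the 2500 x 2500 of C = A·A^T.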
26. Training the recognizer
Step 4: Calculate the eigenvectors from the covariance matrix L = A^T·A, which is (M x N^2)·(N^2 x M) = M x M. This lower-dimensional subspace replaces the N^2 x N^2 matrix C = A·A^T with its 2500 eigenvectors of dimension 2500 x 1.
27. Training the recognizer
Step 5: Select the K best eigenfaces such that K < M.
Each eigenvector vi of L (in the lower-dimensional subspace) maps to an eigenface of the full space via ui = A·vi: if L·vi = λi·vi, then C·(A·vi) = λi·(A·vi), so the full-dimensional eigenfaces are recovered from the M training images.
28. Training the recognizer
Step 5: Select the K best eigenfaces such that K < M.
From the M training images (converted to face vectors), the K best eigenfaces are selected.
29. How is PCA Done?
A face is reconstructed as the mean image plus a weighted sum of eigenfaces:
Face ≈ Ψ + w1·u1 + w2·u2 + … + wK·uK
with weight vector Ω = [w1, w2, …, wK]^T.
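The weights in the recap above come from projecting the mean-subtracted input onto each eigenface, w_i = u_i · Φ. A minimal sketch, assuming the eigenfaces and Φ are plain vectors (`projectToWeights` is an illustrative name, not the project's code):

```cpp
#include <vector>

// Project a mean-subtracted face vector Phi onto K eigenfaces u_1..u_K
// to obtain the weight vector Omega = [w_1 ... w_K], where w_i = u_i . Phi.
std::vector<double> projectToWeights(
        const std::vector<std::vector<double>>& eigenfaces,
        const std::vector<double>& phi) {
    std::vector<double> omega;
    omega.reserve(eigenfaces.size());
    for (const auto& u : eigenfaces) {
        double w = 0.0;
        for (std::size_t j = 0; j < phi.size(); ++j) w += u[j] * phi[j];
        omega.push_back(w);
    }
    return omega;
}
```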
30. Recognition flow:
• Convert the input image to a face vector.
• Normalize the face vector and project it onto the eigenspace.
• Obtain the weight vector of the input image.
• Calculate the distance between the input weight vector and all the stored weight vectors.
• If the distance exceeds the threshold, the face is recognized as an unknown person; otherwise it is matched to the closest known person.
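The final decision step can be sketched as a nearest-neighbour search over the stored weight vectors, assuming Euclidean distance and an application-chosen threshold (the slides fix neither; `recognize` is an illustrative name, not project code):

```cpp
#include <cmath>
#include <limits>
#include <vector>

// Compare the input image's weight vector Omega against every stored weight
// vector; return the index of the closest person, or -1 ("unknown") when
// even the best match is farther away than the threshold.
int recognize(const std::vector<std::vector<double>>& knownWeights,
              const std::vector<double>& omega, double threshold) {
    int best = -1;
    double bestDist = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < knownWeights.size(); ++i) {
        double d2 = 0.0;
        for (std::size_t j = 0; j < omega.size(); ++j) {
            double diff = knownWeights[i][j] - omega[j];
            d2 += diff * diff;
        }
        double d = std::sqrt(d2);
        if (d < bestDist) { bestDist = d; best = static_cast<int>(i); }
    }
    return bestDist > threshold ? -1 : best;
}
```

The threshold trades false accepts against false rejects and is normally tuned on a held-out set.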